Jan 16 08:58:01.173752 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 16 08:58:01.173787 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 16 08:58:01.173805 kernel: BIOS-provided physical RAM map:
Jan 16 08:58:01.173815 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 16 08:58:01.173838 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 16 08:58:01.173848 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 16 08:58:01.173860 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 16 08:58:01.173870 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 16 08:58:01.173880 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 16 08:58:01.173896 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 16 08:58:01.173906 kernel: NX (Execute Disable) protection: active
Jan 16 08:58:01.173916 kernel: APIC: Static calls initialized
Jan 16 08:58:01.173931 kernel: SMBIOS 2.8 present.
Jan 16 08:58:01.173942 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 16 08:58:01.173954 kernel: Hypervisor detected: KVM
Jan 16 08:58:01.173970 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 16 08:58:01.173986 kernel: kvm-clock: using sched offset of 3343903751 cycles
Jan 16 08:58:01.173999 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 16 08:58:01.174011 kernel: tsc: Detected 2494.140 MHz processor
Jan 16 08:58:01.174023 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 16 08:58:01.174035 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 16 08:58:01.174047 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 16 08:58:01.174059 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 16 08:58:01.174070 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 16 08:58:01.174086 kernel: ACPI: Early table checksum verification disabled
Jan 16 08:58:01.174098 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 16 08:58:01.174110 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:01.174122 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:01.174134 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:01.174145 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 16 08:58:01.174157 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:01.174168 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:01.174180 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:01.174195 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 08:58:01.174207 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 16 08:58:01.174218 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 16 08:58:01.174230 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 16 08:58:01.174242 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 16 08:58:01.174253 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 16 08:58:01.174265 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 16 08:58:01.174285 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 16 08:58:01.174297 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 16 08:58:01.174310 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 16 08:58:01.174323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 16 08:58:01.174335 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 16 08:58:01.174350 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 16 08:58:01.174363 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 16 08:58:01.174379 kernel: Zone ranges:
Jan 16 08:58:01.174391 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 16 08:58:01.174403 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 16 08:58:01.174416 kernel: Normal empty
Jan 16 08:58:01.174428 kernel: Movable zone start for each node
Jan 16 08:58:01.174440 kernel: Early memory node ranges
Jan 16 08:58:01.174453 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 16 08:58:01.174465 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 16 08:58:01.174477 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 16 08:58:01.174493 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 16 08:58:01.174505 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 16 08:58:01.174520 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 16 08:58:01.174532 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 16 08:58:01.174545 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 16 08:58:01.174558 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 16 08:58:01.174570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 16 08:58:01.174583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 16 08:58:01.174595 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 16 08:58:01.174612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 16 08:58:01.174624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 16 08:58:01.174636 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 16 08:58:01.174649 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 16 08:58:01.174661 kernel: TSC deadline timer available
Jan 16 08:58:01.174673 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 16 08:58:01.174686 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 16 08:58:01.174698 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 16 08:58:01.174715 kernel: Booting paravirtualized kernel on KVM
Jan 16 08:58:01.174731 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 16 08:58:01.174743 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 16 08:58:01.174755 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 16 08:58:01.174768 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 16 08:58:01.174780 kernel: pcpu-alloc: [0] 0 1
Jan 16 08:58:01.174792 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 16 08:58:01.174806 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 16 08:58:01.175022 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 16 08:58:01.175052 kernel: random: crng init done
Jan 16 08:58:01.175065 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 08:58:01.175078 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 16 08:58:01.175091 kernel: Fallback order for Node 0: 0
Jan 16 08:58:01.175105 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 16 08:58:01.175118 kernel: Policy zone: DMA32
Jan 16 08:58:01.175131 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 08:58:01.175144 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Jan 16 08:58:01.175157 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 08:58:01.175174 kernel: Kernel/User page tables isolation: enabled
Jan 16 08:58:01.175187 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 16 08:58:01.175200 kernel: ftrace: allocated 149 pages with 4 groups
Jan 16 08:58:01.175229 kernel: Dynamic Preempt: voluntary
Jan 16 08:58:01.175242 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 08:58:01.175256 kernel: rcu: RCU event tracing is enabled.
Jan 16 08:58:01.175269 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 08:58:01.175282 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 08:58:01.175296 kernel: Rude variant of Tasks RCU enabled.
Jan 16 08:58:01.175313 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 08:58:01.175326 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 08:58:01.175339 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 08:58:01.175352 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 16 08:58:01.175366 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 08:58:01.175386 kernel: Console: colour VGA+ 80x25
Jan 16 08:58:01.175399 kernel: printk: console [tty0] enabled
Jan 16 08:58:01.175412 kernel: printk: console [ttyS0] enabled
Jan 16 08:58:01.175425 kernel: ACPI: Core revision 20230628
Jan 16 08:58:01.175439 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 16 08:58:01.175456 kernel: APIC: Switch to symmetric I/O mode setup
Jan 16 08:58:01.175469 kernel: x2apic enabled
Jan 16 08:58:01.175482 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 16 08:58:01.175495 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 16 08:58:01.175509 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 16 08:58:01.175522 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jan 16 08:58:01.175536 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 16 08:58:01.175549 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 16 08:58:01.175578 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 16 08:58:01.175591 kernel: Spectre V2 : Mitigation: Retpolines
Jan 16 08:58:01.175606 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 16 08:58:01.175624 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 16 08:58:01.175638 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 16 08:58:01.175652 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 16 08:58:01.175666 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 16 08:58:01.175680 kernel: MDS: Mitigation: Clear CPU buffers
Jan 16 08:58:01.175695 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 16 08:58:01.175716 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 16 08:58:01.175731 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 16 08:58:01.175745 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 16 08:58:01.175759 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 16 08:58:01.175773 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 16 08:58:01.175787 kernel: Freeing SMP alternatives memory: 32K
Jan 16 08:58:01.175801 kernel: pid_max: default: 32768 minimum: 301
Jan 16 08:58:01.175815 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 08:58:01.175878 kernel: landlock: Up and running.
Jan 16 08:58:01.175891 kernel: SELinux: Initializing.
Jan 16 08:58:01.175905 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 08:58:01.175919 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 16 08:58:01.175932 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 16 08:58:01.175945 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:58:01.175959 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:58:01.175973 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 08:58:01.175990 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 16 08:58:01.176004 kernel: signal: max sigframe size: 1776
Jan 16 08:58:01.176017 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 08:58:01.176031 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 08:58:01.176044 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 16 08:58:01.176058 kernel: smp: Bringing up secondary CPUs ...
Jan 16 08:58:01.176071 kernel: smpboot: x86: Booting SMP configuration:
Jan 16 08:58:01.176085 kernel: .... node #0, CPUs: #1
Jan 16 08:58:01.176098 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 08:58:01.176115 kernel: smpboot: Max logical packages: 1
Jan 16 08:58:01.176133 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jan 16 08:58:01.176146 kernel: devtmpfs: initialized
Jan 16 08:58:01.176160 kernel: x86/mm: Memory block size: 128MB
Jan 16 08:58:01.176173 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 08:58:01.176187 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 08:58:01.176200 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 08:58:01.176214 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 08:58:01.176227 kernel: audit: initializing netlink subsys (disabled)
Jan 16 08:58:01.176240 kernel: audit: type=2000 audit(1737017879.621:1): state=initialized audit_enabled=0 res=1
Jan 16 08:58:01.176257 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 08:58:01.176271 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 16 08:58:01.176284 kernel: cpuidle: using governor menu
Jan 16 08:58:01.176297 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 08:58:01.176310 kernel: dca service started, version 1.12.1
Jan 16 08:58:01.176323 kernel: PCI: Using configuration type 1 for base access
Jan 16 08:58:01.176337 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 16 08:58:01.176350 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 08:58:01.176364 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 08:58:01.176381 kernel: ACPI: Added _OSI(Module Device)
Jan 16 08:58:01.176394 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 08:58:01.176407 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 16 08:58:01.176421 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 08:58:01.176434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 08:58:01.176458 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 16 08:58:01.176472 kernel: ACPI: Interpreter enabled
Jan 16 08:58:01.176485 kernel: ACPI: PM: (supports S0 S5)
Jan 16 08:58:01.176499 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 16 08:58:01.176516 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 16 08:58:01.176530 kernel: PCI: Using E820 reservations for host bridge windows
Jan 16 08:58:01.176543 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 16 08:58:01.176557 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 08:58:01.176847 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 08:58:01.177004 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 16 08:58:01.177132 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 16 08:58:01.177156 kernel: acpiphp: Slot [3] registered
Jan 16 08:58:01.177170 kernel: acpiphp: Slot [4] registered
Jan 16 08:58:01.177184 kernel: acpiphp: Slot [5] registered
Jan 16 08:58:01.177215 kernel: acpiphp: Slot [6] registered
Jan 16 08:58:01.177229 kernel: acpiphp: Slot [7] registered
Jan 16 08:58:01.177242 kernel: acpiphp: Slot [8] registered
Jan 16 08:58:01.177257 kernel: acpiphp: Slot [9] registered
Jan 16 08:58:01.177270 kernel: acpiphp: Slot [10] registered
Jan 16 08:58:01.177284 kernel: acpiphp: Slot [11] registered
Jan 16 08:58:01.177303 kernel: acpiphp: Slot [12] registered
Jan 16 08:58:01.177317 kernel: acpiphp: Slot [13] registered
Jan 16 08:58:01.177330 kernel: acpiphp: Slot [14] registered
Jan 16 08:58:01.177345 kernel: acpiphp: Slot [15] registered
Jan 16 08:58:01.177358 kernel: acpiphp: Slot [16] registered
Jan 16 08:58:01.177372 kernel: acpiphp: Slot [17] registered
Jan 16 08:58:01.177385 kernel: acpiphp: Slot [18] registered
Jan 16 08:58:01.177399 kernel: acpiphp: Slot [19] registered
Jan 16 08:58:01.177412 kernel: acpiphp: Slot [20] registered
Jan 16 08:58:01.177426 kernel: acpiphp: Slot [21] registered
Jan 16 08:58:01.177445 kernel: acpiphp: Slot [22] registered
Jan 16 08:58:01.177460 kernel: acpiphp: Slot [23] registered
Jan 16 08:58:01.177474 kernel: acpiphp: Slot [24] registered
Jan 16 08:58:01.179981 kernel: acpiphp: Slot [25] registered
Jan 16 08:58:01.180010 kernel: acpiphp: Slot [26] registered
Jan 16 08:58:01.180024 kernel: acpiphp: Slot [27] registered
Jan 16 08:58:01.180038 kernel: acpiphp: Slot [28] registered
Jan 16 08:58:01.180051 kernel: acpiphp: Slot [29] registered
Jan 16 08:58:01.180065 kernel: acpiphp: Slot [30] registered
Jan 16 08:58:01.180087 kernel: acpiphp: Slot [31] registered
Jan 16 08:58:01.180101 kernel: PCI host bridge to bus 0000:00
Jan 16 08:58:01.180363 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 16 08:58:01.180512 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 16 08:58:01.180626 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 16 08:58:01.180737 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 16 08:58:01.180872 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 16 08:58:01.180982 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 08:58:01.181179 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 16 08:58:01.181325 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 16 08:58:01.181467 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 16 08:58:01.181591 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 16 08:58:01.181716 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 16 08:58:01.181901 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 16 08:58:01.182034 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 16 08:58:01.182173 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 16 08:58:01.182341 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 16 08:58:01.182496 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 16 08:58:01.182644 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 16 08:58:01.182773 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 16 08:58:01.183161 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 16 08:58:01.183318 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 16 08:58:01.183473 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 16 08:58:01.183610 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 16 08:58:01.183738 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 16 08:58:01.183892 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 16 08:58:01.184020 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 16 08:58:01.184176 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 16 08:58:01.184307 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 16 08:58:01.184466 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 16 08:58:01.184592 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 16 08:58:01.184763 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 16 08:58:01.184903 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 16 08:58:01.185028 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 16 08:58:01.185162 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 16 08:58:01.185315 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 16 08:58:01.185441 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 16 08:58:01.185564 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 16 08:58:01.185686 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 16 08:58:01.185942 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 16 08:58:01.186072 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 16 08:58:01.186194 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 16 08:58:01.186322 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 16 08:58:01.186456 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 16 08:58:01.186579 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 16 08:58:01.186699 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 16 08:58:01.186893 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 16 08:58:01.187034 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 16 08:58:01.187161 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 16 08:58:01.187296 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 16 08:58:01.187315 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 16 08:58:01.187330 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 16 08:58:01.187345 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 16 08:58:01.187359 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 16 08:58:01.187373 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 16 08:58:01.187387 kernel: iommu: Default domain type: Translated
Jan 16 08:58:01.187407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 16 08:58:01.187421 kernel: PCI: Using ACPI for IRQ routing
Jan 16 08:58:01.187435 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 16 08:58:01.187449 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 16 08:58:01.187463 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 16 08:58:01.187597 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 16 08:58:01.187722 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 16 08:58:01.188336 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 16 08:58:01.188368 kernel: vgaarb: loaded
Jan 16 08:58:01.188384 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 16 08:58:01.188405 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 16 08:58:01.188419 kernel: clocksource: Switched to clocksource kvm-clock
Jan 16 08:58:01.188433 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 08:58:01.188465 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 08:58:01.188479 kernel: pnp: PnP ACPI init
Jan 16 08:58:01.188493 kernel: pnp: PnP ACPI: found 4 devices
Jan 16 08:58:01.188506 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 16 08:58:01.188520 kernel: NET: Registered PF_INET protocol family
Jan 16 08:58:01.188534 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 08:58:01.188552 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 16 08:58:01.188566 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 08:58:01.188579 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 16 08:58:01.188593 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 16 08:58:01.188607 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 16 08:58:01.188622 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 08:58:01.188636 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 16 08:58:01.188650 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 08:58:01.188669 kernel: NET: Registered PF_XDP protocol family
Jan 16 08:58:01.188815 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 16 08:58:01.188945 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 08:58:01.189057 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 16 08:58:01.189169 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 16 08:58:01.189277 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 16 08:58:01.189412 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 16 08:58:01.189544 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 16 08:58:01.189568 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 16 08:58:01.189698 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 40254 usecs
Jan 16 08:58:01.189716 kernel: PCI: CLS 0 bytes, default 64
Jan 16 08:58:01.189731 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 16 08:58:01.189746 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 16 08:58:01.189760 kernel: Initialise system trusted keyrings
Jan 16 08:58:01.189773 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 16 08:58:01.189787 kernel: Key type asymmetric registered
Jan 16 08:58:01.189800 kernel: Asymmetric key parser 'x509' registered
Jan 16 08:58:01.189818 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 16 08:58:01.189892 kernel: io scheduler mq-deadline registered
Jan 16 08:58:01.189906 kernel: io scheduler kyber registered
Jan 16 08:58:01.189920 kernel: io scheduler bfq registered
Jan 16 08:58:01.189933 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 16 08:58:01.189947 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 16 08:58:01.189961 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 16 08:58:01.189974 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 16 08:58:01.189988 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 08:58:01.190006 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 16 08:58:01.190020 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 16 08:58:01.190033 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 16 08:58:01.190047 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 16 08:58:01.190060 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 16 08:58:01.190230 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 16 08:58:01.190349 kernel: rtc_cmos 00:03: registered as rtc0
Jan 16 08:58:01.190462 kernel: rtc_cmos 00:03: setting system clock to 2025-01-16T08:58:00 UTC (1737017880)
Jan 16 08:58:01.190579 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 16 08:58:01.190596 kernel: intel_pstate: CPU model not supported
Jan 16 08:58:01.190610 kernel: NET: Registered PF_INET6 protocol family
Jan 16 08:58:01.190623 kernel: Segment Routing with IPv6
Jan 16 08:58:01.190637 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 08:58:01.190650 kernel: NET: Registered PF_PACKET protocol family
Jan 16 08:58:01.190664 kernel: Key type dns_resolver registered
Jan 16 08:58:01.190677 kernel: IPI shorthand broadcast: enabled
Jan 16 08:58:01.190690 kernel: sched_clock: Marking stable (1273006358, 125682992)->(1446585741, -47896391)
Jan 16 08:58:01.190707 kernel: registered taskstats version 1
Jan 16 08:58:01.190721 kernel: Loading compiled-in X.509 certificates
Jan 16 08:58:01.190734 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 16 08:58:01.190748 kernel: Key type .fscrypt registered
Jan 16 08:58:01.190761 kernel: Key type fscrypt-provisioning registered
Jan 16 08:58:01.190775 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 08:58:01.190788 kernel: ima: Allocated hash algorithm: sha1
Jan 16 08:58:01.190801 kernel: ima: No architecture policies found
Jan 16 08:58:01.190817 kernel: clk: Disabling unused clocks
Jan 16 08:58:01.190842 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 16 08:58:01.190856 kernel: Write protecting the kernel read-only data: 36864k
Jan 16 08:58:01.190894 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 16 08:58:01.190912 kernel: Run /init as init process
Jan 16 08:58:01.190927 kernel: with arguments:
Jan 16 08:58:01.190941 kernel: /init
Jan 16 08:58:01.190955 kernel: with environment:
Jan 16 08:58:01.190969 kernel: HOME=/
Jan 16 08:58:01.190983 kernel: TERM=linux
Jan 16 08:58:01.191000 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 16 08:58:01.191019 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 08:58:01.191036 systemd[1]: Detected virtualization kvm.
Jan 16 08:58:01.191051 systemd[1]: Detected architecture x86-64.
Jan 16 08:58:01.191065 systemd[1]: Running in initrd.
Jan 16 08:58:01.191079 systemd[1]: No hostname configured, using default hostname.
Jan 16 08:58:01.191099 systemd[1]: Hostname set to <localhost>.
Jan 16 08:58:01.191118 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 08:58:01.191133 systemd[1]: Queued start job for default target initrd.target.
Jan 16 08:58:01.191147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 08:58:01.191163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 08:58:01.191178 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 08:58:01.191193 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 08:58:01.191208 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 08:58:01.191223 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 08:58:01.191245 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 08:58:01.191260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 08:58:01.191275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 08:58:01.191289 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 08:58:01.191304 systemd[1]: Reached target paths.target - Path Units.
Jan 16 08:58:01.191319 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 08:58:01.191335 systemd[1]: Reached target swap.target - Swaps.
Jan 16 08:58:01.191354 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 08:58:01.191369 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 08:58:01.191384 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 08:58:01.191399 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 08:58:01.191414 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 08:58:01.191433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 08:58:01.191448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 08:58:01.191463 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 08:58:01.191478 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 08:58:01.191493 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 08:58:01.191507 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 08:58:01.191522 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 08:58:01.191537 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 08:58:01.191552 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 08:58:01.191570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 08:58:01.191585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:58:01.191601 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 08:58:01.191620 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 08:58:01.191634 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 08:58:01.191650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 08:58:01.191669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 08:58:01.191719 systemd-journald[182]: Collecting audit messages is disabled.
Jan 16 08:58:01.191755 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 08:58:01.191771 systemd-journald[182]: Journal started
Jan 16 08:58:01.191802 systemd-journald[182]: Runtime Journal (/run/log/journal/1bf83e6151d6461e91e16a5facd30030) is 4.9M, max 39.3M, 34.4M free.
Jan 16 08:58:01.194917 systemd-modules-load[184]: Inserted module 'overlay'
Jan 16 08:58:01.244694 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 08:58:01.246320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:01.253440 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 08:58:01.253476 kernel: Bridge firewalling registered
Jan 16 08:58:01.252136 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 16 08:58:01.254215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 08:58:01.267231 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:58:01.277467 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 08:58:01.283084 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 08:58:01.284033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 08:58:01.297999 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 08:58:01.314621 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:58:01.322404 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 08:58:01.324503 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 08:58:01.336592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 08:58:01.361610 dracut-cmdline[216]: dracut-dracut-053
Jan 16 08:58:01.366863 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 16 08:58:01.387473 systemd-resolved[219]: Positive Trust Anchors:
Jan 16 08:58:01.388423 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 08:58:01.388489 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 08:58:01.396156 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jan 16 08:58:01.399263 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 08:58:01.400068 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 08:58:01.532714 kernel: SCSI subsystem initialized
Jan 16 08:58:01.547890 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 08:58:01.562997 kernel: iscsi: registered transport (tcp)
Jan 16 08:58:01.593144 kernel: iscsi: registered transport (qla4xxx)
Jan 16 08:58:01.593258 kernel: QLogic iSCSI HBA Driver
Jan 16 08:58:01.694881 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 08:58:01.709731 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 08:58:01.754453 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 08:58:01.754580 kernel: device-mapper: uevent: version 1.0.3
Jan 16 08:58:01.755231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 08:58:01.822876 kernel: raid6: avx2x4 gen() 12361 MB/s
Jan 16 08:58:01.843816 kernel: raid6: avx2x2 gen() 12861 MB/s
Jan 16 08:58:01.860381 kernel: raid6: avx2x1 gen() 12616 MB/s
Jan 16 08:58:01.860575 kernel: raid6: using algorithm avx2x2 gen() 12861 MB/s
Jan 16 08:58:01.880078 kernel: raid6: .... xor() 12858 MB/s, rmw enabled
Jan 16 08:58:01.880213 kernel: raid6: using avx2x2 recovery algorithm
Jan 16 08:58:01.917037 kernel: xor: automatically using best checksumming function avx
Jan 16 08:58:02.164880 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 08:58:02.192157 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 08:58:02.203602 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 08:58:02.239839 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 16 08:58:02.248974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 08:58:02.260128 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 16 08:58:02.289941 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jan 16 08:58:02.345305 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 08:58:02.352135 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 08:58:02.447691 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 08:58:02.455349 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 08:58:02.489180 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 08:58:02.491887 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 08:58:02.493040 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 08:58:02.495049 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 08:58:02.505265 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 08:58:02.541743 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 08:58:02.555858 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 16 08:58:02.626285 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 16 08:58:02.626463 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 08:58:02.626483 kernel: GPT:9289727 != 125829119
Jan 16 08:58:02.626500 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 08:58:02.626517 kernel: GPT:9289727 != 125829119
Jan 16 08:58:02.626534 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 08:58:02.626552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:58:02.626583 kernel: scsi host0: Virtio SCSI HBA
Jan 16 08:58:02.626757 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 16 08:58:02.670256 kernel: cryptd: max_cpu_qlen set to 1000
Jan 16 08:58:02.670289 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Jan 16 08:58:02.651220 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 08:58:02.651457 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:58:02.652411 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:58:02.653118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 08:58:02.653381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:02.653914 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:58:02.661225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:58:02.684230 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 16 08:58:02.684314 kernel: AES CTR mode by8 optimization enabled
Jan 16 08:58:02.722870 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453)
Jan 16 08:58:02.730852 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (458)
Jan 16 08:58:02.745880 kernel: ACPI: bus type USB registered
Jan 16 08:58:02.746844 kernel: usbcore: registered new interface driver usbfs
Jan 16 08:58:02.746887 kernel: usbcore: registered new interface driver hub
Jan 16 08:58:02.746906 kernel: usbcore: registered new device driver usb
Jan 16 08:58:02.764745 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 16 08:58:02.834009 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 16 08:58:02.834395 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 16 08:58:02.834674 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 16 08:58:02.834903 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 16 08:58:02.835127 kernel: hub 1-0:1.0: USB hub found
Jan 16 08:58:02.835383 kernel: hub 1-0:1.0: 2 ports detected
Jan 16 08:58:02.835597 kernel: libata version 3.00 loaded.
Jan 16 08:58:02.835622 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 16 08:58:02.835852 kernel: scsi host1: ata_piix
Jan 16 08:58:02.836066 kernel: scsi host2: ata_piix
Jan 16 08:58:02.836269 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 16 08:58:02.836296 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 16 08:58:02.834495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:02.842584 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 16 08:58:02.848649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 16 08:58:02.849451 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 16 08:58:02.868030 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 08:58:02.877262 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 08:58:02.882168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 08:58:02.920902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:58:02.921955 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:58:02.923722 disk-uuid[541]: Primary Header is updated.
Jan 16 08:58:02.923722 disk-uuid[541]: Secondary Entries is updated.
Jan 16 08:58:02.923722 disk-uuid[541]: Secondary Header is updated.
Jan 16 08:58:03.946992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 16 08:58:03.948151 disk-uuid[550]: The operation has completed successfully.
Jan 16 08:58:04.019032 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 08:58:04.019236 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 08:58:04.024256 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 16 08:58:04.044072 sh[563]: Success
Jan 16 08:58:04.061872 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 16 08:58:04.134225 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 08:58:04.138059 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 16 08:58:04.138721 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 16 08:58:04.172110 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 16 08:58:04.172178 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:58:04.172191 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 16 08:58:04.174478 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 08:58:04.174583 kernel: BTRFS info (device dm-0): using free space tree
Jan 16 08:58:04.183685 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 16 08:58:04.185470 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 08:58:04.191170 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 08:58:04.194343 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 16 08:58:04.207867 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 08:58:04.207978 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:58:04.208001 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:58:04.212909 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:58:04.227949 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 16 08:58:04.229291 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 08:58:04.235983 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 08:58:04.242193 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 08:58:04.408256 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 08:58:04.416389 ignition[644]: Ignition 2.19.0
Jan 16 08:58:04.416404 ignition[644]: Stage: fetch-offline
Jan 16 08:58:04.417088 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 08:58:04.416465 ignition[644]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:04.416475 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:04.416590 ignition[644]: parsed url from cmdline: ""
Jan 16 08:58:04.421219 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 08:58:04.416594 ignition[644]: no config URL provided
Jan 16 08:58:04.416621 ignition[644]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 08:58:04.416633 ignition[644]: no config at "/usr/lib/ignition/user.ign"
Jan 16 08:58:04.416639 ignition[644]: failed to fetch config: resource requires networking
Jan 16 08:58:04.417423 ignition[644]: Ignition finished successfully
Jan 16 08:58:04.454473 systemd-networkd[752]: lo: Link UP
Jan 16 08:58:04.454490 systemd-networkd[752]: lo: Gained carrier
Jan 16 08:58:04.457956 systemd-networkd[752]: Enumeration completed
Jan 16 08:58:04.458427 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 08:58:04.458471 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 08:58:04.458477 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 16 08:58:04.459614 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 08:58:04.459620 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 08:58:04.460355 systemd[1]: Reached target network.target - Network.
Jan 16 08:58:04.460546 systemd-networkd[752]: eth0: Link UP
Jan 16 08:58:04.460553 systemd-networkd[752]: eth0: Gained carrier
Jan 16 08:58:04.460565 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 16 08:58:04.465314 systemd-networkd[752]: eth1: Link UP
Jan 16 08:58:04.465320 systemd-networkd[752]: eth1: Gained carrier
Jan 16 08:58:04.465339 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 08:58:04.471132 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 08:58:04.479934 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.5/20 acquired from 169.254.169.253
Jan 16 08:58:04.483941 systemd-networkd[752]: eth0: DHCPv4 address 144.126.217.85/20, gateway 144.126.208.1 acquired from 169.254.169.253
Jan 16 08:58:04.503783 ignition[755]: Ignition 2.19.0
Jan 16 08:58:04.503798 ignition[755]: Stage: fetch
Jan 16 08:58:04.505419 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:04.505442 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:04.505576 ignition[755]: parsed url from cmdline: ""
Jan 16 08:58:04.505580 ignition[755]: no config URL provided
Jan 16 08:58:04.505587 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 08:58:04.505595 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 16 08:58:04.505616 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 16 08:58:04.528087 ignition[755]: GET result: OK
Jan 16 08:58:04.529760 ignition[755]: parsing config with SHA512: 6a5e9c5231ff14b36b6f788e99a8c74cd8067902af34fe110b71cf7b43a11fe10d57dca2058c32d3510c134da3dc5e886128160339d9e7b813f93cbfb7565cac
Jan 16 08:58:04.534430 unknown[755]: fetched base config from "system"
Jan 16 08:58:04.534445 unknown[755]: fetched base config from "system"
Jan 16 08:58:04.534953 ignition[755]: fetch: fetch complete
Jan 16 08:58:04.534454 unknown[755]: fetched user config from "digitalocean"
Jan 16 08:58:04.534959 ignition[755]: fetch: fetch passed
Jan 16 08:58:04.535014 ignition[755]: Ignition finished successfully
Jan 16 08:58:04.538645 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 08:58:04.545108 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 08:58:04.583153 ignition[762]: Ignition 2.19.0
Jan 16 08:58:04.583167 ignition[762]: Stage: kargs
Jan 16 08:58:04.583361 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:04.583373 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:04.584696 ignition[762]: kargs: kargs passed
Jan 16 08:58:04.584792 ignition[762]: Ignition finished successfully
Jan 16 08:58:04.587782 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 08:58:04.594329 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 08:58:04.623950 ignition[768]: Ignition 2.19.0
Jan 16 08:58:04.623973 ignition[768]: Stage: disks
Jan 16 08:58:04.624390 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:04.624415 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:04.626077 ignition[768]: disks: disks passed
Jan 16 08:58:04.626198 ignition[768]: Ignition finished successfully
Jan 16 08:58:04.628297 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 08:58:04.634569 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 08:58:04.635254 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 08:58:04.636210 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 08:58:04.637302 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 08:58:04.638273 systemd[1]: Reached target basic.target - Basic System.
Jan 16 08:58:04.645264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 08:58:04.674599 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 16 08:58:04.678282 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 08:58:04.692245 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 08:58:04.805870 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 16 08:58:04.806371 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 08:58:04.807612 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 08:58:04.820123 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 08:58:04.823622 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 08:58:04.826034 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 16 08:58:04.833455 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 16 08:58:04.841867 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785)
Jan 16 08:58:04.841906 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 08:58:04.841931 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:58:04.841944 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:58:04.841227 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 08:58:04.841282 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 08:58:04.847519 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:58:04.849933 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 08:58:04.852605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 08:58:04.862244 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 08:58:04.931914 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
Jan 16 08:58:04.945368 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Jan 16 08:58:04.954408 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jan 16 08:58:04.965665 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 16 08:58:04.966730 coreos-metadata[787]: Jan 16 08:58:04.964 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 08:58:04.977586 coreos-metadata[787]: Jan 16 08:58:04.975 INFO Fetch successful
Jan 16 08:58:04.981128 coreos-metadata[788]: Jan 16 08:58:04.981 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 08:58:04.988613 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Jan 16 08:58:04.988803 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Jan 16 08:58:04.996973 coreos-metadata[788]: Jan 16 08:58:04.996 INFO Fetch successful
Jan 16 08:58:05.005044 coreos-metadata[788]: Jan 16 08:58:05.004 INFO wrote hostname ci-4081.3.0-f-6fcf2fe32d to /sysroot/etc/hostname
Jan 16 08:58:05.007388 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 08:58:05.118985 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 08:58:05.126040 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 08:58:05.129115 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 08:58:05.142860 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 08:58:05.172810 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 08:58:05.184747 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 08:58:05.185623 ignition[907]: INFO : Ignition 2.19.0
Jan 16 08:58:05.185623 ignition[907]: INFO : Stage: mount
Jan 16 08:58:05.185623 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:05.185623 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:05.189952 ignition[907]: INFO : mount: mount passed
Jan 16 08:58:05.189952 ignition[907]: INFO : Ignition finished successfully
Jan 16 08:58:05.190195 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 08:58:05.197088 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 08:58:05.216494 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 08:58:05.245246 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (920)
Jan 16 08:58:05.245323 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 16 08:58:05.247073 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 16 08:58:05.247188 kernel: BTRFS info (device vda6): using free space tree
Jan 16 08:58:05.252876 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 16 08:58:05.254526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 08:58:05.284813 ignition[936]: INFO : Ignition 2.19.0
Jan 16 08:58:05.285671 ignition[936]: INFO : Stage: files
Jan 16 08:58:05.286998 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:05.286998 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:05.288749 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 08:58:05.290873 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 08:58:05.290873 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 08:58:05.295349 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 08:58:05.296188 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 08:58:05.296977 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 08:58:05.296807 unknown[936]: wrote ssh authorized keys file for user: core
Jan 16 08:58:05.298680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 16 08:58:05.298680 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 16 08:58:05.363866 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 16 08:58:05.431567 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 16 08:58:05.431567 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 16 08:58:05.431567 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 16 08:58:05.737215 systemd-networkd[752]: eth1: Gained IPv6LL
Jan 16 08:58:05.892208 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 16 08:58:05.965953 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 16 08:58:05.966728 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 08:58:05.966728 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 08:58:05.968391 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 08:58:05.968391 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 08:58:05.968391 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 08:58:05.968391 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 08:58:05.968391 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 08:58:05.968391 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 08:58:05.972589 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 08:58:05.972589 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 08:58:05.972589 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 16 08:58:05.972589 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 16 08:58:05.972589 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 16 08:58:05.972589 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 16 08:58:06.313187 systemd-networkd[752]: eth0: Gained IPv6LL
Jan 16 08:58:06.472415 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 16 08:58:06.772760 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 16 08:58:06.772760 ignition[936]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 16 08:58:06.774865 ignition[936]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 08:58:06.774865 ignition[936]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 08:58:06.774865 ignition[936]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 16 08:58:06.774865 ignition[936]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 16 08:58:06.774865 ignition[936]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 16 08:58:06.774865 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 08:58:06.780173 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 08:58:06.780173 ignition[936]: INFO : files: files passed
Jan 16 08:58:06.780173 ignition[936]: INFO : Ignition finished successfully
Jan 16 08:58:06.776664 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 08:58:06.784151 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 08:58:06.788008 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 08:58:06.791143 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 08:58:06.791303 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 08:58:06.816089 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 08:58:06.816089 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 08:58:06.819176 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 08:58:06.821299 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 08:58:06.822126 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 08:58:06.828124 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 08:58:06.889217 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 08:58:06.889368 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 08:58:06.891337 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 08:58:06.891911 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 08:58:06.893076 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 08:58:06.899212 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 08:58:06.921803 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 08:58:06.929151 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 08:58:06.949330 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 08:58:06.950117 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 08:58:06.951410 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 08:58:06.952278 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 08:58:06.952485 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 08:58:06.953535 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 08:58:06.954081 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 08:58:06.955046 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 08:58:06.956094 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 08:58:06.957310 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 08:58:06.958275 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 08:58:06.959146 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 08:58:06.960070 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 08:58:06.961000 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 08:58:06.961686 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 08:58:06.962442 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 08:58:06.962635 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 08:58:06.963725 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 08:58:06.965118 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 08:58:06.966057 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 08:58:06.966172 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 08:58:06.966982 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 08:58:06.967222 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 08:58:06.968321 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 08:58:06.968519 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 08:58:06.969535 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 08:58:06.969704 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 08:58:06.970418 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 16 08:58:06.970589 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 08:58:06.982259 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 08:58:06.986162 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 08:58:06.986660 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 08:58:06.986847 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 08:58:06.987688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 08:58:06.989004 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 08:58:06.998216 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 08:58:06.998664 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 08:58:07.015860 ignition[989]: INFO : Ignition 2.19.0
Jan 16 08:58:07.018472 ignition[989]: INFO : Stage: umount
Jan 16 08:58:07.018472 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 08:58:07.018472 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 08:58:07.018472 ignition[989]: INFO : umount: umount passed
Jan 16 08:58:07.018472 ignition[989]: INFO : Ignition finished successfully
Jan 16 08:58:07.027800 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 08:58:07.028506 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 08:58:07.028641 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 08:58:07.040001 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 08:58:07.040176 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 08:58:07.066372 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 08:58:07.066504 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 08:58:07.067098 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 16 08:58:07.067190 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 16 08:58:07.067732 systemd[1]: Stopped target network.target - Network.
Jan 16 08:58:07.071181 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 08:58:07.071314 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 08:58:07.072194 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 08:58:07.072732 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 08:58:07.074033 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 08:58:07.074609 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 08:58:07.075406 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 08:58:07.076293 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 08:58:07.076353 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 08:58:07.077137 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 08:58:07.077193 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 08:58:07.077812 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 08:58:07.077950 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 08:58:07.078570 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 08:58:07.078621 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 08:58:07.079657 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 08:58:07.080777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 08:58:07.081871 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 16 08:58:07.082028 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 16 08:58:07.083721 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 16 08:58:07.083927 systemd-networkd[752]: eth1: DHCPv6 lease lost
Jan 16 08:58:07.084682 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 16 08:58:07.087977 systemd-networkd[752]: eth0: DHCPv6 lease lost
Jan 16 08:58:07.092167 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 08:58:07.092391 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 08:58:07.096685 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 16 08:58:07.096879 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 16 08:58:07.099603 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 16 08:58:07.099671 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 08:58:07.105155 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 16 08:58:07.105790 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 16 08:58:07.106020 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 08:58:07.107195 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 16 08:58:07.107310 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 16 08:58:07.109740 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 16 08:58:07.109868 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 16 08:58:07.110754 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 08:58:07.110850 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 08:58:07.113922 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 08:58:07.131174 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 16 08:58:07.131409 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 08:58:07.132362 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 16 08:58:07.132418 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 16 08:58:07.135179 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 16 08:58:07.135233 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 08:58:07.135675 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 16 08:58:07.135736 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 08:58:07.136272 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 16 08:58:07.136357 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 16 08:58:07.136862 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 08:58:07.138016 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 08:58:07.143130 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 16 08:58:07.144751 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 16 08:58:07.144940 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 08:58:07.146490 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 16 08:58:07.146571 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 08:58:07.148130 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 08:58:07.148234 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 08:58:07.150038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 08:58:07.150141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:07.153767 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 16 08:58:07.154482 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 16 08:58:07.165652 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 08:58:07.166327 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 16 08:58:07.167477 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 16 08:58:07.174184 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 16 08:58:07.187167 systemd[1]: Switching root.
Jan 16 08:58:07.213654 systemd-journald[182]: Journal stopped
Jan 16 08:58:08.472254 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Jan 16 08:58:08.472354 kernel: SELinux: policy capability network_peer_controls=1
Jan 16 08:58:08.472375 kernel: SELinux: policy capability open_perms=1
Jan 16 08:58:08.472387 kernel: SELinux: policy capability extended_socket_class=1
Jan 16 08:58:08.472399 kernel: SELinux: policy capability always_check_network=0
Jan 16 08:58:08.472415 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 08:58:08.472463 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 08:58:08.472481 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 16 08:58:08.472493 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 16 08:58:08.472515 kernel: audit: type=1403 audit(1737017887.381:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 16 08:58:08.472529 systemd[1]: Successfully loaded SELinux policy in 41.609ms.
Jan 16 08:58:08.472550 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.194ms.
Jan 16 08:58:08.472564 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 08:58:08.472577 systemd[1]: Detected virtualization kvm.
Jan 16 08:58:08.472592 systemd[1]: Detected architecture x86-64.
Jan 16 08:58:08.472605 systemd[1]: Detected first boot.
Jan 16 08:58:08.472617 systemd[1]: Hostname set to <ci-4081.3.0-f-6fcf2fe32d>.
Jan 16 08:58:08.472629 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 08:58:08.472642 zram_generator::config[1033]: No configuration found.
Jan 16 08:58:08.472658 systemd[1]: Populated /etc with preset unit settings.
Jan 16 08:58:08.472674 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 16 08:58:08.472686 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 16 08:58:08.472703 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 08:58:08.472717 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 16 08:58:08.472729 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 16 08:58:08.472742 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 16 08:58:08.472754 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 08:58:08.472767 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 16 08:58:08.472779 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 16 08:58:08.472791 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 16 08:58:08.472804 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 16 08:58:08.494884 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 08:58:08.494969 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 08:58:08.494996 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 16 08:58:08.495021 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 16 08:58:08.495043 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 16 08:58:08.495066 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 08:58:08.495088 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 16 08:58:08.495110 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 08:58:08.495132 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 16 08:58:08.495185 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 16 08:58:08.495206 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 16 08:58:08.495224 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 16 08:58:08.495245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 08:58:08.495263 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 08:58:08.495282 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 08:58:08.495306 systemd[1]: Reached target swap.target - Swaps.
Jan 16 08:58:08.495324 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 16 08:58:08.495342 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 16 08:58:08.495362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 08:58:08.495381 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 08:58:08.495400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 08:58:08.495421 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 16 08:58:08.495441 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 16 08:58:08.495463 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 16 08:58:08.495487 systemd[1]: Mounting media.mount - External Media Directory...
Jan 16 08:58:08.495504 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:08.495523 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 16 08:58:08.495543 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 16 08:58:08.495562 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 16 08:58:08.495581 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 16 08:58:08.495599 systemd[1]: Reached target machines.target - Containers.
Jan 16 08:58:08.495618 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 16 08:58:08.495635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 08:58:08.495659 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 08:58:08.495678 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 16 08:58:08.495697 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 08:58:08.495717 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 08:58:08.495736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 08:58:08.495757 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 16 08:58:08.495779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 08:58:08.495801 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 16 08:58:08.495886 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 16 08:58:08.495909 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 16 08:58:08.495926 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 16 08:58:08.495945 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 16 08:58:08.495962 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 08:58:08.495981 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 08:58:08.495999 kernel: loop: module loaded
Jan 16 08:58:08.496022 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 08:58:08.496041 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 16 08:58:08.496067 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 08:58:08.496085 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 16 08:58:08.496102 systemd[1]: Stopped verity-setup.service.
Jan 16 08:58:08.496121 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:08.496140 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 16 08:58:08.496158 kernel: ACPI: bus type drm_connector registered
Jan 16 08:58:08.496185 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 16 08:58:08.496207 systemd[1]: Mounted media.mount - External Media Directory.
Jan 16 08:58:08.496228 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 16 08:58:08.496250 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 16 08:58:08.496273 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 16 08:58:08.496295 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 08:58:08.496321 kernel: fuse: init (API version 7.39)
Jan 16 08:58:08.496343 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 08:58:08.496365 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 16 08:58:08.496387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 08:58:08.496410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 08:58:08.496506 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 08:58:08.496531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 08:58:08.496560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 08:58:08.496584 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 08:58:08.496605 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 08:58:08.496629 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 16 08:58:08.496647 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 08:58:08.496667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 08:58:08.496690 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 08:58:08.496712 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 08:58:08.496735 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 08:58:08.496762 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 16 08:58:08.496783 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 16 08:58:08.496804 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 08:58:08.508975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 08:58:08.509015 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 08:58:08.509029 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 16 08:58:08.509042 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 16 08:58:08.509055 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 16 08:58:08.509076 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 16 08:58:08.509091 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 08:58:08.509107 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 16 08:58:08.509161 systemd-journald[1108]: Collecting audit messages is disabled.
Jan 16 08:58:08.509190 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 16 08:58:08.509204 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 16 08:58:08.509216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 08:58:08.509230 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 16 08:58:08.509246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 08:58:08.509259 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 16 08:58:08.509272 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 16 08:58:08.509284 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 16 08:58:08.509298 systemd-journald[1108]: Journal started
Jan 16 08:58:08.509325 systemd-journald[1108]: Runtime Journal (/run/log/journal/1bf83e6151d6461e91e16a5facd30030) is 4.9M, max 39.3M, 34.4M free.
Jan 16 08:58:08.054876 systemd[1]: Queued start job for default target multi-user.target.
Jan 16 08:58:08.512271 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 08:58:08.077066 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 16 08:58:08.077614 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 16 08:58:08.519102 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 16 08:58:08.580223 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 16 08:58:08.583004 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 08:58:08.587866 kernel: loop0: detected capacity change from 0 to 8
Jan 16 08:58:08.616506 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 08:58:08.627890 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 16 08:58:08.629841 systemd-tmpfiles[1123]: ACLs are not supported, ignoring.
Jan 16 08:58:08.630901 systemd-tmpfiles[1123]: ACLs are not supported, ignoring.
Jan 16 08:58:08.637933 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 16 08:58:08.639285 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 16 08:58:08.642163 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 16 08:58:08.650129 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 16 08:58:08.654937 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 08:58:08.667143 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 16 08:58:08.676907 kernel: loop1: detected capacity change from 0 to 211296
Jan 16 08:58:08.686118 systemd-journald[1108]: Time spent on flushing to /var/log/journal/1bf83e6151d6461e91e16a5facd30030 is 66.840ms for 1004 entries.
Jan 16 08:58:08.686118 systemd-journald[1108]: System Journal (/var/log/journal/1bf83e6151d6461e91e16a5facd30030) is 8.0M, max 195.6M, 187.6M free.
Jan 16 08:58:08.771146 systemd-journald[1108]: Received client request to flush runtime journal.
Jan 16 08:58:08.771231 kernel: loop2: detected capacity change from 0 to 142488
Jan 16 08:58:08.702887 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 16 08:58:08.720609 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 16 08:58:08.721397 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 16 08:58:08.778648 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 16 08:58:08.799224 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 16 08:58:08.804056 kernel: loop3: detected capacity change from 0 to 140768
Jan 16 08:58:08.810093 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 08:58:08.873870 kernel: loop4: detected capacity change from 0 to 8
Jan 16 08:58:08.891713 kernel: loop5: detected capacity change from 0 to 211296
Jan 16 08:58:08.891165 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jan 16 08:58:08.891194 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jan 16 08:58:08.915760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 08:58:08.921071 kernel: loop6: detected capacity change from 0 to 142488
Jan 16 08:58:08.946058 kernel: loop7: detected capacity change from 0 to 140768
Jan 16 08:58:08.965862 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 16 08:58:08.966576 (sd-merge)[1179]: Merged extensions into '/usr'.
Jan 16 08:58:08.980595 systemd[1]: Reloading requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 16 08:58:08.980836 systemd[1]: Reloading...
Jan 16 08:58:09.216936 zram_generator::config[1207]: No configuration found.
Jan 16 08:58:09.311888 ldconfig[1132]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 16 08:58:09.447704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 08:58:09.510066 systemd[1]: Reloading finished in 528 ms.
Jan 16 08:58:09.535829 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 16 08:58:09.539617 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 16 08:58:09.550520 systemd[1]: Starting ensure-sysext.service...
Jan 16 08:58:09.554643 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 08:58:09.565140 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
Jan 16 08:58:09.565166 systemd[1]: Reloading...
Jan 16 08:58:09.651861 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 16 08:58:09.652272 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 16 08:58:09.657528 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 16 08:58:09.660497 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jan 16 08:58:09.660747 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jan 16 08:58:09.669532 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 08:58:09.670883 systemd-tmpfiles[1250]: Skipping /boot
Jan 16 08:58:09.709597 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 08:58:09.710774 systemd-tmpfiles[1250]: Skipping /boot
Jan 16 08:58:09.731923 zram_generator::config[1279]: No configuration found.
Jan 16 08:58:09.897147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 08:58:09.986931 systemd[1]: Reloading finished in 421 ms.
Jan 16 08:58:10.007305 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 16 08:58:10.014545 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 08:58:10.031252 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 16 08:58:10.036183 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 16 08:58:10.039117 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 16 08:58:10.045243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 08:58:10.049728 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 08:58:10.059641 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 16 08:58:10.075310 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 16 08:58:10.078211 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:10.078426 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 08:58:10.086293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 08:58:10.097846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 08:58:10.099911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 08:58:10.100575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 08:58:10.100783 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:10.103457 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:10.103736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 08:58:10.104063 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 08:58:10.104229 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:10.110297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:10.110571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 08:58:10.125699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 08:58:10.126788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 08:58:10.128145 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:10.152358 systemd[1]: Finished ensure-sysext.service.
Jan 16 08:58:10.154408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 08:58:10.154591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 08:58:10.175466 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 16 08:58:10.176865 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 16 08:58:10.193212 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 16 08:58:10.200616 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 08:58:10.201230 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 08:58:10.206546 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 16 08:58:10.209743 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 08:58:10.212401 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 08:58:10.224143 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 16 08:58:10.230245 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 16 08:58:10.231502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 08:58:10.232983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 08:58:10.235383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 08:58:10.246901 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 16 08:58:10.274572 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 08:58:10.274740 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Jan 16 08:58:10.275453 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 08:58:10.282532 augenrules[1364]: No rules
Jan 16 08:58:10.283278 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 16 08:58:10.326175 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 08:58:10.338169 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 08:58:10.374087 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 16 08:58:10.374764 systemd[1]: Reached target time-set.target - System Time Set.
Jan 16 08:58:10.378881 systemd-resolved[1326]: Positive Trust Anchors:
Jan 16 08:58:10.380184 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 08:58:10.380843 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 08:58:10.409015 systemd-resolved[1326]: Using system hostname 'ci-4081.3.0-f-6fcf2fe32d'.
Jan 16 08:58:10.415758 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 08:58:10.416746 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 08:58:10.487689 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 16 08:58:10.522973 systemd-networkd[1372]: lo: Link UP
Jan 16 08:58:10.525888 systemd-networkd[1372]: lo: Gained carrier
Jan 16 08:58:10.534214 systemd-networkd[1372]: Enumeration completed
Jan 16 08:58:10.535455 systemd-networkd[1372]: eth0: Configuring with /run/systemd/network/10-fa:0c:07:ea:16:66.network.
Jan 16 08:58:10.540945 systemd-networkd[1372]: eth0: Link UP
Jan 16 08:58:10.542106 systemd-networkd[1372]: eth0: Gained carrier
Jan 16 08:58:10.545002 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 16 08:58:10.545635 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:10.545931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 08:58:10.551929 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Jan 16 08:58:10.555224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 08:58:10.564108 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 08:58:10.578133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 08:58:10.583373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 08:58:10.583453 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 08:58:10.583477 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 08:58:10.583844 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 08:58:10.591439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 08:58:10.592324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 08:58:10.595045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 08:58:10.596997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 08:58:10.598661 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 08:58:10.600224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 08:58:10.609534 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 16 08:58:10.612475 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 16 08:58:10.618517 systemd[1]: Reached target network.target - Network.
Jan 16 08:58:10.629284 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 16 08:58:10.629908 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 08:58:10.629979 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 08:58:10.655298 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1373)
Jan 16 08:58:10.731878 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 16 08:58:10.740869 kernel: ACPI: button: Power Button [PWRF]
Jan 16 08:58:10.752910 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 16 08:58:10.754689 systemd-networkd[1372]: eth1: Configuring with /run/systemd/network/10-4a:5f:f7:47:79:ed.network.
Jan 16 08:58:10.756785 systemd-networkd[1372]: eth1: Link UP
Jan 16 08:58:10.757105 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Jan 16 08:58:10.757203 systemd-networkd[1372]: eth1: Gained carrier
Jan 16 08:58:10.764532 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection.
Jan 16 08:58:10.785506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 08:58:10.794886 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 16 08:58:10.798261 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 16 08:58:10.836909 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 16 08:58:10.891023 kernel: mousedev: PS/2 mouse device common for all mice
Jan 16 08:58:10.890325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:58:11.026142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:11.030924 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 16 08:58:11.030996 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 16 08:58:11.033898 kernel: Console: switching to colour dummy device 80x25
Jan 16 08:58:11.034004 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 16 08:58:11.034028 kernel: [drm] features: -context_init
Jan 16 08:58:11.036872 kernel: [drm] number of scanouts: 1
Jan 16 08:58:11.036979 kernel: [drm] number of cap sets: 0
Jan 16 08:58:11.037058 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 16 08:58:11.042961 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 16 08:58:11.043055 kernel: Console: switching to colour frame buffer device 128x48
Jan 16 08:58:11.048874 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 16 08:58:11.076584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 08:58:11.077964 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:11.079254 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:58:11.092131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 08:58:11.122743 kernel: EDAC MC: Ver: 3.0.0
Jan 16 08:58:11.147258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 08:58:11.149931 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 16 08:58:11.157138 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 16 08:58:11.181307 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 08:58:11.210545 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 16 08:58:11.213211 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 08:58:11.215289 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 08:58:11.215918 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 16 08:58:11.216158 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 16 08:58:11.216645 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 16 08:58:11.217183 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 16 08:58:11.218510 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 16 08:58:11.218641 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 16 08:58:11.218681 systemd[1]: Reached target paths.target - Path Units.
Jan 16 08:58:11.218753 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 08:58:11.219802 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 16 08:58:11.224311 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 16 08:58:11.231256 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 16 08:58:11.233118 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 16 08:58:11.234404 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 16 08:58:11.237577 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 08:58:11.238099 systemd[1]: Reached target basic.target - Basic System.
Jan 16 08:58:11.238653 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 16 08:58:11.238682 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 16 08:58:11.246249 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 16 08:58:11.251047 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 16 08:58:11.256073 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 08:58:11.262186 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 16 08:58:11.271962 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 16 08:58:11.278081 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 16 08:58:11.278604 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 16 08:58:11.280754 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 16 08:58:11.291991 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 16 08:58:11.296558 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 16 08:58:11.312265 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 16 08:58:11.320058 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 16 08:58:11.322126 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 16 08:58:11.323210 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 16 08:58:11.326096 systemd[1]: Starting update-engine.service - Update Engine...
Jan 16 08:58:11.331330 coreos-metadata[1437]: Jan 16 08:58:11.331 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 08:58:11.331990 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 16 08:58:11.335631 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 16 08:58:11.344259 coreos-metadata[1437]: Jan 16 08:58:11.343 INFO Fetch successful
Jan 16 08:58:11.358431 dbus-daemon[1438]: [system] SELinux support is enabled
Jan 16 08:58:11.360201 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 16 08:58:11.364242 jq[1439]: false
Jan 16 08:58:11.371173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 16 08:58:11.371375 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 16 08:58:11.379336 systemd[1]: motdgen.service: Deactivated successfully.
Jan 16 08:58:11.379929 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 16 08:58:11.384838 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 16 08:58:11.385076 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 16 08:58:11.410799 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 16 08:58:11.412024 jq[1450]: true
Jan 16 08:58:11.410894 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 16 08:58:11.413599 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 16 08:58:11.413689 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 16 08:58:11.413718 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 16 08:58:11.440989 extend-filesystems[1440]: Found loop4 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found loop5 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found loop6 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found loop7 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found vda Jan 16 08:58:11.443668 extend-filesystems[1440]: Found vda1 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found vda2 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found vda3 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found usr Jan 16 08:58:11.443668 extend-filesystems[1440]: Found vda4 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found vda6 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found vda7 Jan 16 08:58:11.443668 extend-filesystems[1440]: Found vda9 Jan 16 08:58:11.443668 extend-filesystems[1440]: Checking size of /dev/vda9 Jan 16 08:58:11.490286 tar[1456]: linux-amd64/helm Jan 16 08:58:11.495730 update_engine[1448]: I20250116 08:58:11.478210 1448 main.cc:92] Flatcar Update Engine starting Jan 16 08:58:11.473760 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 08:58:11.473924 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 08:58:11.491371 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 08:58:11.506230 jq[1472]: true Jan 16 08:58:11.499723 systemd[1]: Started update-engine.service - Update Engine. Jan 16 08:58:11.506623 update_engine[1448]: I20250116 08:58:11.499508 1448 update_check_scheduler.cc:74] Next update check in 10m10s Jan 16 08:58:11.511454 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 08:58:11.517844 systemd-logind[1447]: New seat seat0. Jan 16 08:58:11.522332 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Jan 16 08:58:11.522351 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 16 08:58:11.523260 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 08:58:11.528280 extend-filesystems[1440]: Resized partition /dev/vda9 Jan 16 08:58:11.541432 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024) Jan 16 08:58:11.555870 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 16 08:58:11.656886 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1380) Jan 16 08:58:11.766682 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 16 08:58:11.777758 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 16 08:58:11.785232 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 08:58:11.828265 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 16 08:58:11.828265 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 16 08:58:11.828265 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 16 08:58:11.806156 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 08:58:11.850767 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Jan 16 08:58:11.850767 extend-filesystems[1440]: Found vdb Jan 16 08:58:11.820433 systemd[1]: Starting sshkeys.service... Jan 16 08:58:11.829129 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 16 08:58:11.831412 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 08:58:11.880942 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 08:58:11.893396 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 08:58:11.912136 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 08:58:11.962176 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 08:58:11.971372 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 08:58:11.987428 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 08:58:12.007473 systemd[1]: Started sshd@0-144.126.217.85:22-139.178.68.195:60322.service - OpenSSH per-connection server daemon (139.178.68.195:60322). Jan 16 08:58:12.048873 coreos-metadata[1515]: Jan 16 08:58:12.048 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 08:58:12.052349 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 08:58:12.052573 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 08:58:12.073627 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 08:58:12.081879 coreos-metadata[1515]: Jan 16 08:58:12.080 INFO Fetch successful Jan 16 08:58:12.096066 unknown[1515]: wrote ssh authorized keys file for user: core Jan 16 08:58:12.125349 containerd[1470]: time="2025-01-16T08:58:12.125152634Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 16 08:58:12.145184 update-ssh-keys[1537]: Updated "/home/core/.ssh/authorized_keys" Jan 16 08:58:12.146283 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 08:58:12.153916 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 08:58:12.171205 sshd[1528]: Accepted publickey for core from 139.178.68.195 port 60322 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:12.165124 systemd[1]: Finished sshkeys.service. Jan 16 08:58:12.177953 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:12.188680 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 08:58:12.200526 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 08:58:12.206207 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 08:58:12.233769 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 08:58:12.241266 containerd[1470]: time="2025-01-16T08:58:12.241170301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:12.244431 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 08:58:12.249133 containerd[1470]: time="2025-01-16T08:58:12.245316679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:12.253771 containerd[1470]: time="2025-01-16T08:58:12.253325045Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 16 08:58:12.253771 containerd[1470]: time="2025-01-16T08:58:12.253464205Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 08:58:12.254179 containerd[1470]: time="2025-01-16T08:58:12.253963443Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 08:58:12.254179 containerd[1470]: time="2025-01-16T08:58:12.254003833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:12.254861 containerd[1470]: time="2025-01-16T08:58:12.254376112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:12.254861 containerd[1470]: time="2025-01-16T08:58:12.254410074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:12.255588 containerd[1470]: time="2025-01-16T08:58:12.255538620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:12.256124 containerd[1470]: time="2025-01-16T08:58:12.255988709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:12.256124 containerd[1470]: time="2025-01-16T08:58:12.256054037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:12.256124 containerd[1470]: time="2025-01-16T08:58:12.256075974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:12.259927 containerd[1470]: time="2025-01-16T08:58:12.258597907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:12.258967 systemd-logind[1447]: New session 1 of user core. Jan 16 08:58:12.261260 containerd[1470]: time="2025-01-16T08:58:12.260960288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 08:58:12.261616 containerd[1470]: time="2025-01-16T08:58:12.261243847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 08:58:12.261616 containerd[1470]: time="2025-01-16T08:58:12.261453236Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 08:58:12.262001 containerd[1470]: time="2025-01-16T08:58:12.261884190Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 08:58:12.262001 containerd[1470]: time="2025-01-16T08:58:12.261970434Z" level=info msg="metadata content store policy set" policy=shared Jan 16 08:58:12.269014 containerd[1470]: time="2025-01-16T08:58:12.268231433Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jan 16 08:58:12.269014 containerd[1470]: time="2025-01-16T08:58:12.268347169Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 08:58:12.269014 containerd[1470]: time="2025-01-16T08:58:12.268368362Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 08:58:12.269014 containerd[1470]: time="2025-01-16T08:58:12.268554868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 08:58:12.269014 containerd[1470]: time="2025-01-16T08:58:12.268591132Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 16 08:58:12.269014 containerd[1470]: time="2025-01-16T08:58:12.268898769Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.270729769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271040938Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271064713Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271085365Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271102151Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271120203Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271138513Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271157976Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271187029Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271203829Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271222868Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271238717Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271268806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272102 containerd[1470]: time="2025-01-16T08:58:12.271290999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271310456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271329097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271344771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271360186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271376541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271399383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271415775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271432046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271446831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271460457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271480208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271532061Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271564967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271579726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.272704 containerd[1470]: time="2025-01-16T08:58:12.271607003Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 08:58:12.277173 containerd[1470]: time="2025-01-16T08:58:12.273580958Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 08:58:12.277173 containerd[1470]: time="2025-01-16T08:58:12.273864664Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 08:58:12.277173 containerd[1470]: time="2025-01-16T08:58:12.273889531Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 08:58:12.277173 containerd[1470]: time="2025-01-16T08:58:12.273912332Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 08:58:12.277173 containerd[1470]: time="2025-01-16T08:58:12.273927343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.277173 containerd[1470]: time="2025-01-16T08:58:12.273951617Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 08:58:12.277173 containerd[1470]: time="2025-01-16T08:58:12.273968701Z" level=info msg="NRI interface is disabled by configuration." Jan 16 08:58:12.277173 containerd[1470]: time="2025-01-16T08:58:12.273991877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 16 08:58:12.277645 containerd[1470]: time="2025-01-16T08:58:12.274426556Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 08:58:12.277645 containerd[1470]: time="2025-01-16T08:58:12.274523191Z" level=info msg="Connect containerd service" Jan 16 08:58:12.277645 containerd[1470]: time="2025-01-16T08:58:12.274587568Z" level=info msg="using 
legacy CRI server" Jan 16 08:58:12.277645 containerd[1470]: time="2025-01-16T08:58:12.274598255Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 08:58:12.277645 containerd[1470]: time="2025-01-16T08:58:12.276706437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 08:58:12.279121 containerd[1470]: time="2025-01-16T08:58:12.278979499Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 08:58:12.284755 containerd[1470]: time="2025-01-16T08:58:12.282049613Z" level=info msg="Start subscribing containerd event" Jan 16 08:58:12.284755 containerd[1470]: time="2025-01-16T08:58:12.282150889Z" level=info msg="Start recovering state" Jan 16 08:58:12.284755 containerd[1470]: time="2025-01-16T08:58:12.282268729Z" level=info msg="Start event monitor" Jan 16 08:58:12.285485 containerd[1470]: time="2025-01-16T08:58:12.282294693Z" level=info msg="Start snapshots syncer" Jan 16 08:58:12.285485 containerd[1470]: time="2025-01-16T08:58:12.285194233Z" level=info msg="Start cni network conf syncer for default" Jan 16 08:58:12.285485 containerd[1470]: time="2025-01-16T08:58:12.285269102Z" level=info msg="Start streaming server" Jan 16 08:58:12.286390 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 08:58:12.292558 containerd[1470]: time="2025-01-16T08:58:12.286617690Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 08:58:12.292558 containerd[1470]: time="2025-01-16T08:58:12.286711334Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 08:58:12.292558 containerd[1470]: time="2025-01-16T08:58:12.286795430Z" level=info msg="containerd successfully booted in 0.170014s" Jan 16 08:58:12.293096 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 08:58:12.311509 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 08:58:12.333818 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 08:58:12.393197 systemd-networkd[1372]: eth0: Gained IPv6LL Jan 16 08:58:12.393977 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 16 08:58:12.402298 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 08:58:12.409354 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 08:58:12.423254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:12.440646 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 08:58:12.457345 systemd-networkd[1372]: eth1: Gained IPv6LL Jan 16 08:58:12.459967 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 16 08:58:12.488033 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 08:58:12.561900 systemd[1547]: Queued start job for default target default.target. Jan 16 08:58:12.569445 systemd[1547]: Created slice app.slice - User Application Slice. Jan 16 08:58:12.569533 systemd[1547]: Reached target paths.target - Paths. Jan 16 08:58:12.569587 systemd[1547]: Reached target timers.target - Timers. 
Jan 16 08:58:12.582115 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 08:58:12.630792 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 08:58:12.631429 systemd[1547]: Reached target sockets.target - Sockets. Jan 16 08:58:12.631457 systemd[1547]: Reached target basic.target - Basic System. Jan 16 08:58:12.631678 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 08:58:12.636608 systemd[1547]: Reached target default.target - Main User Target. Jan 16 08:58:12.636712 systemd[1547]: Startup finished in 290ms. Jan 16 08:58:12.643187 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 08:58:12.736324 systemd[1]: Started sshd@1-144.126.217.85:22-139.178.68.195:60334.service - OpenSSH per-connection server daemon (139.178.68.195:60334). Jan 16 08:58:12.851897 sshd[1570]: Accepted publickey for core from 139.178.68.195 port 60334 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:12.854403 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:12.874705 systemd-logind[1447]: New session 2 of user core. Jan 16 08:58:12.879189 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 08:58:12.880024 tar[1456]: linux-amd64/LICENSE Jan 16 08:58:12.880024 tar[1456]: linux-amd64/README.md Jan 16 08:58:12.912487 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 08:58:12.976050 sshd[1570]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:12.990737 systemd[1]: sshd@1-144.126.217.85:22-139.178.68.195:60334.service: Deactivated successfully. Jan 16 08:58:12.996423 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 08:58:12.999026 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Jan 16 08:58:13.010376 systemd[1]: Started sshd@2-144.126.217.85:22-139.178.68.195:60350.service - OpenSSH per-connection server daemon (139.178.68.195:60350). Jan 16 08:58:13.022104 systemd-logind[1447]: Removed session 2. Jan 16 08:58:13.084740 sshd[1580]: Accepted publickey for core from 139.178.68.195 port 60350 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:13.086349 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:13.097176 systemd-logind[1447]: New session 3 of user core. Jan 16 08:58:13.105236 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 08:58:13.188970 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:13.199864 systemd[1]: sshd@2-144.126.217.85:22-139.178.68.195:60350.service: Deactivated successfully. Jan 16 08:58:13.203688 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 08:58:13.205332 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Jan 16 08:58:13.207314 systemd-logind[1447]: Removed session 3. Jan 16 08:58:13.837260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:13.841447 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 08:58:13.845242 systemd[1]: Startup finished in 1.479s (kernel) + 6.599s (initrd) + 6.504s (userspace) = 14.583s. 
Jan 16 08:58:13.852116 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:58:14.805604 kubelet[1591]: E0116 08:58:14.805445 1591 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:58:14.810708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:58:14.811070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:58:14.811745 systemd[1]: kubelet.service: Consumed 1.522s CPU time. Jan 16 08:58:23.210351 systemd[1]: Started sshd@3-144.126.217.85:22-139.178.68.195:39866.service - OpenSSH per-connection server daemon (139.178.68.195:39866). Jan 16 08:58:23.271775 sshd[1604]: Accepted publickey for core from 139.178.68.195 port 39866 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:23.273953 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:23.280344 systemd-logind[1447]: New session 4 of user core. Jan 16 08:58:23.288161 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 08:58:23.353939 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:23.363319 systemd[1]: sshd@3-144.126.217.85:22-139.178.68.195:39866.service: Deactivated successfully. Jan 16 08:58:23.365696 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 08:58:23.368531 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Jan 16 08:58:23.373258 systemd[1]: Started sshd@4-144.126.217.85:22-139.178.68.195:39880.service - OpenSSH per-connection server daemon (139.178.68.195:39880). Jan 16 08:58:23.374940 systemd-logind[1447]: Removed session 4. Jan 16 08:58:23.429606 sshd[1611]: Accepted publickey for core from 139.178.68.195 port 39880 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:23.431487 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:23.438536 systemd-logind[1447]: New session 5 of user core. Jan 16 08:58:23.444140 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 08:58:23.501893 sshd[1611]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:23.516745 systemd[1]: sshd@4-144.126.217.85:22-139.178.68.195:39880.service: Deactivated successfully. Jan 16 08:58:23.518991 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 08:58:23.521333 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Jan 16 08:58:23.531232 systemd[1]: Started sshd@5-144.126.217.85:22-139.178.68.195:39896.service - OpenSSH per-connection server daemon (139.178.68.195:39896). Jan 16 08:58:23.532794 systemd-logind[1447]: Removed session 5. Jan 16 08:58:23.573163 sshd[1618]: Accepted publickey for core from 139.178.68.195 port 39896 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:23.575189 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:23.581779 systemd-logind[1447]: New session 6 of user core. Jan 16 08:58:23.588103 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 16 08:58:23.651574 sshd[1618]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:23.664667 systemd[1]: sshd@5-144.126.217.85:22-139.178.68.195:39896.service: Deactivated successfully. Jan 16 08:58:23.667117 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 08:58:23.668958 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Jan 16 08:58:23.676242 systemd[1]: Started sshd@6-144.126.217.85:22-139.178.68.195:39902.service - OpenSSH per-connection server daemon (139.178.68.195:39902). Jan 16 08:58:23.677776 systemd-logind[1447]: Removed session 6. Jan 16 08:58:23.718963 sshd[1625]: Accepted publickey for core from 139.178.68.195 port 39902 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:23.721025 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:23.727265 systemd-logind[1447]: New session 7 of user core. Jan 16 08:58:23.733111 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 08:58:23.802523 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 08:58:23.803465 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:23.818266 sudo[1628]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:23.822523 sshd[1625]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:23.836955 systemd[1]: sshd@6-144.126.217.85:22-139.178.68.195:39902.service: Deactivated successfully. Jan 16 08:58:23.838810 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 08:58:23.840985 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Jan 16 08:58:23.846204 systemd[1]: Started sshd@7-144.126.217.85:22-139.178.68.195:39906.service - OpenSSH per-connection server daemon (139.178.68.195:39906). Jan 16 08:58:23.847729 systemd-logind[1447]: Removed session 7. Jan 16 08:58:23.893027 sshd[1633]: Accepted publickey for core from 139.178.68.195 port 39906 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:23.895309 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:23.901164 systemd-logind[1447]: New session 8 of user core. Jan 16 08:58:23.907135 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 08:58:23.968161 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 08:58:23.968562 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:23.972759 sudo[1637]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:23.979452 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 16 08:58:23.980284 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:24.002217 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 16 08:58:24.004193 auditctl[1640]: No rules Jan 16 08:58:24.004637 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 08:58:24.004856 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 16 08:58:24.007514 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 08:58:24.057599 augenrules[1658]: No rules Jan 16 08:58:24.059585 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 16 08:58:24.061230 sudo[1636]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:24.066658 sshd[1633]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:24.076723 systemd[1]: sshd@7-144.126.217.85:22-139.178.68.195:39906.service: Deactivated successfully. Jan 16 08:58:24.078951 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 08:58:24.080995 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Jan 16 08:58:24.088216 systemd[1]: Started sshd@8-144.126.217.85:22-139.178.68.195:39910.service - OpenSSH per-connection server daemon (139.178.68.195:39910). Jan 16 08:58:24.089631 systemd-logind[1447]: Removed session 8. Jan 16 08:58:24.127614 sshd[1666]: Accepted publickey for core from 139.178.68.195 port 39910 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 08:58:24.129555 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 08:58:24.134183 systemd-logind[1447]: New session 9 of user core. Jan 16 08:58:24.143178 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 08:58:24.202681 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 08:58:24.203061 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 08:58:24.728351 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 16 08:58:24.730285 (dockerd)[1684]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 08:58:25.063204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 08:58:25.075537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:25.290127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:25.305458 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:58:25.312020 dockerd[1684]: time="2025-01-16T08:58:25.311897810Z" level=info msg="Starting up" Jan 16 08:58:25.438183 kubelet[1699]: E0116 08:58:25.438043 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:58:25.444422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:58:25.444734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:58:25.487430 dockerd[1684]: time="2025-01-16T08:58:25.487126670Z" level=info msg="Loading containers: start." Jan 16 08:58:25.620865 kernel: Initializing XFRM netlink socket Jan 16 08:58:25.656034 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 16 08:58:25.733076 systemd-networkd[1372]: docker0: Link UP Jan 16 08:58:26.149484 systemd-timesyncd[1346]: Contacted time server 207.192.69.118:123 (2.flatcar.pool.ntp.org). Jan 16 08:58:26.149578 systemd-timesyncd[1346]: Initial clock synchronization to Thu 2025-01-16 08:58:26.149114 UTC. Jan 16 08:58:26.150277 systemd-resolved[1326]: Clock change detected. Flushing caches. 
Jan 16 08:58:26.173722 dockerd[1684]: time="2025-01-16T08:58:26.173535905Z" level=info msg="Loading containers: done." Jan 16 08:58:26.191989 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1477501064-merged.mount: Deactivated successfully. Jan 16 08:58:26.195320 dockerd[1684]: time="2025-01-16T08:58:26.195262862Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 08:58:26.195447 dockerd[1684]: time="2025-01-16T08:58:26.195390404Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 16 08:58:26.195538 dockerd[1684]: time="2025-01-16T08:58:26.195516222Z" level=info msg="Daemon has completed initialization" Jan 16 08:58:26.239231 dockerd[1684]: time="2025-01-16T08:58:26.238744728Z" level=info msg="API listen on /run/docker.sock" Jan 16 08:58:26.239062 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 08:58:27.245931 containerd[1470]: time="2025-01-16T08:58:27.245868554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 16 08:58:27.850916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1982437711.mount: Deactivated successfully. Jan 16 08:58:29.442944 containerd[1470]: time="2025-01-16T08:58:29.442799391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:29.444502 containerd[1470]: time="2025-01-16T08:58:29.444444872Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 16 08:58:29.445207 containerd[1470]: time="2025-01-16T08:58:29.444959913Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:29.448928 containerd[1470]: time="2025-01-16T08:58:29.448847430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:29.450971 containerd[1470]: time="2025-01-16T08:58:29.450623115Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 2.204688036s" Jan 16 08:58:29.450971 containerd[1470]: time="2025-01-16T08:58:29.450687027Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 16 08:58:29.482858 containerd[1470]: time="2025-01-16T08:58:29.482806434Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 16 08:58:31.231285 containerd[1470]: time="2025-01-16T08:58:31.229968683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:31.231941 containerd[1470]: time="2025-01-16T08:58:31.231882173Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 16 08:58:31.232813 containerd[1470]: time="2025-01-16T08:58:31.232778321Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:31.236178 containerd[1470]: time="2025-01-16T08:58:31.236122407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:31.237658 containerd[1470]: time="2025-01-16T08:58:31.237605437Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 1.754518431s" Jan 16 08:58:31.237825 containerd[1470]: time="2025-01-16T08:58:31.237806675Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 16 08:58:31.264875 containerd[1470]: time="2025-01-16T08:58:31.264826265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 16 08:58:32.350202 containerd[1470]: time="2025-01-16T08:58:32.350082198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:32.352134 containerd[1470]: time="2025-01-16T08:58:32.351999532Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:32.352134 containerd[1470]: time="2025-01-16T08:58:32.352077968Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 16 08:58:32.357639 containerd[1470]: time="2025-01-16T08:58:32.357531917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:32.359845 containerd[1470]: time="2025-01-16T08:58:32.359459278Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.094381908s" Jan 16 08:58:32.359845 containerd[1470]: time="2025-01-16T08:58:32.359522280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 16 08:58:32.394945 containerd[1470]: time="2025-01-16T08:58:32.394797083Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 16 08:58:32.397051 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 16 08:58:33.657866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665588212.mount: Deactivated successfully. Jan 16 08:58:34.254248 containerd[1470]: time="2025-01-16T08:58:34.254033256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:34.256057 containerd[1470]: time="2025-01-16T08:58:34.255985845Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 16 08:58:34.260168 containerd[1470]: time="2025-01-16T08:58:34.260110942Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:34.263162 containerd[1470]: time="2025-01-16T08:58:34.263069465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:34.264228 containerd[1470]: time="2025-01-16T08:58:34.263918568Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.869034858s" Jan 16 08:58:34.264228 containerd[1470]: time="2025-01-16T08:58:34.263960848Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 16 08:58:34.296926 containerd[1470]: time="2025-01-16T08:58:34.296237092Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 16 08:58:34.831531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919011561.mount: Deactivated successfully. Jan 16 08:58:35.463521 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 16 08:58:35.852918 containerd[1470]: time="2025-01-16T08:58:35.852721283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:35.854421 containerd[1470]: time="2025-01-16T08:58:35.854336922Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 16 08:58:35.855315 containerd[1470]: time="2025-01-16T08:58:35.855268681Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:35.859364 containerd[1470]: time="2025-01-16T08:58:35.859286871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:35.861175 containerd[1470]: time="2025-01-16T08:58:35.860965285Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.564681891s" Jan 16 08:58:35.861175 containerd[1470]: time="2025-01-16T08:58:35.861026505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 16 08:58:35.896723 containerd[1470]: time="2025-01-16T08:58:35.896684883Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 16 08:58:36.109428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 08:58:36.117506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:36.278581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:36.301794 (kubelet)[1994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 08:58:36.384645 kubelet[1994]: E0116 08:58:36.384289 1994 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 08:58:36.390010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 08:58:36.390460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 08:58:36.417075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284123902.mount: Deactivated successfully. 
Jan 16 08:58:36.422065 containerd[1470]: time="2025-01-16T08:58:36.422000862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:36.422780 containerd[1470]: time="2025-01-16T08:58:36.422519748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 16 08:58:36.425217 containerd[1470]: time="2025-01-16T08:58:36.424841127Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:36.429059 containerd[1470]: time="2025-01-16T08:58:36.429000537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:36.430827 containerd[1470]: time="2025-01-16T08:58:36.430169501Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 533.261928ms" Jan 16 08:58:36.430827 containerd[1470]: time="2025-01-16T08:58:36.430248408Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 16 08:58:36.468437 containerd[1470]: time="2025-01-16T08:58:36.468379477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 16 08:58:37.062141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810155147.mount: Deactivated successfully. Jan 16 08:58:39.955748 containerd[1470]: time="2025-01-16T08:58:39.955634779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:39.958124 containerd[1470]: time="2025-01-16T08:58:39.958032122Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 16 08:58:39.960247 containerd[1470]: time="2025-01-16T08:58:39.959710423Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:39.965655 containerd[1470]: time="2025-01-16T08:58:39.965572151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:58:39.968311 containerd[1470]: time="2025-01-16T08:58:39.967740184Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.499300914s" Jan 16 08:58:39.968311 containerd[1470]: time="2025-01-16T08:58:39.967815558Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 16 08:58:43.670654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 08:58:43.679083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:43.713529 systemd[1]: Reloading requested from client PID 2118 ('systemctl') (unit session-9.scope)... Jan 16 08:58:43.713556 systemd[1]: Reloading... Jan 16 08:58:43.874217 zram_generator::config[2157]: No configuration found. Jan 16 08:58:44.024450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:58:44.124965 systemd[1]: Reloading finished in 410 ms. Jan 16 08:58:44.195118 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 08:58:44.195315 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 08:58:44.195709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:44.204104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:44.339429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:44.352828 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 08:58:44.423949 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:58:44.423949 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 08:58:44.423949 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:58:44.426975 kubelet[2211]: I0116 08:58:44.426846 2211 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 08:58:44.848217 kubelet[2211]: I0116 08:58:44.847037 2211 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 16 08:58:44.848217 kubelet[2211]: I0116 08:58:44.847074 2211 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 08:58:44.848217 kubelet[2211]: I0116 08:58:44.847400 2211 server.go:919] "Client rotation is on, will bootstrap in background" Jan 16 08:58:44.883537 kubelet[2211]: E0116 08:58:44.882617 2211 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://144.126.217.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:44.883537 kubelet[2211]: I0116 08:58:44.882867 2211 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:58:44.899906 kubelet[2211]: I0116 08:58:44.899849 2211 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 08:58:44.900248 kubelet[2211]: I0116 08:58:44.900225 2211 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 08:58:44.901806 kubelet[2211]: I0116 08:58:44.901698 2211 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 08:58:44.901806 kubelet[2211]: I0116 08:58:44.901787 2211 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 08:58:44.901806 kubelet[2211]: I0116 08:58:44.901806 2211 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 08:58:44.903466 kubelet[2211]: I0116 08:58:44.903380 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:58:44.903681 kubelet[2211]: I0116 08:58:44.903651 2211 kubelet.go:396] "Attempting to sync node with API server" Jan 16 08:58:44.905363 kubelet[2211]: I0116 08:58:44.903687 2211 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 08:58:44.905363 kubelet[2211]: I0116 08:58:44.903729 2211 kubelet.go:312] "Adding apiserver pod source" Jan 16 08:58:44.905363 kubelet[2211]: I0116 08:58:44.903760 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 08:58:44.906203 kubelet[2211]: I0116 08:58:44.906158 2211 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 08:58:44.911082 kubelet[2211]: I0116 08:58:44.911040 2211 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 08:58:44.911345 kubelet[2211]: W0116 08:58:44.911284 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://144.126.217.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:44.911392 kubelet[2211]: E0116 08:58:44.911368 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://144.126.217.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: 
connection refused Jan 16 08:58:44.911514 kubelet[2211]: W0116 08:58:44.911472 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://144.126.217.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-6fcf2fe32d&limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:44.911729 kubelet[2211]: E0116 08:58:44.911517 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://144.126.217.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-6fcf2fe32d&limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:44.914219 kubelet[2211]: W0116 08:58:44.912579 2211 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 08:58:44.914219 kubelet[2211]: I0116 08:58:44.913356 2211 server.go:1256] "Started kubelet" Jan 16 08:58:44.914219 kubelet[2211]: I0116 08:58:44.913574 2211 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 08:58:44.914219 kubelet[2211]: I0116 08:58:44.913719 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 08:58:44.914219 kubelet[2211]: I0116 08:58:44.914047 2211 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 08:58:44.914827 kubelet[2211]: I0116 08:58:44.914785 2211 server.go:461] "Adding debug handlers to kubelet server" Jan 16 08:58:44.917849 kubelet[2211]: I0116 08:58:44.917577 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 08:58:44.923813 kubelet[2211]: E0116 08:58:44.922262 2211 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://144.126.217.85:6443/api/v1/namespaces/default/events\": dial tcp 144.126.217.85:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-f-6fcf2fe32d.181b20997ef1e900 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-f-6fcf2fe32d,UID:ci-4081.3.0-f-6fcf2fe32d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-f-6fcf2fe32d,},FirstTimestamp:2025-01-16 08:58:44.913326336 +0000 UTC m=+0.553428741,LastTimestamp:2025-01-16 08:58:44.913326336 +0000 UTC m=+0.553428741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-f-6fcf2fe32d,}" Jan 16 08:58:44.925916 kubelet[2211]: I0116 08:58:44.925454 2211 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 08:58:44.929234 kubelet[2211]: E0116 08:58:44.929205 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.217.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-6fcf2fe32d?timeout=10s\": dial tcp 144.126.217.85:6443: connect: connection refused" interval="200ms" Jan 16 08:58:44.929634 kubelet[2211]: I0116 08:58:44.929612 2211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 08:58:44.930498 kubelet[2211]: I0116 08:58:44.930469 2211 desired_state_of_world_populator.go:151] "Desired state 
populator starts to run" Jan 16 08:58:44.930596 kubelet[2211]: I0116 08:58:44.930561 2211 reconciler_new.go:29] "Reconciler: start to sync state" Jan 16 08:58:44.933530 kubelet[2211]: W0116 08:58:44.933467 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://144.126.217.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:44.933700 kubelet[2211]: E0116 08:58:44.933689 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://144.126.217.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:44.934084 kubelet[2211]: I0116 08:58:44.934064 2211 factory.go:221] Registration of the containerd container factory successfully Jan 16 08:58:44.934164 kubelet[2211]: I0116 08:58:44.934157 2211 factory.go:221] Registration of the systemd container factory successfully Jan 16 08:58:44.950636 kubelet[2211]: I0116 08:58:44.950569 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 08:58:44.952516 kubelet[2211]: I0116 08:58:44.952463 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 08:58:44.952516 kubelet[2211]: I0116 08:58:44.952525 2211 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 08:58:44.952698 kubelet[2211]: I0116 08:58:44.952557 2211 kubelet.go:2329] "Starting kubelet main sync loop" Jan 16 08:58:44.952698 kubelet[2211]: E0116 08:58:44.952628 2211 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 08:58:44.953001 kubelet[2211]: E0116 08:58:44.952977 2211 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 08:58:44.971141 kubelet[2211]: W0116 08:58:44.971087 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://144.126.217.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:44.971141 kubelet[2211]: E0116 08:58:44.971130 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://144.126.217.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:44.973754 kubelet[2211]: I0116 08:58:44.973714 2211 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 08:58:44.973754 kubelet[2211]: I0116 08:58:44.973741 2211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 08:58:44.973963 kubelet[2211]: I0116 08:58:44.973800 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:58:44.975560 kubelet[2211]: I0116 08:58:44.975521 2211 policy_none.go:49] "None policy: Start" Jan 16 08:58:44.976375 kubelet[2211]: I0116 08:58:44.976346 2211 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 08:58:44.976498 kubelet[2211]: I0116 08:58:44.976388 2211 state_mem.go:35] "Initializing new in-memory state store" Jan 16 08:58:44.985986 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 08:58:45.003398 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 08:58:45.009124 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
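[Editorial note] The three slices systemd just created (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice) correspond to pod QoS classes; the kubelet parents each pod cgroup under the slice matching its class. A simplified restatement of that classification rule, assuming my own helper rather than the kubelet's internal QoS package (the real logic also handles defaulting of requests from limits, which this sketch omits):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// qosClass sketches the kubelet's rule: Guaranteed if every container sets
// equal CPU/memory requests and limits, BestEffort if nothing is set,
// Burstable otherwise.
func qosClass(pod *corev1.Pod) corev1.PodQOSClass {
	requestsSet, allGuaranteed := false, true
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 {
			requestsSet = true
		}
		for _, res := range []corev1.ResourceName{corev1.ResourceCPU, corev1.ResourceMemory} {
			lim, hasLim := c.Resources.Limits[res]
			req, hasReq := c.Resources.Requests[res]
			if !hasLim || !hasReq || lim.Cmp(req) != 0 {
				allGuaranteed = false
			}
		}
	}
	switch {
	case allGuaranteed:
		return corev1.PodQOSGuaranteed // parented directly under kubepods.slice
	case requestsSet:
		return corev1.PodQOSBurstable // kubepods-burstable.slice
	default:
		return corev1.PodQOSBestEffort // kubepods-besteffort.slice
	}
}

func main() {
	pod := &corev1.Pod{Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "app"}}}}
	fmt.Println(qosClass(pod)) // BestEffort -> kubepods-besteffort.slice
}

The control-plane static pods admitted just below set CPU requests without matching limits, which is why their per-pod slices appear under kubepods-burstable.slice.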
Jan 16 08:58:45.027279 kubelet[2211]: I0116 08:58:45.026649 2211 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 08:58:45.027279 kubelet[2211]: I0116 08:58:45.027146 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 08:58:45.029503 kubelet[2211]: I0116 08:58:45.029397 2211 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.030635 kubelet[2211]: E0116 08:58:45.030605 2211 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-f-6fcf2fe32d\" not found" Jan 16 08:58:45.031888 kubelet[2211]: E0116 08:58:45.031858 2211 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://144.126.217.85:6443/api/v1/nodes\": dial tcp 144.126.217.85:6443: connect: connection refused" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.053209 kubelet[2211]: I0116 08:58:45.053132 2211 topology_manager.go:215] "Topology Admit Handler" podUID="e71e74c94efcb0be6001a72fd841d715" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.054535 kubelet[2211]: I0116 08:58:45.054446 2211 topology_manager.go:215] "Topology Admit Handler" podUID="3f0245b274d8bf22744eafe0c34886d0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.056340 kubelet[2211]: I0116 08:58:45.056278 2211 topology_manager.go:215] "Topology Admit Handler" podUID="c11fb1ed4ca5729f7b1804dc4771ba97" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.064927 systemd[1]: Created slice kubepods-burstable-pode71e74c94efcb0be6001a72fd841d715.slice - libcontainer container kubepods-burstable-pode71e74c94efcb0be6001a72fd841d715.slice. Jan 16 08:58:45.088002 systemd[1]: Created slice kubepods-burstable-podc11fb1ed4ca5729f7b1804dc4771ba97.slice - libcontainer container kubepods-burstable-podc11fb1ed4ca5729f7b1804dc4771ba97.slice. Jan 16 08:58:45.095056 systemd[1]: Created slice kubepods-burstable-pod3f0245b274d8bf22744eafe0c34886d0.slice - libcontainer container kubepods-burstable-pod3f0245b274d8bf22744eafe0c34886d0.slice. 
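[Editorial note] "Attempting to register node" followed by "Unable to register node with API server ... connection refused" is the kubelet POSTing a v1 Node object before its own static kube-apiserver pod is up. A minimal client-go sketch of the equivalent call, assuming an illustrative kubeconfig path (the kubelet actually uses its bootstrap credentials, and the node name is taken from the log):

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is hypothetical; the log does not show the kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of the failing call: POST /api/v1/nodes. While the
	// apiserver is still starting this returns "connection refused" and
	// the kubelet retries, as the repeated log entries show.
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "ci-4081.3.0-f-6fcf2fe32d"}}
	if _, err := clientset.CoreV1().Nodes().Create(context.Background(), node, metav1.CreateOptions{}); err != nil {
		log.Printf("register failed, will retry: %v", err)
	}
}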
Jan 16 08:58:45.130827 kubelet[2211]: E0116 08:58:45.130501 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.217.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-6fcf2fe32d?timeout=10s\": dial tcp 144.126.217.85:6443: connect: connection refused" interval="400ms" Jan 16 08:58:45.131596 kubelet[2211]: I0116 08:58:45.131114 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e71e74c94efcb0be6001a72fd841d715-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"e71e74c94efcb0be6001a72fd841d715\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.131596 kubelet[2211]: I0116 08:58:45.131156 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e71e74c94efcb0be6001a72fd841d715-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"e71e74c94efcb0be6001a72fd841d715\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.131596 kubelet[2211]: I0116 08:58:45.131210 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.131596 kubelet[2211]: I0116 08:58:45.131244 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.131596 kubelet[2211]: I0116 08:58:45.131281 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.131877 kubelet[2211]: I0116 08:58:45.131313 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c11fb1ed4ca5729f7b1804dc4771ba97-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"c11fb1ed4ca5729f7b1804dc4771ba97\") " pod="kube-system/kube-scheduler-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.131877 kubelet[2211]: I0116 08:58:45.131350 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e71e74c94efcb0be6001a72fd841d715-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"e71e74c94efcb0be6001a72fd841d715\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.131877 kubelet[2211]: I0116 08:58:45.131380 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.131877 kubelet[2211]: I0116 08:58:45.131413 2211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.233661 kubelet[2211]: I0116 08:58:45.233608 2211 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.234154 kubelet[2211]: E0116 08:58:45.234116 2211 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://144.126.217.85:6443/api/v1/nodes\": dial tcp 144.126.217.85:6443: connect: connection refused" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.386629 kubelet[2211]: E0116 08:58:45.386483 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:45.387778 containerd[1470]: time="2025-01-16T08:58:45.387443183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-f-6fcf2fe32d,Uid:e71e74c94efcb0be6001a72fd841d715,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:45.390318 systemd-resolved[1326]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 16 08:58:45.392758 kubelet[2211]: E0116 08:58:45.392365 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:45.397548 containerd[1470]: time="2025-01-16T08:58:45.397472209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-f-6fcf2fe32d,Uid:c11fb1ed4ca5729f7b1804dc4771ba97,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:45.399029 kubelet[2211]: E0116 08:58:45.398540 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:45.399473 containerd[1470]: time="2025-01-16T08:58:45.399435072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d,Uid:3f0245b274d8bf22744eafe0c34886d0,Namespace:kube-system,Attempt:0,}" Jan 16 08:58:45.532402 kubelet[2211]: E0116 08:58:45.531820 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.217.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-6fcf2fe32d?timeout=10s\": dial tcp 144.126.217.85:6443: connect: connection refused" interval="800ms" Jan 16 08:58:45.636914 kubelet[2211]: I0116 08:58:45.636680 2211 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:45.637547 kubelet[2211]: E0116 08:58:45.637474 2211 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://144.126.217.85:6443/api/v1/nodes\": dial tcp 144.126.217.85:6443: connect: connection refused" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 
08:58:45.839552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095331491.mount: Deactivated successfully. Jan 16 08:58:45.845377 containerd[1470]: time="2025-01-16T08:58:45.845308144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:45.846891 containerd[1470]: time="2025-01-16T08:58:45.846825472Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 08:58:45.847369 containerd[1470]: time="2025-01-16T08:58:45.847330969Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:45.847855 containerd[1470]: time="2025-01-16T08:58:45.847810968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 16 08:58:45.848817 containerd[1470]: time="2025-01-16T08:58:45.848778006Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:45.849747 containerd[1470]: time="2025-01-16T08:58:45.849702875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 08:58:45.853724 containerd[1470]: time="2025-01-16T08:58:45.853659326Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:45.857905 containerd[1470]: time="2025-01-16T08:58:45.857428264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 08:58:45.860507 containerd[1470]: time="2025-01-16T08:58:45.859512768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 459.86988ms" Jan 16 08:58:45.864027 containerd[1470]: time="2025-01-16T08:58:45.863951686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 476.416061ms" Jan 16 08:58:45.865074 containerd[1470]: time="2025-01-16T08:58:45.865016947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.418008ms" Jan 16 08:58:46.078039 kubelet[2211]: W0116 08:58:46.077834 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://144.126.217.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-6fcf2fe32d&limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:46.078039 kubelet[2211]: E0116 08:58:46.077929 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://144.126.217.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-6fcf2fe32d&limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:46.083753 containerd[1470]: time="2025-01-16T08:58:46.083134198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:46.083753 containerd[1470]: time="2025-01-16T08:58:46.083255468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:46.083753 containerd[1470]: time="2025-01-16T08:58:46.083300302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:46.083753 containerd[1470]: time="2025-01-16T08:58:46.083465821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:46.089617 containerd[1470]: time="2025-01-16T08:58:46.089102787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:46.089617 containerd[1470]: time="2025-01-16T08:58:46.089210797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:46.089617 containerd[1470]: time="2025-01-16T08:58:46.089234068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:46.089617 containerd[1470]: time="2025-01-16T08:58:46.089377378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:46.107503 containerd[1470]: time="2025-01-16T08:58:46.107122613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:58:46.107503 containerd[1470]: time="2025-01-16T08:58:46.107213041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:58:46.107503 containerd[1470]: time="2025-01-16T08:58:46.107227107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:46.107503 containerd[1470]: time="2025-01-16T08:58:46.107346009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:58:46.124540 systemd[1]: Started cri-containerd-1c5206439367fea86a3b62b652cd6bae40ff7d4d0a4de2167199630fc96d84cc.scope - libcontainer container 1c5206439367fea86a3b62b652cd6bae40ff7d4d0a4de2167199630fc96d84cc. Jan 16 08:58:46.137378 systemd[1]: Started cri-containerd-cfe94ae6b17f852dd2d86e801e45800fec66d6fa6f24cb945bd784314536eff1.scope - libcontainer container cfe94ae6b17f852dd2d86e801e45800fec66d6fa6f24cb945bd784314536eff1. 
Jan 16 08:58:46.142117 kubelet[2211]: E0116 08:58:46.142066 2211 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://144.126.217.85:6443/api/v1/namespaces/default/events\": dial tcp 144.126.217.85:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-f-6fcf2fe32d.181b20997ef1e900 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-f-6fcf2fe32d,UID:ci-4081.3.0-f-6fcf2fe32d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-f-6fcf2fe32d,},FirstTimestamp:2025-01-16 08:58:44.913326336 +0000 UTC m=+0.553428741,LastTimestamp:2025-01-16 08:58:44.913326336 +0000 UTC m=+0.553428741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-f-6fcf2fe32d,}" Jan 16 08:58:46.167459 systemd[1]: Started cri-containerd-c23ae841b440a25dee8272f6100668382c62757226844df6d0ef1a23c82861a7.scope - libcontainer container c23ae841b440a25dee8272f6100668382c62757226844df6d0ef1a23c82861a7. Jan 16 08:58:46.258510 containerd[1470]: time="2025-01-16T08:58:46.254403671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-f-6fcf2fe32d,Uid:c11fb1ed4ca5729f7b1804dc4771ba97,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c5206439367fea86a3b62b652cd6bae40ff7d4d0a4de2167199630fc96d84cc\"" Jan 16 08:58:46.261884 containerd[1470]: time="2025-01-16T08:58:46.261829087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-f-6fcf2fe32d,Uid:e71e74c94efcb0be6001a72fd841d715,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfe94ae6b17f852dd2d86e801e45800fec66d6fa6f24cb945bd784314536eff1\"" Jan 16 08:58:46.267794 kubelet[2211]: E0116 08:58:46.267754 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:46.270969 kubelet[2211]: E0116 08:58:46.270928 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:46.277698 containerd[1470]: time="2025-01-16T08:58:46.277640811Z" level=info msg="CreateContainer within sandbox \"1c5206439367fea86a3b62b652cd6bae40ff7d4d0a4de2167199630fc96d84cc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 08:58:46.279421 containerd[1470]: time="2025-01-16T08:58:46.278781372Z" level=info msg="CreateContainer within sandbox \"cfe94ae6b17f852dd2d86e801e45800fec66d6fa6f24cb945bd784314536eff1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 08:58:46.281291 kubelet[2211]: W0116 08:58:46.281079 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://144.126.217.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:46.281624 kubelet[2211]: E0116 08:58:46.281483 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://144.126.217.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:46.294050 containerd[1470]: 
time="2025-01-16T08:58:46.293851094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d,Uid:3f0245b274d8bf22744eafe0c34886d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c23ae841b440a25dee8272f6100668382c62757226844df6d0ef1a23c82861a7\"" Jan 16 08:58:46.296793 kubelet[2211]: E0116 08:58:46.296636 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:46.296987 containerd[1470]: time="2025-01-16T08:58:46.296755528Z" level=info msg="CreateContainer within sandbox \"1c5206439367fea86a3b62b652cd6bae40ff7d4d0a4de2167199630fc96d84cc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3bbe87316ea774448c987f36415f91e9fe391b57cf68464f44b993b0c89f0758\"" Jan 16 08:58:46.300058 containerd[1470]: time="2025-01-16T08:58:46.299536776Z" level=info msg="StartContainer for \"3bbe87316ea774448c987f36415f91e9fe391b57cf68464f44b993b0c89f0758\"" Jan 16 08:58:46.301541 containerd[1470]: time="2025-01-16T08:58:46.301365778Z" level=info msg="CreateContainer within sandbox \"c23ae841b440a25dee8272f6100668382c62757226844df6d0ef1a23c82861a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 08:58:46.312442 containerd[1470]: time="2025-01-16T08:58:46.312319514Z" level=info msg="CreateContainer within sandbox \"cfe94ae6b17f852dd2d86e801e45800fec66d6fa6f24cb945bd784314536eff1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed2b88a70a350b62a69ff676df395870c24bfe4f38ee5d87fdd78bd43eef2401\"" Jan 16 08:58:46.314596 containerd[1470]: time="2025-01-16T08:58:46.314355698Z" level=info msg="StartContainer for \"ed2b88a70a350b62a69ff676df395870c24bfe4f38ee5d87fdd78bd43eef2401\"" Jan 16 08:58:46.320042 kubelet[2211]: W0116 08:58:46.319989 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://144.126.217.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:46.320042 kubelet[2211]: E0116 08:58:46.320056 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://144.126.217.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:46.325035 containerd[1470]: time="2025-01-16T08:58:46.324983965Z" level=info msg="CreateContainer within sandbox \"c23ae841b440a25dee8272f6100668382c62757226844df6d0ef1a23c82861a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3452e17b49a741a01f75faacffe97958ea6ea98816d3084617bec0a3bb50a882\"" Jan 16 08:58:46.326084 containerd[1470]: time="2025-01-16T08:58:46.326037942Z" level=info msg="StartContainer for \"3452e17b49a741a01f75faacffe97958ea6ea98816d3084617bec0a3bb50a882\"" Jan 16 08:58:46.334784 kubelet[2211]: E0116 08:58:46.333401 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.217.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-6fcf2fe32d?timeout=10s\": dial tcp 144.126.217.85:6443: connect: connection refused" interval="1.6s" Jan 16 08:58:46.375525 systemd[1]: Started 
cri-containerd-3bbe87316ea774448c987f36415f91e9fe391b57cf68464f44b993b0c89f0758.scope - libcontainer container 3bbe87316ea774448c987f36415f91e9fe391b57cf68464f44b993b0c89f0758. Jan 16 08:58:46.385538 systemd[1]: Started cri-containerd-ed2b88a70a350b62a69ff676df395870c24bfe4f38ee5d87fdd78bd43eef2401.scope - libcontainer container ed2b88a70a350b62a69ff676df395870c24bfe4f38ee5d87fdd78bd43eef2401. Jan 16 08:58:46.410556 systemd[1]: Started cri-containerd-3452e17b49a741a01f75faacffe97958ea6ea98816d3084617bec0a3bb50a882.scope - libcontainer container 3452e17b49a741a01f75faacffe97958ea6ea98816d3084617bec0a3bb50a882. Jan 16 08:58:46.440603 kubelet[2211]: I0116 08:58:46.440383 2211 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:46.442485 kubelet[2211]: E0116 08:58:46.442368 2211 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://144.126.217.85:6443/api/v1/nodes\": dial tcp 144.126.217.85:6443: connect: connection refused" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:46.484005 kubelet[2211]: W0116 08:58:46.483927 2211 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://144.126.217.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:46.485406 kubelet[2211]: E0116 08:58:46.484805 2211 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://144.126.217.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.217.85:6443: connect: connection refused Jan 16 08:58:46.492404 containerd[1470]: time="2025-01-16T08:58:46.491873292Z" level=info msg="StartContainer for \"ed2b88a70a350b62a69ff676df395870c24bfe4f38ee5d87fdd78bd43eef2401\" returns successfully" Jan 16 08:58:46.536343 containerd[1470]: time="2025-01-16T08:58:46.535769606Z" level=info msg="StartContainer for \"3452e17b49a741a01f75faacffe97958ea6ea98816d3084617bec0a3bb50a882\" returns successfully" Jan 16 08:58:46.543771 containerd[1470]: time="2025-01-16T08:58:46.543111193Z" level=info msg="StartContainer for \"3bbe87316ea774448c987f36415f91e9fe391b57cf68464f44b993b0c89f0758\" returns successfully" Jan 16 08:58:46.981702 kubelet[2211]: E0116 08:58:46.981281 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:46.988439 kubelet[2211]: E0116 08:58:46.988245 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:46.991766 kubelet[2211]: E0116 08:58:46.991673 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:47.992037 kubelet[2211]: E0116 08:58:47.991945 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:48.044237 kubelet[2211]: I0116 08:58:48.044175 2211 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:48.833649 kubelet[2211]: E0116 08:58:48.833594 2211 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-f-6fcf2fe32d\" not found" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:48.912225 kubelet[2211]: I0116 08:58:48.910278 2211 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:48.912225 kubelet[2211]: I0116 08:58:48.910572 2211 apiserver.go:52] "Watching apiserver" Jan 16 08:58:48.931079 kubelet[2211]: I0116 08:58:48.931014 2211 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 16 08:58:50.799804 kubelet[2211]: W0116 08:58:50.799752 2211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:58:50.803223 kubelet[2211]: E0116 08:58:50.801585 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:51.000011 kubelet[2211]: E0116 08:58:50.999968 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:52.019992 systemd[1]: Reloading requested from client PID 2486 ('systemctl') (unit session-9.scope)... Jan 16 08:58:52.020016 systemd[1]: Reloading... Jan 16 08:58:52.165260 zram_generator::config[2530]: No configuration found. Jan 16 08:58:52.326939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 08:58:52.452364 systemd[1]: Reloading finished in 431 ms. Jan 16 08:58:52.503857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:52.518969 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 08:58:52.519338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:52.519410 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 108.5M memory peak, 0B memory swap peak. Jan 16 08:58:52.532224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 08:58:52.689058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 08:58:52.705824 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 08:58:52.819307 kubelet[2576]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 08:58:52.819307 kubelet[2576]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 08:58:52.819307 kubelet[2576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
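[Editorial note] The restarted kubelet (PID 2576) repeats the deprecation warnings for --container-runtime-endpoint and --volume-plugin-dir, both of which the log says belong in the config file. A sketch generating the equivalent KubeletConfiguration with the public v1beta1 types; the endpoint value matches the containerd socket seen throughout the log and the plugin dir matches the flexvolume path probed earlier, while --pod-infra-container-image has no config-file field in v1.29 (the sandbox image moves to the containerd side, as the first warning notes):

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Replaces --container-runtime-endpoint.
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// Replaces --volume-plugin-dir.
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // drop-in body for the file passed via --config
}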
Jan 16 08:58:52.819784 kubelet[2576]: I0116 08:58:52.819424 2576 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 08:58:52.829273 kubelet[2576]: I0116 08:58:52.826805 2576 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 16 08:58:52.829273 kubelet[2576]: I0116 08:58:52.826854 2576 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 08:58:52.829273 kubelet[2576]: I0116 08:58:52.827169 2576 server.go:919] "Client rotation is on, will bootstrap in background" Jan 16 08:58:52.832465 kubelet[2576]: I0116 08:58:52.829980 2576 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 08:58:52.836836 kubelet[2576]: I0116 08:58:52.836777 2576 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 08:58:52.837459 sudo[2588]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 16 08:58:52.839826 sudo[2588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 16 08:58:52.855769 kubelet[2576]: I0116 08:58:52.855368 2576 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 08:58:52.855769 kubelet[2576]: I0116 08:58:52.855683 2576 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 08:58:52.856015 kubelet[2576]: I0116 08:58:52.855899 2576 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 08:58:52.856015 kubelet[2576]: I0116 08:58:52.855930 2576 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 08:58:52.856015 kubelet[2576]: I0116 08:58:52.855942 2576 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 08:58:52.856015 kubelet[2576]: I0116 08:58:52.855979 2576 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:58:52.856364 kubelet[2576]: I0116 08:58:52.856164 2576 kubelet.go:396] "Attempting to sync node with API server" Jan 16 
08:58:52.856364 kubelet[2576]: I0116 08:58:52.856204 2576 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 08:58:52.856364 kubelet[2576]: I0116 08:58:52.856234 2576 kubelet.go:312] "Adding apiserver pod source" Jan 16 08:58:52.856364 kubelet[2576]: I0116 08:58:52.856250 2576 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 08:58:52.861270 kubelet[2576]: I0116 08:58:52.860390 2576 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 08:58:52.861270 kubelet[2576]: I0116 08:58:52.860684 2576 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 08:58:52.861270 kubelet[2576]: I0116 08:58:52.861266 2576 server.go:1256] "Started kubelet" Jan 16 08:58:52.866088 kubelet[2576]: I0116 08:58:52.866046 2576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 08:58:52.887564 kubelet[2576]: I0116 08:58:52.887523 2576 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 08:58:52.890224 kubelet[2576]: I0116 08:58:52.888717 2576 server.go:461] "Adding debug handlers to kubelet server" Jan 16 08:58:52.890394 kubelet[2576]: I0116 08:58:52.890345 2576 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 08:58:52.890650 kubelet[2576]: I0116 08:58:52.890609 2576 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 08:58:52.898078 kubelet[2576]: I0116 08:58:52.898028 2576 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 08:58:52.898975 kubelet[2576]: I0116 08:58:52.898931 2576 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 16 08:58:52.899213 kubelet[2576]: I0116 08:58:52.899171 2576 reconciler_new.go:29] "Reconciler: start to sync state" Jan 16 08:58:52.917393 kubelet[2576]: I0116 08:58:52.915523 2576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 08:58:52.917570 kubelet[2576]: I0116 08:58:52.917530 2576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 08:58:52.917570 kubelet[2576]: I0116 08:58:52.917570 2576 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 08:58:52.917672 kubelet[2576]: I0116 08:58:52.917592 2576 kubelet.go:2329] "Starting kubelet main sync loop" Jan 16 08:58:52.917672 kubelet[2576]: E0116 08:58:52.917662 2576 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 08:58:52.929384 kubelet[2576]: I0116 08:58:52.926992 2576 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 08:58:52.938245 kubelet[2576]: E0116 08:58:52.938206 2576 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 08:58:52.942548 kubelet[2576]: I0116 08:58:52.940678 2576 factory.go:221] Registration of the containerd container factory successfully Jan 16 08:58:52.942890 kubelet[2576]: I0116 08:58:52.942747 2576 factory.go:221] Registration of the systemd container factory successfully Jan 16 08:58:53.002234 kubelet[2576]: I0116 08:58:53.001111 2576 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.013715 kubelet[2576]: I0116 08:58:53.013414 2576 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.013715 kubelet[2576]: I0116 08:58:53.013530 2576 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.018313 kubelet[2576]: E0116 08:58:53.018222 2576 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 08:58:53.049103 kubelet[2576]: I0116 08:58:53.048527 2576 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 08:58:53.049103 kubelet[2576]: I0116 08:58:53.048553 2576 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 08:58:53.049103 kubelet[2576]: I0116 08:58:53.048585 2576 state_mem.go:36] "Initialized new in-memory state store" Jan 16 08:58:53.049103 kubelet[2576]: I0116 08:58:53.048776 2576 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 08:58:53.049103 kubelet[2576]: I0116 08:58:53.048803 2576 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 08:58:53.049103 kubelet[2576]: I0116 08:58:53.048811 2576 policy_none.go:49] "None policy: Start" Jan 16 08:58:53.053501 kubelet[2576]: I0116 08:58:53.051917 2576 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 08:58:53.053501 kubelet[2576]: I0116 08:58:53.051960 2576 state_mem.go:35] "Initializing new in-memory state store" Jan 16 08:58:53.053501 kubelet[2576]: I0116 08:58:53.052200 2576 state_mem.go:75] "Updated machine memory state" Jan 16 08:58:53.069552 kubelet[2576]: I0116 08:58:53.069517 2576 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 08:58:53.073408 kubelet[2576]: I0116 08:58:53.073374 2576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 08:58:53.219767 kubelet[2576]: I0116 08:58:53.219562 2576 topology_manager.go:215] "Topology Admit Handler" podUID="e71e74c94efcb0be6001a72fd841d715" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.223303 kubelet[2576]: I0116 08:58:53.220532 2576 topology_manager.go:215] "Topology Admit Handler" podUID="3f0245b274d8bf22744eafe0c34886d0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.223303 kubelet[2576]: I0116 08:58:53.220617 2576 topology_manager.go:215] "Topology Admit Handler" podUID="c11fb1ed4ca5729f7b1804dc4771ba97" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.230752 kubelet[2576]: W0116 08:58:53.230711 2576 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:58:53.233863 kubelet[2576]: W0116 08:58:53.233789 2576 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
not contain dots] Jan 16 08:58:53.237517 kubelet[2576]: W0116 08:58:53.237474 2576 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:58:53.237641 kubelet[2576]: E0116 08:58:53.237569 2576 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.300561 kubelet[2576]: I0116 08:58:53.300421 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.300561 kubelet[2576]: I0116 08:58:53.300482 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.300561 kubelet[2576]: I0116 08:58:53.300527 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c11fb1ed4ca5729f7b1804dc4771ba97-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"c11fb1ed4ca5729f7b1804dc4771ba97\") " pod="kube-system/kube-scheduler-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.300561 kubelet[2576]: I0116 08:58:53.300564 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e71e74c94efcb0be6001a72fd841d715-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"e71e74c94efcb0be6001a72fd841d715\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.300935 kubelet[2576]: I0116 08:58:53.300595 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e71e74c94efcb0be6001a72fd841d715-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"e71e74c94efcb0be6001a72fd841d715\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.300935 kubelet[2576]: I0116 08:58:53.300629 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.300935 kubelet[2576]: I0116 08:58:53.300666 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.300935 kubelet[2576]: I0116 08:58:53.300700 2576 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f0245b274d8bf22744eafe0c34886d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"3f0245b274d8bf22744eafe0c34886d0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.300935 kubelet[2576]: I0116 08:58:53.300749 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e71e74c94efcb0be6001a72fd841d715-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-f-6fcf2fe32d\" (UID: \"e71e74c94efcb0be6001a72fd841d715\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.536109 kubelet[2576]: E0116 08:58:53.534346 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:53.538731 kubelet[2576]: E0116 08:58:53.538682 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:53.538981 kubelet[2576]: E0116 08:58:53.538866 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:53.707458 sudo[2588]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:53.872036 kubelet[2576]: I0116 08:58:53.871833 2576 apiserver.go:52] "Watching apiserver" Jan 16 08:58:53.899364 kubelet[2576]: I0116 08:58:53.899301 2576 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 16 08:58:53.981360 kubelet[2576]: E0116 08:58:53.981245 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:53.983608 kubelet[2576]: E0116 08:58:53.982240 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:53.995803 kubelet[2576]: W0116 08:58:53.993486 2576 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 08:58:53.995803 kubelet[2576]: E0116 08:58:53.993623 2576 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-f-6fcf2fe32d\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" Jan 16 08:58:53.995803 kubelet[2576]: E0116 08:58:53.994169 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:54.052489 kubelet[2576]: I0116 08:58:54.052220 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-f-6fcf2fe32d" podStartSLOduration=1.052124659 podStartE2EDuration="1.052124659s" podCreationTimestamp="2025-01-16 08:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:58:54.03998125 +0000 UTC m=+1.326215948" watchObservedRunningTime="2025-01-16 08:58:54.052124659 +0000 UTC m=+1.338359361" Jan 16 08:58:54.067138 kubelet[2576]: I0116 08:58:54.067098 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-f-6fcf2fe32d" podStartSLOduration=4.066385696 podStartE2EDuration="4.066385696s" podCreationTimestamp="2025-01-16 08:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:58:54.054031374 +0000 UTC m=+1.340266072" watchObservedRunningTime="2025-01-16 08:58:54.066385696 +0000 UTC m=+1.352620395" Jan 16 08:58:54.985280 kubelet[2576]: E0116 08:58:54.984615 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:55.617823 sudo[1669]: pam_unix(sudo:session): session closed for user root Jan 16 08:58:55.622016 sshd[1666]: pam_unix(sshd:session): session closed for user core Jan 16 08:58:55.626214 systemd[1]: sshd@8-144.126.217.85:22-139.178.68.195:39910.service: Deactivated successfully. Jan 16 08:58:55.629066 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 08:58:55.629609 systemd[1]: session-9.scope: Consumed 6.748s CPU time, 188.0M memory peak, 0B memory swap peak. Jan 16 08:58:55.631852 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 16 08:58:55.633800 systemd-logind[1447]: Removed session 9. Jan 16 08:58:55.987532 kubelet[2576]: E0116 08:58:55.986990 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:58:56.677263 update_engine[1448]: I20250116 08:58:56.676748 1448 update_attempter.cc:509] Updating boot flags... 
Jan 16 08:58:56.724238 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2653) Jan 16 08:58:56.814353 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2657) Jan 16 08:58:56.860285 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2657) Jan 16 08:59:00.875540 kubelet[2576]: E0116 08:59:00.871927 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:00.900120 kubelet[2576]: I0116 08:59:00.900025 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-f-6fcf2fe32d" podStartSLOduration=7.899544312 podStartE2EDuration="7.899544312s" podCreationTimestamp="2025-01-16 08:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:58:54.06747018 +0000 UTC m=+1.353704870" watchObservedRunningTime="2025-01-16 08:59:00.899544312 +0000 UTC m=+8.185779019" Jan 16 08:59:01.003389 kubelet[2576]: E0116 08:59:01.003324 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:01.075223 kubelet[2576]: E0116 08:59:01.075162 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:02.005842 kubelet[2576]: E0116 08:59:02.005765 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:03.011843 kubelet[2576]: E0116 08:59:03.011502 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:03.391028 kubelet[2576]: E0116 08:59:03.390942 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:04.012865 kubelet[2576]: E0116 08:59:04.012768 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:04.767254 kubelet[2576]: I0116 08:59:04.767074 2576 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 08:59:04.767712 containerd[1470]: time="2025-01-16T08:59:04.767658732Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 16 08:59:04.768473 kubelet[2576]: I0116 08:59:04.768443 2576 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 08:59:05.531074 kubelet[2576]: I0116 08:59:05.531018 2576 topology_manager.go:215] "Topology Admit Handler" podUID="b1650402-842d-4c2d-bfb7-c2f87362d0c1" podNamespace="kube-system" podName="kube-proxy-j65hq" Jan 16 08:59:05.537873 kubelet[2576]: I0116 08:59:05.537235 2576 topology_manager.go:215] "Topology Admit Handler" podUID="938451e7-e794-44ec-960d-dec7c3802882" podNamespace="kube-system" podName="cilium-858px" Jan 16 08:59:05.552972 systemd[1]: Created slice kubepods-besteffort-podb1650402_842d_4c2d_bfb7_c2f87362d0c1.slice - libcontainer container kubepods-besteffort-podb1650402_842d_4c2d_bfb7_c2f87362d0c1.slice. Jan 16 08:59:05.572879 systemd[1]: Created slice kubepods-burstable-pod938451e7_e794_44ec_960d_dec7c3802882.slice - libcontainer container kubepods-burstable-pod938451e7_e794_44ec_960d_dec7c3802882.slice. Jan 16 08:59:05.700900 kubelet[2576]: I0116 08:59:05.700706 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-bpf-maps\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.700900 kubelet[2576]: I0116 08:59:05.700752 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cni-path\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.700900 kubelet[2576]: I0116 08:59:05.700774 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-lib-modules\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.700900 kubelet[2576]: I0116 08:59:05.700865 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-xtables-lock\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702100 kubelet[2576]: I0116 08:59:05.701133 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1650402-842d-4c2d-bfb7-c2f87362d0c1-kube-proxy\") pod \"kube-proxy-j65hq\" (UID: \"b1650402-842d-4c2d-bfb7-c2f87362d0c1\") " pod="kube-system/kube-proxy-j65hq" Jan 16 08:59:05.702100 kubelet[2576]: I0116 08:59:05.701231 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1650402-842d-4c2d-bfb7-c2f87362d0c1-xtables-lock\") pod \"kube-proxy-j65hq\" (UID: \"b1650402-842d-4c2d-bfb7-c2f87362d0c1\") " pod="kube-system/kube-proxy-j65hq" Jan 16 08:59:05.702100 kubelet[2576]: I0116 08:59:05.701269 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1650402-842d-4c2d-bfb7-c2f87362d0c1-lib-modules\") pod \"kube-proxy-j65hq\" (UID: \"b1650402-842d-4c2d-bfb7-c2f87362d0c1\") " 
pod="kube-system/kube-proxy-j65hq" Jan 16 08:59:05.702100 kubelet[2576]: I0116 08:59:05.701300 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cilium-run\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702100 kubelet[2576]: I0116 08:59:05.701333 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-host-proc-sys-net\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702292 kubelet[2576]: I0116 08:59:05.701393 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsgvd\" (UniqueName: \"kubernetes.io/projected/b1650402-842d-4c2d-bfb7-c2f87362d0c1-kube-api-access-tsgvd\") pod \"kube-proxy-j65hq\" (UID: \"b1650402-842d-4c2d-bfb7-c2f87362d0c1\") " pod="kube-system/kube-proxy-j65hq" Jan 16 08:59:05.702292 kubelet[2576]: I0116 08:59:05.701427 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-hostproc\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702292 kubelet[2576]: I0116 08:59:05.701457 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cilium-cgroup\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702292 kubelet[2576]: I0116 08:59:05.701487 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-host-proc-sys-kernel\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702292 kubelet[2576]: I0116 08:59:05.701525 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-etc-cni-netd\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702586 kubelet[2576]: I0116 08:59:05.701561 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/938451e7-e794-44ec-960d-dec7c3802882-clustermesh-secrets\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702586 kubelet[2576]: I0116 08:59:05.701598 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/938451e7-e794-44ec-960d-dec7c3802882-hubble-tls\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702586 kubelet[2576]: I0116 08:59:05.701634 2576 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/938451e7-e794-44ec-960d-dec7c3802882-cilium-config-path\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.702586 kubelet[2576]: I0116 08:59:05.701684 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9wgj\" (UniqueName: \"kubernetes.io/projected/938451e7-e794-44ec-960d-dec7c3802882-kube-api-access-r9wgj\") pod \"cilium-858px\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") " pod="kube-system/cilium-858px" Jan 16 08:59:05.834394 kubelet[2576]: I0116 08:59:05.830759 2576 topology_manager.go:215] "Topology Admit Handler" podUID="6c7a480d-afae-4554-a827-993537b8fd59" podNamespace="kube-system" podName="cilium-operator-5cc964979-ng4mb" Jan 16 08:59:05.852413 systemd[1]: Created slice kubepods-besteffort-pod6c7a480d_afae_4554_a827_993537b8fd59.slice - libcontainer container kubepods-besteffort-pod6c7a480d_afae_4554_a827_993537b8fd59.slice. Jan 16 08:59:05.903872 kubelet[2576]: I0116 08:59:05.903828 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbp59\" (UniqueName: \"kubernetes.io/projected/6c7a480d-afae-4554-a827-993537b8fd59-kube-api-access-qbp59\") pod \"cilium-operator-5cc964979-ng4mb\" (UID: \"6c7a480d-afae-4554-a827-993537b8fd59\") " pod="kube-system/cilium-operator-5cc964979-ng4mb" Jan 16 08:59:05.904057 kubelet[2576]: I0116 08:59:05.903890 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c7a480d-afae-4554-a827-993537b8fd59-cilium-config-path\") pod \"cilium-operator-5cc964979-ng4mb\" (UID: \"6c7a480d-afae-4554-a827-993537b8fd59\") " pod="kube-system/cilium-operator-5cc964979-ng4mb" Jan 16 08:59:06.160036 kubelet[2576]: E0116 08:59:06.159656 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:06.160893 containerd[1470]: time="2025-01-16T08:59:06.160833478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ng4mb,Uid:6c7a480d-afae-4554-a827-993537b8fd59,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:06.166245 kubelet[2576]: E0116 08:59:06.165853 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:06.167640 containerd[1470]: time="2025-01-16T08:59:06.167575128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j65hq,Uid:b1650402-842d-4c2d-bfb7-c2f87362d0c1,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:06.180716 kubelet[2576]: E0116 08:59:06.180522 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:06.184123 containerd[1470]: time="2025-01-16T08:59:06.182652321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-858px,Uid:938451e7-e794-44ec-960d-dec7c3802882,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:06.205325 containerd[1470]: time="2025-01-16T08:59:06.204515742Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:06.205325 containerd[1470]: time="2025-01-16T08:59:06.204815690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:06.205325 containerd[1470]: time="2025-01-16T08:59:06.204931672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:06.207455 containerd[1470]: time="2025-01-16T08:59:06.207339757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:06.241815 containerd[1470]: time="2025-01-16T08:59:06.241647189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:06.242128 containerd[1470]: time="2025-01-16T08:59:06.241743911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:06.242128 containerd[1470]: time="2025-01-16T08:59:06.241876618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:06.242916 containerd[1470]: time="2025-01-16T08:59:06.242569589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:06.245517 systemd[1]: Started cri-containerd-e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b.scope - libcontainer container e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b. Jan 16 08:59:06.263738 containerd[1470]: time="2025-01-16T08:59:06.262248704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:06.265157 containerd[1470]: time="2025-01-16T08:59:06.264773688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:06.265157 containerd[1470]: time="2025-01-16T08:59:06.264909972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:06.267502 containerd[1470]: time="2025-01-16T08:59:06.267101724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:06.289957 systemd[1]: Started cri-containerd-aaf57d3fd22aff4db096ab305c22c61f22cd64bebe7e1ec1a73460146de9f2d5.scope - libcontainer container aaf57d3fd22aff4db096ab305c22c61f22cd64bebe7e1ec1a73460146de9f2d5. Jan 16 08:59:06.329639 systemd[1]: Started cri-containerd-5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d.scope - libcontainer container 5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d. 
Jan 16 08:59:06.351812 containerd[1470]: time="2025-01-16T08:59:06.351747277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j65hq,Uid:b1650402-842d-4c2d-bfb7-c2f87362d0c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf57d3fd22aff4db096ab305c22c61f22cd64bebe7e1ec1a73460146de9f2d5\"" Jan 16 08:59:06.356316 kubelet[2576]: E0116 08:59:06.355965 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:06.364412 containerd[1470]: time="2025-01-16T08:59:06.363980287Z" level=info msg="CreateContainer within sandbox \"aaf57d3fd22aff4db096ab305c22c61f22cd64bebe7e1ec1a73460146de9f2d5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 08:59:06.389243 containerd[1470]: time="2025-01-16T08:59:06.389112841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ng4mb,Uid:6c7a480d-afae-4554-a827-993537b8fd59,Namespace:kube-system,Attempt:0,} returns sandbox id \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\"" Jan 16 08:59:06.393887 kubelet[2576]: E0116 08:59:06.393765 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:06.411289 containerd[1470]: time="2025-01-16T08:59:06.410906067Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 16 08:59:06.430084 containerd[1470]: time="2025-01-16T08:59:06.429934444Z" level=info msg="CreateContainer within sandbox \"aaf57d3fd22aff4db096ab305c22c61f22cd64bebe7e1ec1a73460146de9f2d5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"51e92f896d52ccd07a73082b11ef5229a23008a2534eb546a9c6dde42dd7f808\"" Jan 16 08:59:06.438806 containerd[1470]: time="2025-01-16T08:59:06.438292687Z" level=info msg="StartContainer for \"51e92f896d52ccd07a73082b11ef5229a23008a2534eb546a9c6dde42dd7f808\"" Jan 16 08:59:06.455399 containerd[1470]: time="2025-01-16T08:59:06.455354536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-858px,Uid:938451e7-e794-44ec-960d-dec7c3802882,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\"" Jan 16 08:59:06.456174 kubelet[2576]: E0116 08:59:06.456134 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:06.489573 systemd[1]: Started cri-containerd-51e92f896d52ccd07a73082b11ef5229a23008a2534eb546a9c6dde42dd7f808.scope - libcontainer container 51e92f896d52ccd07a73082b11ef5229a23008a2534eb546a9c6dde42dd7f808. 
Jan 16 08:59:06.533253 containerd[1470]: time="2025-01-16T08:59:06.533204138Z" level=info msg="StartContainer for \"51e92f896d52ccd07a73082b11ef5229a23008a2534eb546a9c6dde42dd7f808\" returns successfully" Jan 16 08:59:07.031668 kubelet[2576]: E0116 08:59:07.031623 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:07.050314 kubelet[2576]: I0116 08:59:07.050140 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j65hq" podStartSLOduration=2.049970564 podStartE2EDuration="2.049970564s" podCreationTimestamp="2025-01-16 08:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:07.049530497 +0000 UTC m=+14.335765214" watchObservedRunningTime="2025-01-16 08:59:07.049970564 +0000 UTC m=+14.336205264" Jan 16 08:59:09.102096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553528715.mount: Deactivated successfully. Jan 16 08:59:11.210290 containerd[1470]: time="2025-01-16T08:59:11.210213807Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:11.211543 containerd[1470]: time="2025-01-16T08:59:11.211306300Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907237" Jan 16 08:59:11.213237 containerd[1470]: time="2025-01-16T08:59:11.212084696Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:11.215207 containerd[1470]: time="2025-01-16T08:59:11.215059020Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.804082641s" Jan 16 08:59:11.215463 containerd[1470]: time="2025-01-16T08:59:11.215428349Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 16 08:59:11.217236 containerd[1470]: time="2025-01-16T08:59:11.217048868Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 16 08:59:11.219089 containerd[1470]: time="2025-01-16T08:59:11.218783235Z" level=info msg="CreateContainer within sandbox \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 16 08:59:11.243553 containerd[1470]: time="2025-01-16T08:59:11.243379199Z" level=info msg="CreateContainer within sandbox \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\"" Jan 16 08:59:11.247001 containerd[1470]: time="2025-01-16T08:59:11.244589133Z" level=info msg="StartContainer for \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\"" Jan 16 08:59:11.295524 systemd[1]: Started cri-containerd-aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3.scope - libcontainer container aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3. Jan 16 08:59:11.344982 containerd[1470]: time="2025-01-16T08:59:11.344793122Z" level=info msg="StartContainer for \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\" returns successfully" Jan 16 08:59:12.087218 kubelet[2576]: E0116 08:59:12.084886 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:12.974967 kubelet[2576]: I0116 08:59:12.974919 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-ng4mb" podStartSLOduration=3.165706193 podStartE2EDuration="7.974863549s" podCreationTimestamp="2025-01-16 08:59:05 +0000 UTC" firstStartedPulling="2025-01-16 08:59:06.406967259 +0000 UTC m=+13.693201954" lastFinishedPulling="2025-01-16 08:59:11.216124626 +0000 UTC m=+18.502359310" observedRunningTime="2025-01-16 08:59:12.162461555 +0000 UTC m=+19.448696255" watchObservedRunningTime="2025-01-16 08:59:12.974863549 +0000 UTC m=+20.261098248" Jan 16 08:59:13.089033 kubelet[2576]: E0116 08:59:13.087817 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:16.702557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168752556.mount: Deactivated successfully. 
Jan 16 08:59:20.765222 containerd[1470]: time="2025-01-16T08:59:20.764299574Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735383" Jan 16 08:59:20.766701 containerd[1470]: time="2025-01-16T08:59:20.766627170Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:20.768420 containerd[1470]: time="2025-01-16T08:59:20.768359727Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.550864745s" Jan 16 08:59:20.768420 containerd[1470]: time="2025-01-16T08:59:20.768418138Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 16 08:59:20.771830 containerd[1470]: time="2025-01-16T08:59:20.770597030Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 08:59:20.778612 containerd[1470]: time="2025-01-16T08:59:20.778524953Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 16 08:59:20.900384 containerd[1470]: time="2025-01-16T08:59:20.900320292Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\"" Jan 16 08:59:20.901274 containerd[1470]: time="2025-01-16T08:59:20.901226964Z" level=info msg="StartContainer for \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\"" Jan 16 08:59:21.100483 systemd[1]: Started cri-containerd-2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb.scope - libcontainer container 2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb. Jan 16 08:59:21.150318 containerd[1470]: time="2025-01-16T08:59:21.150231716Z" level=info msg="StartContainer for \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\" returns successfully" Jan 16 08:59:21.166981 systemd[1]: cri-containerd-2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb.scope: Deactivated successfully. 
Jan 16 08:59:21.346145 containerd[1470]: time="2025-01-16T08:59:21.327918410Z" level=info msg="shim disconnected" id=2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb namespace=k8s.io Jan 16 08:59:21.346145 containerd[1470]: time="2025-01-16T08:59:21.346116154Z" level=warning msg="cleaning up after shim disconnected" id=2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb namespace=k8s.io Jan 16 08:59:21.346145 containerd[1470]: time="2025-01-16T08:59:21.346140208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:21.366570 containerd[1470]: time="2025-01-16T08:59:21.366084589Z" level=warning msg="cleanup warnings time=\"2025-01-16T08:59:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 16 08:59:21.888970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb-rootfs.mount: Deactivated successfully. Jan 16 08:59:22.129018 kubelet[2576]: E0116 08:59:22.128979 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:22.136215 containerd[1470]: time="2025-01-16T08:59:22.135051132Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 16 08:59:22.162242 containerd[1470]: time="2025-01-16T08:59:22.161545730Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\"" Jan 16 08:59:22.164223 containerd[1470]: time="2025-01-16T08:59:22.163382856Z" level=info msg="StartContainer for \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\"" Jan 16 08:59:22.217528 systemd[1]: Started cri-containerd-88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679.scope - libcontainer container 88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679. Jan 16 08:59:22.267896 containerd[1470]: time="2025-01-16T08:59:22.267560144Z" level=info msg="StartContainer for \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\" returns successfully" Jan 16 08:59:22.289563 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 08:59:22.290835 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:59:22.291704 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 16 08:59:22.298760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 08:59:22.299753 systemd[1]: cri-containerd-88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679.scope: Deactivated successfully. 
Jan 16 08:59:22.343530 containerd[1470]: time="2025-01-16T08:59:22.343342712Z" level=info msg="shim disconnected" id=88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679 namespace=k8s.io Jan 16 08:59:22.343832 containerd[1470]: time="2025-01-16T08:59:22.343660936Z" level=warning msg="cleaning up after shim disconnected" id=88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679 namespace=k8s.io Jan 16 08:59:22.343832 containerd[1470]: time="2025-01-16T08:59:22.343746093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:22.357489 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 08:59:22.889499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679-rootfs.mount: Deactivated successfully. Jan 16 08:59:23.133945 kubelet[2576]: E0116 08:59:23.133889 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:23.143337 containerd[1470]: time="2025-01-16T08:59:23.141855519Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 16 08:59:23.200572 containerd[1470]: time="2025-01-16T08:59:23.199825975Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\"" Jan 16 08:59:23.202310 containerd[1470]: time="2025-01-16T08:59:23.201734641Z" level=info msg="StartContainer for \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\"" Jan 16 08:59:23.258562 systemd[1]: Started cri-containerd-b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5.scope - libcontainer container b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5. Jan 16 08:59:23.303099 containerd[1470]: time="2025-01-16T08:59:23.302832886Z" level=info msg="StartContainer for \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\" returns successfully" Jan 16 08:59:23.304000 systemd[1]: cri-containerd-b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5.scope: Deactivated successfully. Jan 16 08:59:23.343117 containerd[1470]: time="2025-01-16T08:59:23.342831289Z" level=info msg="shim disconnected" id=b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5 namespace=k8s.io Jan 16 08:59:23.343117 containerd[1470]: time="2025-01-16T08:59:23.342929298Z" level=warning msg="cleaning up after shim disconnected" id=b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5 namespace=k8s.io Jan 16 08:59:23.343117 containerd[1470]: time="2025-01-16T08:59:23.342955785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:23.889836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5-rootfs.mount: Deactivated successfully. 
Jan 16 08:59:24.143571 kubelet[2576]: E0116 08:59:24.142631 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:24.159283 containerd[1470]: time="2025-01-16T08:59:24.156654837Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 16 08:59:24.196730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469628955.mount: Deactivated successfully. Jan 16 08:59:24.198446 containerd[1470]: time="2025-01-16T08:59:24.198224384Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\"" Jan 16 08:59:24.201285 containerd[1470]: time="2025-01-16T08:59:24.200033717Z" level=info msg="StartContainer for \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\"" Jan 16 08:59:24.252454 systemd[1]: Started cri-containerd-39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f.scope - libcontainer container 39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f. Jan 16 08:59:24.291903 systemd[1]: cri-containerd-39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f.scope: Deactivated successfully. Jan 16 08:59:24.301669 containerd[1470]: time="2025-01-16T08:59:24.301112512Z" level=info msg="StartContainer for \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\" returns successfully" Jan 16 08:59:24.342009 containerd[1470]: time="2025-01-16T08:59:24.341883218Z" level=info msg="shim disconnected" id=39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f namespace=k8s.io Jan 16 08:59:24.342576 containerd[1470]: time="2025-01-16T08:59:24.342274368Z" level=warning msg="cleaning up after shim disconnected" id=39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f namespace=k8s.io Jan 16 08:59:24.342576 containerd[1470]: time="2025-01-16T08:59:24.342300261Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 08:59:24.892112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f-rootfs.mount: Deactivated successfully. 
Jan 16 08:59:25.149594 kubelet[2576]: E0116 08:59:25.149434 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:25.157477 containerd[1470]: time="2025-01-16T08:59:25.157223429Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 16 08:59:25.209134 containerd[1470]: time="2025-01-16T08:59:25.208571641Z" level=info msg="CreateContainer within sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\"" Jan 16 08:59:25.213124 containerd[1470]: time="2025-01-16T08:59:25.211351055Z" level=info msg="StartContainer for \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\"" Jan 16 08:59:25.288887 systemd[1]: Started cri-containerd-f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11.scope - libcontainer container f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11. Jan 16 08:59:25.345700 containerd[1470]: time="2025-01-16T08:59:25.344851009Z" level=info msg="StartContainer for \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\" returns successfully" Jan 16 08:59:25.571161 kubelet[2576]: I0116 08:59:25.571011 2576 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 16 08:59:25.622887 kubelet[2576]: I0116 08:59:25.622822 2576 topology_manager.go:215] "Topology Admit Handler" podUID="659fc13c-7f86-42f5-8086-17b840e6d5aa" podNamespace="kube-system" podName="coredns-76f75df574-jt2j7" Jan 16 08:59:25.638416 kubelet[2576]: I0116 08:59:25.638357 2576 topology_manager.go:215] "Topology Admit Handler" podUID="037bcb7b-4210-4681-b908-3380895e3143" podNamespace="kube-system" podName="coredns-76f75df574-qhh7q" Jan 16 08:59:25.644405 systemd[1]: Created slice kubepods-burstable-pod659fc13c_7f86_42f5_8086_17b840e6d5aa.slice - libcontainer container kubepods-burstable-pod659fc13c_7f86_42f5_8086_17b840e6d5aa.slice. Jan 16 08:59:25.664057 systemd[1]: Created slice kubepods-burstable-pod037bcb7b_4210_4681_b908_3380895e3143.slice - libcontainer container kubepods-burstable-pod037bcb7b_4210_4681_b908_3380895e3143.slice. 
Jan 16 08:59:25.745701 kubelet[2576]: I0116 08:59:25.745644 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5n48\" (UniqueName: \"kubernetes.io/projected/037bcb7b-4210-4681-b908-3380895e3143-kube-api-access-w5n48\") pod \"coredns-76f75df574-qhh7q\" (UID: \"037bcb7b-4210-4681-b908-3380895e3143\") " pod="kube-system/coredns-76f75df574-qhh7q" Jan 16 08:59:25.745701 kubelet[2576]: I0116 08:59:25.745709 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037bcb7b-4210-4681-b908-3380895e3143-config-volume\") pod \"coredns-76f75df574-qhh7q\" (UID: \"037bcb7b-4210-4681-b908-3380895e3143\") " pod="kube-system/coredns-76f75df574-qhh7q" Jan 16 08:59:25.745990 kubelet[2576]: I0116 08:59:25.745747 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/659fc13c-7f86-42f5-8086-17b840e6d5aa-config-volume\") pod \"coredns-76f75df574-jt2j7\" (UID: \"659fc13c-7f86-42f5-8086-17b840e6d5aa\") " pod="kube-system/coredns-76f75df574-jt2j7" Jan 16 08:59:25.745990 kubelet[2576]: I0116 08:59:25.745800 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wtp\" (UniqueName: \"kubernetes.io/projected/659fc13c-7f86-42f5-8086-17b840e6d5aa-kube-api-access-l2wtp\") pod \"coredns-76f75df574-jt2j7\" (UID: \"659fc13c-7f86-42f5-8086-17b840e6d5aa\") " pod="kube-system/coredns-76f75df574-jt2j7" Jan 16 08:59:25.957375 kubelet[2576]: E0116 08:59:25.955752 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:25.972237 kubelet[2576]: E0116 08:59:25.971126 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:25.972420 containerd[1470]: time="2025-01-16T08:59:25.971652977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jt2j7,Uid:659fc13c-7f86-42f5-8086-17b840e6d5aa,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:25.975453 containerd[1470]: time="2025-01-16T08:59:25.974882443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qhh7q,Uid:037bcb7b-4210-4681-b908-3380895e3143,Namespace:kube-system,Attempt:0,}" Jan 16 08:59:26.179390 kubelet[2576]: E0116 08:59:26.178042 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:26.231710 kubelet[2576]: I0116 08:59:26.231521 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-858px" podStartSLOduration=6.921269987 podStartE2EDuration="21.231462812s" podCreationTimestamp="2025-01-16 08:59:05 +0000 UTC" firstStartedPulling="2025-01-16 08:59:06.458635643 +0000 UTC m=+13.744870371" lastFinishedPulling="2025-01-16 08:59:20.768828515 +0000 UTC m=+28.055063196" observedRunningTime="2025-01-16 08:59:26.228555687 +0000 UTC m=+33.514790430" watchObservedRunningTime="2025-01-16 08:59:26.231462812 +0000 UTC m=+33.517697509" Jan 16 08:59:27.178472 kubelet[2576]: E0116 08:59:27.178438 2576 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:28.013863 systemd-networkd[1372]: cilium_host: Link UP Jan 16 08:59:28.014008 systemd-networkd[1372]: cilium_net: Link UP Jan 16 08:59:28.014012 systemd-networkd[1372]: cilium_net: Gained carrier Jan 16 08:59:28.014901 systemd-networkd[1372]: cilium_host: Gained carrier Jan 16 08:59:28.015415 systemd-networkd[1372]: cilium_host: Gained IPv6LL Jan 16 08:59:28.079432 systemd-networkd[1372]: cilium_net: Gained IPv6LL Jan 16 08:59:28.184340 systemd-networkd[1372]: cilium_vxlan: Link UP Jan 16 08:59:28.184350 systemd-networkd[1372]: cilium_vxlan: Gained carrier Jan 16 08:59:28.233984 kubelet[2576]: E0116 08:59:28.233223 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:28.595561 kernel: NET: Registered PF_ALG protocol family Jan 16 08:59:29.602541 systemd-networkd[1372]: lxc_health: Link UP Jan 16 08:59:29.613483 systemd-networkd[1372]: lxc_health: Gained carrier Jan 16 08:59:29.927722 systemd-networkd[1372]: cilium_vxlan: Gained IPv6LL Jan 16 08:59:30.103340 systemd-networkd[1372]: lxc635b3ed0910c: Link UP Jan 16 08:59:30.119225 kernel: eth0: renamed from tmp4dc16 Jan 16 08:59:30.126933 systemd-networkd[1372]: lxc635b3ed0910c: Gained carrier Jan 16 08:59:30.168311 systemd-networkd[1372]: lxc857344b74de9: Link UP Jan 16 08:59:30.177229 kernel: eth0: renamed from tmp85128 Jan 16 08:59:30.186903 systemd-networkd[1372]: lxc857344b74de9: Gained carrier Jan 16 08:59:30.191729 kubelet[2576]: E0116 08:59:30.190062 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:30.243418 kubelet[2576]: E0116 08:59:30.241795 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:30.696067 systemd-networkd[1372]: lxc_health: Gained IPv6LL Jan 16 08:59:31.237630 kubelet[2576]: E0116 08:59:31.237527 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:31.847678 systemd-networkd[1372]: lxc635b3ed0910c: Gained IPv6LL Jan 16 08:59:32.039568 systemd-networkd[1372]: lxc857344b74de9: Gained IPv6LL Jan 16 08:59:35.955217 containerd[1470]: time="2025-01-16T08:59:35.952788840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:35.955217 containerd[1470]: time="2025-01-16T08:59:35.952878830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:35.955217 containerd[1470]: time="2025-01-16T08:59:35.952966479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:35.955217 containerd[1470]: time="2025-01-16T08:59:35.953109844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:36.013970 containerd[1470]: time="2025-01-16T08:59:36.013052854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 08:59:36.013970 containerd[1470]: time="2025-01-16T08:59:36.013159749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 08:59:36.015225 containerd[1470]: time="2025-01-16T08:59:36.013471022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:36.015792 containerd[1470]: time="2025-01-16T08:59:36.014600332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 08:59:36.042274 systemd[1]: Started cri-containerd-85128c0baea3e741d248d008fc2a587c54637fbd274bd222f19ed8ebad17c805.scope - libcontainer container 85128c0baea3e741d248d008fc2a587c54637fbd274bd222f19ed8ebad17c805. Jan 16 08:59:36.061952 systemd[1]: run-containerd-runc-k8s.io-4dc16c310e7421f6a1f9c2c1be645b320b94c93a18b35bd0467d1f4e6a55122c-runc.r3cnVZ.mount: Deactivated successfully. Jan 16 08:59:36.072491 systemd[1]: Started cri-containerd-4dc16c310e7421f6a1f9c2c1be645b320b94c93a18b35bd0467d1f4e6a55122c.scope - libcontainer container 4dc16c310e7421f6a1f9c2c1be645b320b94c93a18b35bd0467d1f4e6a55122c. Jan 16 08:59:36.168494 containerd[1470]: time="2025-01-16T08:59:36.168059209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qhh7q,Uid:037bcb7b-4210-4681-b908-3380895e3143,Namespace:kube-system,Attempt:0,} returns sandbox id \"85128c0baea3e741d248d008fc2a587c54637fbd274bd222f19ed8ebad17c805\"" Jan 16 08:59:36.170387 kubelet[2576]: E0116 08:59:36.169762 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:36.174957 containerd[1470]: time="2025-01-16T08:59:36.174697793Z" level=info msg="CreateContainer within sandbox \"85128c0baea3e741d248d008fc2a587c54637fbd274bd222f19ed8ebad17c805\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 08:59:36.196594 containerd[1470]: time="2025-01-16T08:59:36.196533632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jt2j7,Uid:659fc13c-7f86-42f5-8086-17b840e6d5aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dc16c310e7421f6a1f9c2c1be645b320b94c93a18b35bd0467d1f4e6a55122c\"" Jan 16 08:59:36.197441 kubelet[2576]: E0116 08:59:36.197409 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:36.202482 containerd[1470]: time="2025-01-16T08:59:36.202031222Z" level=info msg="CreateContainer within sandbox \"4dc16c310e7421f6a1f9c2c1be645b320b94c93a18b35bd0467d1f4e6a55122c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 08:59:36.214733 containerd[1470]: time="2025-01-16T08:59:36.212145795Z" level=info msg="CreateContainer within sandbox \"85128c0baea3e741d248d008fc2a587c54637fbd274bd222f19ed8ebad17c805\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5669138814c994d2137a4c8986c36824abd948e70b14e3404f7a1bbb21be5c4c\"" Jan 16 08:59:36.221520 containerd[1470]: 
time="2025-01-16T08:59:36.221212128Z" level=info msg="StartContainer for \"5669138814c994d2137a4c8986c36824abd948e70b14e3404f7a1bbb21be5c4c\"" Jan 16 08:59:36.245476 containerd[1470]: time="2025-01-16T08:59:36.245395932Z" level=info msg="CreateContainer within sandbox \"4dc16c310e7421f6a1f9c2c1be645b320b94c93a18b35bd0467d1f4e6a55122c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4147a5e537022423ea3ca8d01d14585db72f40a8a4bf8e2a00921fb3ec42085\"" Jan 16 08:59:36.248051 containerd[1470]: time="2025-01-16T08:59:36.247600130Z" level=info msg="StartContainer for \"d4147a5e537022423ea3ca8d01d14585db72f40a8a4bf8e2a00921fb3ec42085\"" Jan 16 08:59:36.311524 systemd[1]: Started cri-containerd-5669138814c994d2137a4c8986c36824abd948e70b14e3404f7a1bbb21be5c4c.scope - libcontainer container 5669138814c994d2137a4c8986c36824abd948e70b14e3404f7a1bbb21be5c4c. Jan 16 08:59:36.335740 systemd[1]: Started cri-containerd-d4147a5e537022423ea3ca8d01d14585db72f40a8a4bf8e2a00921fb3ec42085.scope - libcontainer container d4147a5e537022423ea3ca8d01d14585db72f40a8a4bf8e2a00921fb3ec42085. Jan 16 08:59:36.388936 containerd[1470]: time="2025-01-16T08:59:36.388332811Z" level=info msg="StartContainer for \"5669138814c994d2137a4c8986c36824abd948e70b14e3404f7a1bbb21be5c4c\" returns successfully" Jan 16 08:59:36.404834 containerd[1470]: time="2025-01-16T08:59:36.404781770Z" level=info msg="StartContainer for \"d4147a5e537022423ea3ca8d01d14585db72f40a8a4bf8e2a00921fb3ec42085\" returns successfully" Jan 16 08:59:37.275152 kubelet[2576]: E0116 08:59:37.275107 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:37.280090 kubelet[2576]: E0116 08:59:37.280049 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:37.296768 kubelet[2576]: I0116 08:59:37.296121 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qhh7q" podStartSLOduration=32.296064389 podStartE2EDuration="32.296064389s" podCreationTimestamp="2025-01-16 08:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:37.295979466 +0000 UTC m=+44.582214168" watchObservedRunningTime="2025-01-16 08:59:37.296064389 +0000 UTC m=+44.582299090" Jan 16 08:59:37.317380 kubelet[2576]: I0116 08:59:37.317328 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jt2j7" podStartSLOduration=32.317268929 podStartE2EDuration="32.317268929s" podCreationTimestamp="2025-01-16 08:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 08:59:37.31580499 +0000 UTC m=+44.602039693" watchObservedRunningTime="2025-01-16 08:59:37.317268929 +0000 UTC m=+44.603503629" Jan 16 08:59:38.282579 kubelet[2576]: E0116 08:59:38.282455 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:38.282579 kubelet[2576]: E0116 08:59:38.282470 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:39.285074 kubelet[2576]: E0116 08:59:39.284665 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 08:59:39.285074 kubelet[2576]: E0116 08:59:39.284741 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:00:08.919599 kubelet[2576]: E0116 09:00:08.919554 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:00:23.920102 kubelet[2576]: E0116 09:00:23.920048 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:00:27.919348 kubelet[2576]: E0116 09:00:27.919025 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:00:27.920738 kubelet[2576]: E0116 09:00:27.920575 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:00:37.919083 kubelet[2576]: E0116 09:00:37.918839 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:00:39.920543 kubelet[2576]: E0116 09:00:39.920455 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:00:54.919312 kubelet[2576]: E0116 09:00:54.918788 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:01:07.919925 kubelet[2576]: E0116 09:01:07.919369 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:01:20.340886 systemd[1]: Started sshd@9-144.126.217.85:22-139.178.68.195:50224.service - OpenSSH per-connection server daemon (139.178.68.195:50224). Jan 16 09:01:20.502333 sshd[3980]: Accepted publickey for core from 139.178.68.195 port 50224 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:01:20.504790 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:01:20.520701 systemd-logind[1447]: New session 10 of user core. Jan 16 09:01:20.533111 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 09:01:21.492677 sshd[3980]: pam_unix(sshd:session): session closed for user core Jan 16 09:01:21.499331 systemd[1]: sshd@9-144.126.217.85:22-139.178.68.195:50224.service: Deactivated successfully. Jan 16 09:01:21.503422 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 09:01:21.504816 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. 
Jan 16 09:01:21.512420 systemd-logind[1447]: Removed session 10.
Jan 16 09:01:22.920271 kubelet[2576]: E0116 09:01:22.919573 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:01:26.505279 systemd[1]: Started sshd@10-144.126.217.85:22-139.178.68.195:44730.service - OpenSSH per-connection server daemon (139.178.68.195:44730).
Jan 16 09:01:26.578280 sshd[3994]: Accepted publickey for core from 139.178.68.195 port 44730 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:26.580721 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:26.596492 systemd-logind[1447]: New session 11 of user core.
Jan 16 09:01:26.603641 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 16 09:01:26.819988 sshd[3994]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:26.836930 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit.
Jan 16 09:01:26.839053 systemd[1]: sshd@10-144.126.217.85:22-139.178.68.195:44730.service: Deactivated successfully.
Jan 16 09:01:26.843629 systemd[1]: session-11.scope: Deactivated successfully.
Jan 16 09:01:26.847268 systemd-logind[1447]: Removed session 11.
Jan 16 09:01:31.843777 systemd[1]: Started sshd@11-144.126.217.85:22-139.178.68.195:44738.service - OpenSSH per-connection server daemon (139.178.68.195:44738).
Jan 16 09:01:31.909711 sshd[4008]: Accepted publickey for core from 139.178.68.195 port 44738 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:31.912147 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:31.920790 systemd-logind[1447]: New session 12 of user core.
Jan 16 09:01:31.927586 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 16 09:01:32.137690 sshd[4008]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:32.147943 systemd[1]: sshd@11-144.126.217.85:22-139.178.68.195:44738.service: Deactivated successfully.
Jan 16 09:01:32.152980 systemd[1]: session-12.scope: Deactivated successfully.
Jan 16 09:01:32.156358 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit.
Jan 16 09:01:32.158799 systemd-logind[1447]: Removed session 12.
Jan 16 09:01:32.920985 kubelet[2576]: E0116 09:01:32.920445 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:01:37.163679 systemd[1]: Started sshd@12-144.126.217.85:22-139.178.68.195:48918.service - OpenSSH per-connection server daemon (139.178.68.195:48918).
Jan 16 09:01:37.275643 sshd[4024]: Accepted publickey for core from 139.178.68.195 port 48918 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:37.279373 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:37.287380 systemd-logind[1447]: New session 13 of user core.
Jan 16 09:01:37.292516 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 16 09:01:37.489356 sshd[4024]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:37.497640 systemd[1]: sshd@12-144.126.217.85:22-139.178.68.195:48918.service: Deactivated successfully.
Jan 16 09:01:37.500699 systemd[1]: session-13.scope: Deactivated successfully.
Jan 16 09:01:37.503131 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit.
Jan 16 09:01:37.505254 systemd-logind[1447]: Removed session 13.
Jan 16 09:01:38.920218 kubelet[2576]: E0116 09:01:38.919906 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:01:42.512746 systemd[1]: Started sshd@13-144.126.217.85:22-139.178.68.195:48922.service - OpenSSH per-connection server daemon (139.178.68.195:48922).
Jan 16 09:01:42.577259 sshd[4038]: Accepted publickey for core from 139.178.68.195 port 48922 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:42.579905 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:42.589855 systemd-logind[1447]: New session 14 of user core.
Jan 16 09:01:42.600635 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 16 09:01:42.780932 sshd[4038]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:42.797625 systemd[1]: sshd@13-144.126.217.85:22-139.178.68.195:48922.service: Deactivated successfully.
Jan 16 09:01:42.800804 systemd[1]: session-14.scope: Deactivated successfully.
Jan 16 09:01:42.803348 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit.
Jan 16 09:01:42.814592 systemd[1]: Started sshd@14-144.126.217.85:22-139.178.68.195:48936.service - OpenSSH per-connection server daemon (139.178.68.195:48936).
Jan 16 09:01:42.816640 systemd-logind[1447]: Removed session 14.
Jan 16 09:01:42.870659 sshd[4052]: Accepted publickey for core from 139.178.68.195 port 48936 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:42.873383 sshd[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:42.881342 systemd-logind[1447]: New session 15 of user core.
Jan 16 09:01:42.893708 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 16 09:01:43.171449 sshd[4052]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:43.189927 systemd[1]: sshd@14-144.126.217.85:22-139.178.68.195:48936.service: Deactivated successfully.
Jan 16 09:01:43.196924 systemd[1]: session-15.scope: Deactivated successfully.
Jan 16 09:01:43.199257 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit.
Jan 16 09:01:43.219375 systemd[1]: Started sshd@15-144.126.217.85:22-139.178.68.195:48944.service - OpenSSH per-connection server daemon (139.178.68.195:48944).
Jan 16 09:01:43.221116 systemd-logind[1447]: Removed session 15.
Jan 16 09:01:43.310250 sshd[4063]: Accepted publickey for core from 139.178.68.195 port 48944 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:43.311761 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:43.323347 systemd-logind[1447]: New session 16 of user core.
Jan 16 09:01:43.332512 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 16 09:01:43.499776 sshd[4063]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:43.506059 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit.
Jan 16 09:01:43.506407 systemd[1]: sshd@15-144.126.217.85:22-139.178.68.195:48944.service: Deactivated successfully.
Jan 16 09:01:43.509093 systemd[1]: session-16.scope: Deactivated successfully.
Jan 16 09:01:43.513142 systemd-logind[1447]: Removed session 16.
Jan 16 09:01:46.920743 kubelet[2576]: E0116 09:01:46.920544 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:01:48.517354 systemd[1]: Started sshd@16-144.126.217.85:22-139.178.68.195:47604.service - OpenSSH per-connection server daemon (139.178.68.195:47604).
Jan 16 09:01:48.623474 sshd[4075]: Accepted publickey for core from 139.178.68.195 port 47604 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:48.628950 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:48.642455 systemd-logind[1447]: New session 17 of user core.
Jan 16 09:01:48.648725 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 16 09:01:48.906578 sshd[4075]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:48.912599 systemd[1]: sshd@16-144.126.217.85:22-139.178.68.195:47604.service: Deactivated successfully.
Jan 16 09:01:48.923046 systemd[1]: session-17.scope: Deactivated successfully.
Jan 16 09:01:48.930586 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit.
Jan 16 09:01:48.932892 systemd-logind[1447]: Removed session 17.
Jan 16 09:01:53.932740 systemd[1]: Started sshd@17-144.126.217.85:22-139.178.68.195:47614.service - OpenSSH per-connection server daemon (139.178.68.195:47614).
Jan 16 09:01:53.996378 sshd[4090]: Accepted publickey for core from 139.178.68.195 port 47614 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:53.997677 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:54.007560 systemd-logind[1447]: New session 18 of user core.
Jan 16 09:01:54.014690 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 16 09:01:54.198720 sshd[4090]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:54.203860 systemd[1]: sshd@17-144.126.217.85:22-139.178.68.195:47614.service: Deactivated successfully.
Jan 16 09:01:54.206792 systemd[1]: session-18.scope: Deactivated successfully.
Jan 16 09:01:54.211843 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit.
Jan 16 09:01:54.214391 systemd-logind[1447]: Removed session 18.
Jan 16 09:01:56.920067 kubelet[2576]: E0116 09:01:56.919962 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:01:59.222746 systemd[1]: Started sshd@18-144.126.217.85:22-139.178.68.195:38208.service - OpenSSH per-connection server daemon (139.178.68.195:38208).
Jan 16 09:01:59.296966 sshd[4103]: Accepted publickey for core from 139.178.68.195 port 38208 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:01:59.299045 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:01:59.306904 systemd-logind[1447]: New session 19 of user core.
Jan 16 09:01:59.316654 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 16 09:01:59.516616 sshd[4103]: pam_unix(sshd:session): session closed for user core
Jan 16 09:01:59.524667 systemd[1]: sshd@18-144.126.217.85:22-139.178.68.195:38208.service: Deactivated successfully.
Jan 16 09:01:59.529032 systemd[1]: session-19.scope: Deactivated successfully.
Jan 16 09:01:59.530601 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit.
Jan 16 09:01:59.532315 systemd-logind[1447]: Removed session 19.
Jan 16 09:02:04.546702 systemd[1]: Started sshd@19-144.126.217.85:22-139.178.68.195:38210.service - OpenSSH per-connection server daemon (139.178.68.195:38210).
Jan 16 09:02:04.597668 sshd[4116]: Accepted publickey for core from 139.178.68.195 port 38210 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:04.600640 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:04.614775 systemd-logind[1447]: New session 20 of user core.
Jan 16 09:02:04.624624 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 16 09:02:04.814870 sshd[4116]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:04.830202 systemd[1]: sshd@19-144.126.217.85:22-139.178.68.195:38210.service: Deactivated successfully.
Jan 16 09:02:04.834960 systemd[1]: session-20.scope: Deactivated successfully.
Jan 16 09:02:04.839139 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Jan 16 09:02:04.843822 systemd[1]: Started sshd@20-144.126.217.85:22-139.178.68.195:42628.service - OpenSSH per-connection server daemon (139.178.68.195:42628).
Jan 16 09:02:04.847494 systemd-logind[1447]: Removed session 20.
Jan 16 09:02:04.915065 sshd[4129]: Accepted publickey for core from 139.178.68.195 port 42628 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:04.919923 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:04.921082 kubelet[2576]: E0116 09:02:04.920296 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:04.931129 systemd-logind[1447]: New session 21 of user core.
Jan 16 09:02:04.941694 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 16 09:02:05.496476 sshd[4129]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:05.518589 systemd[1]: sshd@20-144.126.217.85:22-139.178.68.195:42628.service: Deactivated successfully.
Jan 16 09:02:05.523852 systemd[1]: session-21.scope: Deactivated successfully.
Jan 16 09:02:05.525590 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Jan 16 09:02:05.540053 systemd[1]: Started sshd@21-144.126.217.85:22-139.178.68.195:42642.service - OpenSSH per-connection server daemon (139.178.68.195:42642).
Jan 16 09:02:05.542665 systemd-logind[1447]: Removed session 21.
Jan 16 09:02:05.674681 sshd[4140]: Accepted publickey for core from 139.178.68.195 port 42642 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:05.677606 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:05.686247 systemd-logind[1447]: New session 22 of user core.
Jan 16 09:02:05.693647 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 16 09:02:08.387836 sshd[4140]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:08.412775 systemd[1]: Started sshd@22-144.126.217.85:22-139.178.68.195:42650.service - OpenSSH per-connection server daemon (139.178.68.195:42650).
Jan 16 09:02:08.416105 systemd[1]: sshd@21-144.126.217.85:22-139.178.68.195:42642.service: Deactivated successfully.
Jan 16 09:02:08.442774 systemd[1]: session-22.scope: Deactivated successfully.
Jan 16 09:02:08.453985 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit.
Jan 16 09:02:08.467043 systemd-logind[1447]: Removed session 22.
Jan 16 09:02:08.581628 sshd[4157]: Accepted publickey for core from 139.178.68.195 port 42650 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:08.588819 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:08.606035 systemd-logind[1447]: New session 23 of user core.
Jan 16 09:02:08.625865 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 16 09:02:09.314891 sshd[4157]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:09.330713 systemd[1]: sshd@22-144.126.217.85:22-139.178.68.195:42650.service: Deactivated successfully.
Jan 16 09:02:09.337421 systemd[1]: session-23.scope: Deactivated successfully.
Jan 16 09:02:09.342867 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Jan 16 09:02:09.355908 systemd[1]: Started sshd@23-144.126.217.85:22-139.178.68.195:42666.service - OpenSSH per-connection server daemon (139.178.68.195:42666).
Jan 16 09:02:09.358844 systemd-logind[1447]: Removed session 23.
Jan 16 09:02:09.442460 sshd[4171]: Accepted publickey for core from 139.178.68.195 port 42666 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:09.446531 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:09.456890 systemd-logind[1447]: New session 24 of user core.
Jan 16 09:02:09.463692 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 16 09:02:09.730683 sshd[4171]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:09.738479 systemd[1]: sshd@23-144.126.217.85:22-139.178.68.195:42666.service: Deactivated successfully.
Jan 16 09:02:09.743656 systemd[1]: session-24.scope: Deactivated successfully.
Jan 16 09:02:09.746524 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Jan 16 09:02:09.751610 systemd-logind[1447]: Removed session 24.
Jan 16 09:02:14.752756 systemd[1]: Started sshd@24-144.126.217.85:22-139.178.68.195:50646.service - OpenSSH per-connection server daemon (139.178.68.195:50646).
Jan 16 09:02:14.818925 sshd[4184]: Accepted publickey for core from 139.178.68.195 port 50646 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:14.822947 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:14.833043 systemd-logind[1447]: New session 25 of user core.
Jan 16 09:02:14.840585 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 16 09:02:14.919908 kubelet[2576]: E0116 09:02:14.919813 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:15.022606 sshd[4184]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:15.028639 systemd[1]: sshd@24-144.126.217.85:22-139.178.68.195:50646.service: Deactivated successfully.
Jan 16 09:02:15.032867 systemd[1]: session-25.scope: Deactivated successfully.
Jan 16 09:02:15.036425 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit.
Jan 16 09:02:15.039301 systemd-logind[1447]: Removed session 25.
Jan 16 09:02:18.921353 kubelet[2576]: E0116 09:02:18.919754 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:20.045922 systemd[1]: Started sshd@25-144.126.217.85:22-139.178.68.195:50658.service - OpenSSH per-connection server daemon (139.178.68.195:50658).
Jan 16 09:02:20.143263 sshd[4200]: Accepted publickey for core from 139.178.68.195 port 50658 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:20.146923 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:20.157840 systemd-logind[1447]: New session 26 of user core.
Jan 16 09:02:20.166223 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 16 09:02:20.351882 sshd[4200]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:20.360490 systemd[1]: sshd@25-144.126.217.85:22-139.178.68.195:50658.service: Deactivated successfully.
Jan 16 09:02:20.364345 systemd[1]: session-26.scope: Deactivated successfully.
Jan 16 09:02:20.366345 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit.
Jan 16 09:02:20.368000 systemd-logind[1447]: Removed session 26.
Jan 16 09:02:23.920954 kubelet[2576]: E0116 09:02:23.919013 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:25.390524 systemd[1]: Started sshd@26-144.126.217.85:22-139.178.68.195:41606.service - OpenSSH per-connection server daemon (139.178.68.195:41606).
Jan 16 09:02:25.450032 sshd[4213]: Accepted publickey for core from 139.178.68.195 port 41606 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:25.458151 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:25.471935 systemd-logind[1447]: New session 27 of user core.
Jan 16 09:02:25.486813 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 16 09:02:25.673647 sshd[4213]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:25.681698 systemd[1]: sshd@26-144.126.217.85:22-139.178.68.195:41606.service: Deactivated successfully.
Jan 16 09:02:25.687408 systemd[1]: session-27.scope: Deactivated successfully.
Jan 16 09:02:25.688610 systemd-logind[1447]: Session 27 logged out. Waiting for processes to exit.
Jan 16 09:02:25.690716 systemd-logind[1447]: Removed session 27.
Jan 16 09:02:30.698131 systemd[1]: Started sshd@27-144.126.217.85:22-139.178.68.195:41612.service - OpenSSH per-connection server daemon (139.178.68.195:41612).
Jan 16 09:02:30.766280 sshd[4225]: Accepted publickey for core from 139.178.68.195 port 41612 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:30.769628 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:30.778126 systemd-logind[1447]: New session 28 of user core.
Jan 16 09:02:30.786834 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 16 09:02:30.964533 sshd[4225]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:30.979546 systemd[1]: sshd@27-144.126.217.85:22-139.178.68.195:41612.service: Deactivated successfully.
Jan 16 09:02:30.983425 systemd[1]: session-28.scope: Deactivated successfully.
Jan 16 09:02:30.986907 systemd-logind[1447]: Session 28 logged out. Waiting for processes to exit.
Jan 16 09:02:30.993849 systemd[1]: Started sshd@28-144.126.217.85:22-139.178.68.195:41622.service - OpenSSH per-connection server daemon (139.178.68.195:41622).
Jan 16 09:02:30.994900 systemd-logind[1447]: Removed session 28.
Jan 16 09:02:31.061154 sshd[4238]: Accepted publickey for core from 139.178.68.195 port 41622 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:31.064050 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:31.072686 systemd-logind[1447]: New session 29 of user core.
Jan 16 09:02:31.078791 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 16 09:02:32.798485 containerd[1470]: time="2025-01-16T09:02:32.796384693Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 16 09:02:32.999014 containerd[1470]: time="2025-01-16T09:02:32.995545676Z" level=info msg="StopContainer for \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\" with timeout 30 (s)"
Jan 16 09:02:33.001441 containerd[1470]: time="2025-01-16T09:02:32.999530334Z" level=info msg="StopContainer for \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\" with timeout 2 (s)"
Jan 16 09:02:33.004541 containerd[1470]: time="2025-01-16T09:02:33.003267923Z" level=info msg="Stop container \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\" with signal terminated"
Jan 16 09:02:33.005140 containerd[1470]: time="2025-01-16T09:02:33.005073822Z" level=info msg="Stop container \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\" with signal terminated"
Jan 16 09:02:33.033969 systemd-networkd[1372]: lxc_health: Link DOWN
Jan 16 09:02:33.034988 systemd-networkd[1372]: lxc_health: Lost carrier
Jan 16 09:02:33.086935 systemd[1]: cri-containerd-aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3.scope: Deactivated successfully.
Jan 16 09:02:33.089592 systemd[1]: cri-containerd-f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11.scope: Deactivated successfully.
Jan 16 09:02:33.089956 systemd[1]: cri-containerd-f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11.scope: Consumed 10.599s CPU time.
Jan 16 09:02:33.165665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11-rootfs.mount: Deactivated successfully.
Jan 16 09:02:33.174051 containerd[1470]: time="2025-01-16T09:02:33.173628364Z" level=info msg="shim disconnected" id=f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11 namespace=k8s.io
Jan 16 09:02:33.174051 containerd[1470]: time="2025-01-16T09:02:33.173721557Z" level=warning msg="cleaning up after shim disconnected" id=f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11 namespace=k8s.io
Jan 16 09:02:33.174051 containerd[1470]: time="2025-01-16T09:02:33.173734062Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:33.175232 containerd[1470]: time="2025-01-16T09:02:33.174916237Z" level=info msg="shim disconnected" id=aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3 namespace=k8s.io
Jan 16 09:02:33.175232 containerd[1470]: time="2025-01-16T09:02:33.175038573Z" level=warning msg="cleaning up after shim disconnected" id=aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3 namespace=k8s.io
Jan 16 09:02:33.175232 containerd[1470]: time="2025-01-16T09:02:33.175055246Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:33.180787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3-rootfs.mount: Deactivated successfully.
Jan 16 09:02:33.197355 kubelet[2576]: E0116 09:02:33.197154 2576 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 16 09:02:33.252711 containerd[1470]: time="2025-01-16T09:02:33.250079923Z" level=info msg="StopContainer for \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\" returns successfully"
Jan 16 09:02:33.254782 containerd[1470]: time="2025-01-16T09:02:33.253718081Z" level=info msg="StopContainer for \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\" returns successfully"
Jan 16 09:02:33.256591 containerd[1470]: time="2025-01-16T09:02:33.254153660Z" level=info msg="StopPodSandbox for \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\""
Jan 16 09:02:33.256591 containerd[1470]: time="2025-01-16T09:02:33.255757874Z" level=info msg="Container to stop \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 16 09:02:33.257545 containerd[1470]: time="2025-01-16T09:02:33.257090004Z" level=info msg="StopPodSandbox for \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\""
Jan 16 09:02:33.257545 containerd[1470]: time="2025-01-16T09:02:33.257167755Z" level=info msg="Container to stop \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 16 09:02:33.258117 containerd[1470]: time="2025-01-16T09:02:33.257898015Z" level=info msg="Container to stop \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 16 09:02:33.258117 containerd[1470]: time="2025-01-16T09:02:33.257945950Z" level=info msg="Container to stop \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 16 09:02:33.258117 containerd[1470]: time="2025-01-16T09:02:33.257963943Z" level=info msg="Container to stop \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 16 09:02:33.258117 containerd[1470]: time="2025-01-16T09:02:33.257983869Z" level=info msg="Container to stop \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 16 09:02:33.266617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d-shm.mount: Deactivated successfully.
Jan 16 09:02:33.266853 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b-shm.mount: Deactivated successfully.
Jan 16 09:02:33.286803 systemd[1]: cri-containerd-5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d.scope: Deactivated successfully.
Jan 16 09:02:33.293522 systemd[1]: cri-containerd-e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b.scope: Deactivated successfully.
Jan 16 09:02:33.353421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d-rootfs.mount: Deactivated successfully.
Jan 16 09:02:33.367696 containerd[1470]: time="2025-01-16T09:02:33.367575129Z" level=info msg="shim disconnected" id=5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d namespace=k8s.io
Jan 16 09:02:33.367696 containerd[1470]: time="2025-01-16T09:02:33.367662679Z" level=warning msg="cleaning up after shim disconnected" id=5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d namespace=k8s.io
Jan 16 09:02:33.367696 containerd[1470]: time="2025-01-16T09:02:33.367676608Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:33.382162 containerd[1470]: time="2025-01-16T09:02:33.381641716Z" level=info msg="shim disconnected" id=e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b namespace=k8s.io
Jan 16 09:02:33.382162 containerd[1470]: time="2025-01-16T09:02:33.381989547Z" level=warning msg="cleaning up after shim disconnected" id=e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b namespace=k8s.io
Jan 16 09:02:33.382162 containerd[1470]: time="2025-01-16T09:02:33.382009583Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:33.407636 containerd[1470]: time="2025-01-16T09:02:33.406455729Z" level=warning msg="cleanup warnings time=\"2025-01-16T09:02:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 16 09:02:33.419617 containerd[1470]: time="2025-01-16T09:02:33.419544136Z" level=warning msg="cleanup warnings time=\"2025-01-16T09:02:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 16 09:02:33.422899 containerd[1470]: time="2025-01-16T09:02:33.422849239Z" level=info msg="TearDown network for sandbox \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\" successfully"
Jan 16 09:02:33.423114 containerd[1470]: time="2025-01-16T09:02:33.423094662Z" level=info msg="StopPodSandbox for \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\" returns successfully"
Jan 16 09:02:33.424942 containerd[1470]: time="2025-01-16T09:02:33.424874601Z" level=info msg="TearDown network for sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" successfully"
Jan 16 09:02:33.424942 containerd[1470]: time="2025-01-16T09:02:33.424926976Z" level=info msg="StopPodSandbox for \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" returns successfully"
Jan 16 09:02:33.587021 kubelet[2576]: I0116 09:02:33.586972 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9wgj\" (UniqueName: \"kubernetes.io/projected/938451e7-e794-44ec-960d-dec7c3802882-kube-api-access-r9wgj\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.587419 kubelet[2576]: I0116 09:02:33.587393 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c7a480d-afae-4554-a827-993537b8fd59-cilium-config-path\") pod \"6c7a480d-afae-4554-a827-993537b8fd59\" (UID: \"6c7a480d-afae-4554-a827-993537b8fd59\") "
Jan 16 09:02:33.587561 kubelet[2576]: I0116 09:02:33.587549 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-host-proc-sys-net\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588436 kubelet[2576]: I0116 09:02:33.587648 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cilium-cgroup\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588436 kubelet[2576]: I0116 09:02:33.587685 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-hostproc\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588436 kubelet[2576]: I0116 09:02:33.587711 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbp59\" (UniqueName: \"kubernetes.io/projected/6c7a480d-afae-4554-a827-993537b8fd59-kube-api-access-qbp59\") pod \"6c7a480d-afae-4554-a827-993537b8fd59\" (UID: \"6c7a480d-afae-4554-a827-993537b8fd59\") "
Jan 16 09:02:33.588436 kubelet[2576]: I0116 09:02:33.587730 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-lib-modules\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588436 kubelet[2576]: I0116 09:02:33.587762 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/938451e7-e794-44ec-960d-dec7c3802882-clustermesh-secrets\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588436 kubelet[2576]: I0116 09:02:33.587805 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/938451e7-e794-44ec-960d-dec7c3802882-hubble-tls\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588951 kubelet[2576]: I0116 09:02:33.587832 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cni-path\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588951 kubelet[2576]: I0116 09:02:33.587870 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/938451e7-e794-44ec-960d-dec7c3802882-cilium-config-path\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588951 kubelet[2576]: I0116 09:02:33.587903 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-bpf-maps\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588951 kubelet[2576]: I0116 09:02:33.587929 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-xtables-lock\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588951 kubelet[2576]: I0116 09:02:33.587958 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cilium-run\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.588951 kubelet[2576]: I0116 09:02:33.587987 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-host-proc-sys-kernel\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.589603 kubelet[2576]: I0116 09:02:33.588045 2576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-etc-cni-netd\") pod \"938451e7-e794-44ec-960d-dec7c3802882\" (UID: \"938451e7-e794-44ec-960d-dec7c3802882\") "
Jan 16 09:02:33.592221 kubelet[2576]: I0116 09:02:33.588163 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.592763 kubelet[2576]: I0116 09:02:33.592685 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938451e7-e794-44ec-960d-dec7c3802882-kube-api-access-r9wgj" (OuterVolumeSpecName: "kube-api-access-r9wgj") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "kube-api-access-r9wgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 16 09:02:33.595739 kubelet[2576]: I0116 09:02:33.595658 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c7a480d-afae-4554-a827-993537b8fd59-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c7a480d-afae-4554-a827-993537b8fd59" (UID: "6c7a480d-afae-4554-a827-993537b8fd59"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 16 09:02:33.595901 kubelet[2576]: I0116 09:02:33.595790 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cni-path" (OuterVolumeSpecName: "cni-path") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.597965 kubelet[2576]: I0116 09:02:33.597893 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938451e7-e794-44ec-960d-dec7c3802882-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 16 09:02:33.598313 kubelet[2576]: I0116 09:02:33.598277 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.598465 kubelet[2576]: I0116 09:02:33.598444 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.598581 kubelet[2576]: I0116 09:02:33.598560 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-hostproc" (OuterVolumeSpecName: "hostproc") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.600993 kubelet[2576]: I0116 09:02:33.600911 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/938451e7-e794-44ec-960d-dec7c3802882-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 16 09:02:33.615114 kubelet[2576]: I0116 09:02:33.610909 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.615114 kubelet[2576]: I0116 09:02:33.610973 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.615114 kubelet[2576]: I0116 09:02:33.610997 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.615114 kubelet[2576]: I0116 09:02:33.611019 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.615114 kubelet[2576]: I0116 09:02:33.611046 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 16 09:02:33.618966 kubelet[2576]: I0116 09:02:33.618901 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7a480d-afae-4554-a827-993537b8fd59-kube-api-access-qbp59" (OuterVolumeSpecName: "kube-api-access-qbp59") pod "6c7a480d-afae-4554-a827-993537b8fd59" (UID: "6c7a480d-afae-4554-a827-993537b8fd59"). InnerVolumeSpecName "kube-api-access-qbp59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 16 09:02:33.619178 kubelet[2576]: I0116 09:02:33.618998 2576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938451e7-e794-44ec-960d-dec7c3802882-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "938451e7-e794-44ec-960d-dec7c3802882" (UID: "938451e7-e794-44ec-960d-dec7c3802882"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 16 09:02:33.688451 kubelet[2576]: I0116 09:02:33.688369 2576 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-host-proc-sys-net\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688451 kubelet[2576]: I0116 09:02:33.688446 2576 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cilium-cgroup\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688451 kubelet[2576]: I0116 09:02:33.688473 2576 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-lib-modules\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688451 kubelet[2576]: I0116 09:02:33.688492 2576 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-hostproc\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688846 kubelet[2576]: I0116 09:02:33.688509 2576 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qbp59\" (UniqueName: \"kubernetes.io/projected/6c7a480d-afae-4554-a827-993537b8fd59-kube-api-access-qbp59\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688846 kubelet[2576]: I0116 09:02:33.688525 2576 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/938451e7-e794-44ec-960d-dec7c3802882-clustermesh-secrets\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688846 kubelet[2576]: I0116 09:02:33.688540 2576 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/938451e7-e794-44ec-960d-dec7c3802882-hubble-tls\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688846 kubelet[2576]: I0116 09:02:33.688556 2576 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cni-path\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688846 kubelet[2576]: I0116 09:02:33.688572 2576 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-bpf-maps\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688846 kubelet[2576]: I0116 09:02:33.688587 2576 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-xtables-lock\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688846 kubelet[2576]: I0116 09:02:33.688602 2576 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-cilium-run\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.688846 kubelet[2576]: I0116 09:02:33.688614 2576 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-host-proc-sys-kernel\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.689382 kubelet[2576]: I0116 09:02:33.688879 2576 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/938451e7-e794-44ec-960d-dec7c3802882-etc-cni-netd\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.689382 kubelet[2576]: I0116 09:02:33.688902 2576 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/938451e7-e794-44ec-960d-dec7c3802882-cilium-config-path\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.689382 kubelet[2576]: I0116 09:02:33.688935 2576 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r9wgj\" (UniqueName: \"kubernetes.io/projected/938451e7-e794-44ec-960d-dec7c3802882-kube-api-access-r9wgj\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.689382 kubelet[2576]: I0116 09:02:33.688952 2576 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c7a480d-afae-4554-a827-993537b8fd59-cilium-config-path\") on node \"ci-4081.3.0-f-6fcf2fe32d\" DevicePath \"\""
Jan 16 09:02:33.734537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b-rootfs.mount: Deactivated successfully.
Jan 16 09:02:33.734696 systemd[1]: var-lib-kubelet-pods-6c7a480d\x2dafae\x2d4554\x2da827\x2d993537b8fd59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqbp59.mount: Deactivated successfully.
Jan 16 09:02:33.734789 systemd[1]: var-lib-kubelet-pods-938451e7\x2de794\x2d44ec\x2d960d\x2ddec7c3802882-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9wgj.mount: Deactivated successfully.
Jan 16 09:02:33.734902 systemd[1]: var-lib-kubelet-pods-938451e7\x2de794\x2d44ec\x2d960d\x2ddec7c3802882-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 16 09:02:33.734990 systemd[1]: var-lib-kubelet-pods-938451e7\x2de794\x2d44ec\x2d960d\x2ddec7c3802882-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 16 09:02:33.927197 kubelet[2576]: I0116 09:02:33.926164 2576 scope.go:117] "RemoveContainer" containerID="f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11"
Jan 16 09:02:33.934988 systemd[1]: Removed slice kubepods-burstable-pod938451e7_e794_44ec_960d_dec7c3802882.slice - libcontainer container kubepods-burstable-pod938451e7_e794_44ec_960d_dec7c3802882.slice.
Jan 16 09:02:33.935119 systemd[1]: kubepods-burstable-pod938451e7_e794_44ec_960d_dec7c3802882.slice: Consumed 10.715s CPU time.
Jan 16 09:02:33.953681 containerd[1470]: time="2025-01-16T09:02:33.953578598Z" level=info msg="RemoveContainer for \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\""
Jan 16 09:02:33.965805 containerd[1470]: time="2025-01-16T09:02:33.965734049Z" level=info msg="RemoveContainer for \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\" returns successfully"
Jan 16 09:02:33.972309 systemd[1]: Removed slice kubepods-besteffort-pod6c7a480d_afae_4554_a827_993537b8fd59.slice - libcontainer container kubepods-besteffort-pod6c7a480d_afae_4554_a827_993537b8fd59.slice.
Jan 16 09:02:33.987782 kubelet[2576]: I0116 09:02:33.987726 2576 scope.go:117] "RemoveContainer" containerID="39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f"
Jan 16 09:02:33.992736 containerd[1470]: time="2025-01-16T09:02:33.992622641Z" level=info msg="RemoveContainer for \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\""
Jan 16 09:02:34.001383 containerd[1470]: time="2025-01-16T09:02:34.001269294Z" level=info msg="RemoveContainer for \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\" returns successfully"
Jan 16 09:02:34.004081 kubelet[2576]: I0116 09:02:34.004004 2576 scope.go:117] "RemoveContainer" containerID="b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5"
Jan 16 09:02:34.015039 containerd[1470]: time="2025-01-16T09:02:34.014110607Z" level=info msg="RemoveContainer for \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\""
Jan 16 09:02:34.025553 containerd[1470]: time="2025-01-16T09:02:34.025473530Z" level=info msg="RemoveContainer for \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\" returns successfully"
Jan 16 09:02:34.027342 kubelet[2576]: I0116 09:02:34.027275 2576 scope.go:117] "RemoveContainer" containerID="88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679"
Jan 16 09:02:34.032239 containerd[1470]: time="2025-01-16T09:02:34.032030660Z" level=info msg="RemoveContainer for \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\""
Jan 16 09:02:34.038315 containerd[1470]: time="2025-01-16T09:02:34.037734046Z" level=info msg="RemoveContainer for \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\" returns successfully"
Jan 16 09:02:34.041636 kubelet[2576]: I0116 09:02:34.041226 2576 scope.go:117] "RemoveContainer" containerID="2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb"
Jan 16 09:02:34.048409 containerd[1470]: time="2025-01-16T09:02:34.048268961Z" level=info msg="RemoveContainer for \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\""
Jan 16 09:02:34.052997 containerd[1470]: time="2025-01-16T09:02:34.052707417Z" level=info msg="RemoveContainer for \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\" returns successfully"
Jan 16 09:02:34.053511 kubelet[2576]: I0116 09:02:34.053461 2576 scope.go:117] "RemoveContainer" containerID="f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11"
Jan 16 09:02:34.083225 containerd[1470]: time="2025-01-16T09:02:34.058327443Z" level=error msg="ContainerStatus for \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\": not found"
Jan 16 09:02:34.083483 kubelet[2576]: E0116 09:02:34.083371 2576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\": not found" containerID="f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11"
Jan 16 09:02:34.105086 kubelet[2576]: I0116 09:02:34.102638 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11"} err="failed to get container status \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8562926a369dcda7e549cccbaac7655c43511cf0be07bd12ff3a87a4b1fcc11\": not found"
Jan 16 09:02:34.105086 kubelet[2576]: I0116 09:02:34.102696 2576 scope.go:117] "RemoveContainer" containerID="39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f"
Jan 16 09:02:34.108684 containerd[1470]: time="2025-01-16T09:02:34.108487124Z" level=error msg="ContainerStatus for \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\": not found"
Jan 16 09:02:34.108952 kubelet[2576]: E0116 09:02:34.108827 2576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\": not found" containerID="39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f"
Jan 16 09:02:34.108952 kubelet[2576]: I0116 09:02:34.108904 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f"} err="failed to get container status \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"39e0e6dae4ed32b3569b19f5542d6d3d3486662e178bf1b804ec0e14a9bd1c3f\": not found"
Jan 16 09:02:34.108952 kubelet[2576]: I0116 09:02:34.108950 2576 scope.go:117] "RemoveContainer" containerID="b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5"
Jan 16 09:02:34.109654 containerd[1470]: time="2025-01-16T09:02:34.109563490Z" level=error msg="ContainerStatus for \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\": not found"
Jan 16 09:02:34.110266 kubelet[2576]: E0116 09:02:34.109963 2576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\": not found" containerID="b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5"
Jan 16 09:02:34.110266 kubelet[2576]: I0116 09:02:34.110033 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5"} err="failed to get container status \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9476dd0ec03f0f429c15888a9670efba20f84caee2973c4f59a2a31cdd519c5\": not found"
Jan 16 09:02:34.110266 kubelet[2576]: I0116 09:02:34.110057 2576 scope.go:117] "RemoveContainer" containerID="88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679"
Jan 16 09:02:34.110581 containerd[1470]: time="2025-01-16T09:02:34.110462040Z" level=error msg="ContainerStatus for \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\": not found"
Jan 16 09:02:34.110779 kubelet[2576]: E0116 09:02:34.110752 2576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\": not found" containerID="88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679"
Jan 16 09:02:34.110849 kubelet[2576]: I0116 09:02:34.110808 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679"} err="failed to get container status \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\": rpc error: code = NotFound desc = an error occurred when try to find container \"88c6513cefbeb7b12b724027dff9b0d0b6cad0a71f6bd794ba5bc1de21b34679\": not found"
Jan 16 09:02:34.110849 kubelet[2576]: I0116 09:02:34.110829 2576 scope.go:117] "RemoveContainer" containerID="2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb"
Jan 16 09:02:34.111272 containerd[1470]: time="2025-01-16T09:02:34.111124194Z" level=error msg="ContainerStatus for \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\": not found"
Jan 16 09:02:34.111368 kubelet[2576]: E0116 09:02:34.111345 2576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\": not found" containerID="2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb"
Jan 16 09:02:34.111434 kubelet[2576]: I0116 09:02:34.111386 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb"} err="failed to get container status \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\": rpc error: code = NotFound desc = an error occurred when try to find container \"2106d9ced44bf11f07161e0f79d6f885d9d647c51b79621bc387a4f7b48b7acb\": not found"
Jan 16 09:02:34.111434 kubelet[2576]: I0116 09:02:34.111404 2576 scope.go:117] "RemoveContainer" containerID="aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3"
Jan 16 09:02:34.113828 containerd[1470]: time="2025-01-16T09:02:34.113333810Z" level=info msg="RemoveContainer for \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\""
Jan 16 09:02:34.117519 containerd[1470]: time="2025-01-16T09:02:34.117460860Z" level=info msg="RemoveContainer for \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\" returns successfully"
Jan 16 09:02:34.118855 kubelet[2576]: I0116 09:02:34.118816 2576 scope.go:117] "RemoveContainer" containerID="aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3"
Jan 16 09:02:34.119929 containerd[1470]: time="2025-01-16T09:02:34.119842648Z" level=error msg="ContainerStatus for \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\": not found"
Jan 16 09:02:34.120254 kubelet[2576]: E0116 09:02:34.120217 2576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\": not found" containerID="aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3"
Jan 16 09:02:34.120381 kubelet[2576]: I0116 09:02:34.120283 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3"} err="failed to get container status \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa1efd5d86f7d0c80d577d123b2c8cbf816633ce2b3d7c033475b32f4ce0a9d3\": not found"
Jan 16 09:02:34.605689 sshd[4238]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:34.621547 systemd[1]: sshd@28-144.126.217.85:22-139.178.68.195:41622.service: Deactivated successfully.
Jan 16 09:02:34.630209 systemd[1]: session-29.scope: Deactivated successfully.
Jan 16 09:02:34.637909 systemd-logind[1447]: Session 29 logged out. Waiting for processes to exit.
Jan 16 09:02:34.648664 systemd[1]: Started sshd@29-144.126.217.85:22-139.178.68.195:41634.service - OpenSSH per-connection server daemon (139.178.68.195:41634).
Jan 16 09:02:34.651120 systemd-logind[1447]: Removed session 29.
Jan 16 09:02:34.718222 sshd[4398]: Accepted publickey for core from 139.178.68.195 port 41634 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:34.720046 sshd[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:34.732318 systemd-logind[1447]: New session 30 of user core.
Jan 16 09:02:34.735692 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 16 09:02:34.927396 kubelet[2576]: I0116 09:02:34.927347 2576 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6c7a480d-afae-4554-a827-993537b8fd59" path="/var/lib/kubelet/pods/6c7a480d-afae-4554-a827-993537b8fd59/volumes"
Jan 16 09:02:34.928049 kubelet[2576]: I0116 09:02:34.927973 2576 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="938451e7-e794-44ec-960d-dec7c3802882" path="/var/lib/kubelet/pods/938451e7-e794-44ec-960d-dec7c3802882/volumes"
Jan 16 09:02:35.992761 sshd[4398]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:36.017134 systemd[1]: sshd@29-144.126.217.85:22-139.178.68.195:41634.service: Deactivated successfully.
Jan 16 09:02:36.023688 systemd[1]: session-30.scope: Deactivated successfully.
Jan 16 09:02:36.026398 systemd-logind[1447]: Session 30 logged out. Waiting for processes to exit.
Jan 16 09:02:36.041162 systemd[1]: Started sshd@30-144.126.217.85:22-139.178.68.195:34950.service - OpenSSH per-connection server daemon (139.178.68.195:34950).
Jan 16 09:02:36.046312 systemd-logind[1447]: Removed session 30.
Jan 16 09:02:36.114950 kubelet[2576]: I0116 09:02:36.114467 2576 topology_manager.go:215] "Topology Admit Handler" podUID="2373b100-ae8b-49ad-919b-691d97c4a9ff" podNamespace="kube-system" podName="cilium-brrsk"
Jan 16 09:02:36.123719 kubelet[2576]: E0116 09:02:36.123063 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="938451e7-e794-44ec-960d-dec7c3802882" containerName="mount-cgroup"
Jan 16 09:02:36.123719 kubelet[2576]: E0116 09:02:36.123120 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="938451e7-e794-44ec-960d-dec7c3802882" containerName="mount-bpf-fs"
Jan 16 09:02:36.123719 kubelet[2576]: E0116 09:02:36.123135 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="938451e7-e794-44ec-960d-dec7c3802882" containerName="cilium-agent"
Jan 16 09:02:36.123719 kubelet[2576]: E0116 09:02:36.123148 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="938451e7-e794-44ec-960d-dec7c3802882" containerName="clean-cilium-state"
Jan 16 09:02:36.123719 kubelet[2576]: E0116 09:02:36.123160 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c7a480d-afae-4554-a827-993537b8fd59" containerName="cilium-operator"
Jan 16 09:02:36.123719 kubelet[2576]: E0116 09:02:36.123171 2576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="938451e7-e794-44ec-960d-dec7c3802882" containerName="apply-sysctl-overwrites"
Jan 16 09:02:36.123719 kubelet[2576]: I0116 09:02:36.123272 2576 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7a480d-afae-4554-a827-993537b8fd59" containerName="cilium-operator"
Jan 16 09:02:36.123719 kubelet[2576]: I0116 09:02:36.123285 2576 memory_manager.go:354] "RemoveStaleState removing state" podUID="938451e7-e794-44ec-960d-dec7c3802882" containerName="cilium-agent"
Jan 16 09:02:36.132212 sshd[4412]: Accepted publickey for core from 139.178.68.195 port 34950 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:36.136445 sshd[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:36.164409 systemd-logind[1447]: New session 31 of user core.
Jan 16 09:02:36.174018 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 16 09:02:36.226849 systemd[1]: Created slice kubepods-burstable-pod2373b100_ae8b_49ad_919b_691d97c4a9ff.slice - libcontainer container kubepods-burstable-pod2373b100_ae8b_49ad_919b_691d97c4a9ff.slice.
Jan 16 09:02:36.258613 sshd[4412]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:36.274257 systemd[1]: sshd@30-144.126.217.85:22-139.178.68.195:34950.service: Deactivated successfully.
Jan 16 09:02:36.283059 systemd[1]: session-31.scope: Deactivated successfully.
Jan 16 09:02:36.289605 systemd-logind[1447]: Session 31 logged out. Waiting for processes to exit.
Jan 16 09:02:36.297965 systemd[1]: Started sshd@31-144.126.217.85:22-139.178.68.195:34960.service - OpenSSH per-connection server daemon (139.178.68.195:34960).
Jan 16 09:02:36.301608 systemd-logind[1447]: Removed session 31.
Jan 16 09:02:36.317719 kubelet[2576]: I0116 09:02:36.317632 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2373b100-ae8b-49ad-919b-691d97c4a9ff-cilium-config-path\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323080 kubelet[2576]: I0116 09:02:36.322901 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-host-proc-sys-net\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323080 kubelet[2576]: I0116 09:02:36.323027 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-xtables-lock\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323080 kubelet[2576]: I0116 09:02:36.323065 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2373b100-ae8b-49ad-919b-691d97c4a9ff-clustermesh-secrets\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323080 kubelet[2576]: I0116 09:02:36.323104 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-cni-path\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323585 kubelet[2576]: I0116 09:02:36.323135 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2373b100-ae8b-49ad-919b-691d97c4a9ff-cilium-ipsec-secrets\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323585 kubelet[2576]: I0116 09:02:36.323164 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-host-proc-sys-kernel\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323585 kubelet[2576]: I0116 09:02:36.323220 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cspqd\" (UniqueName: \"kubernetes.io/projected/2373b100-ae8b-49ad-919b-691d97c4a9ff-kube-api-access-cspqd\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323585 kubelet[2576]: I0116 09:02:36.323253 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-hostproc\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323585 kubelet[2576]: I0116 09:02:36.323283 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-cilium-cgroup\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323585 kubelet[2576]: I0116 09:02:36.323314 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-bpf-maps\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323925 kubelet[2576]: I0116 09:02:36.323345 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-cilium-run\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323925 kubelet[2576]: I0116 09:02:36.323373 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-etc-cni-netd\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323925 kubelet[2576]: I0116 09:02:36.323400 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2373b100-ae8b-49ad-919b-691d97c4a9ff-lib-modules\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.323925 kubelet[2576]: I0116 09:02:36.323431 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2373b100-ae8b-49ad-919b-691d97c4a9ff-hubble-tls\") pod \"cilium-brrsk\" (UID: \"2373b100-ae8b-49ad-919b-691d97c4a9ff\") " pod="kube-system/cilium-brrsk"
Jan 16 09:02:36.392927 sshd[4420]: Accepted publickey for core from 139.178.68.195 port 34960 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:02:36.394590 sshd[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:02:36.404713 systemd-logind[1447]: New session 32 of user core.
Jan 16 09:02:36.417815 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 16 09:02:36.557772 kubelet[2576]: E0116 09:02:36.557586 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:36.563238 containerd[1470]: time="2025-01-16T09:02:36.562368076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brrsk,Uid:2373b100-ae8b-49ad-919b-691d97c4a9ff,Namespace:kube-system,Attempt:0,}"
Jan 16 09:02:36.634010 containerd[1470]: time="2025-01-16T09:02:36.633739970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 09:02:36.634010 containerd[1470]: time="2025-01-16T09:02:36.633870325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 09:02:36.636994 containerd[1470]: time="2025-01-16T09:02:36.633946417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:02:36.636994 containerd[1470]: time="2025-01-16T09:02:36.634718765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:02:36.688131 systemd[1]: Started cri-containerd-e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c.scope - libcontainer container e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c.
Jan 16 09:02:36.752166 containerd[1470]: time="2025-01-16T09:02:36.751786283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brrsk,Uid:2373b100-ae8b-49ad-919b-691d97c4a9ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\""
Jan 16 09:02:36.754276 kubelet[2576]: E0116 09:02:36.754069 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:36.767128 containerd[1470]: time="2025-01-16T09:02:36.766949270Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 16 09:02:36.793827 containerd[1470]: time="2025-01-16T09:02:36.793639810Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"52d25fb6653c2d0b10675c8e3c6462fe60571d52268fdc675be5a0ca607f7df5\""
Jan 16 09:02:36.795757 containerd[1470]: time="2025-01-16T09:02:36.794958268Z" level=info msg="StartContainer for \"52d25fb6653c2d0b10675c8e3c6462fe60571d52268fdc675be5a0ca607f7df5\""
Jan 16 09:02:36.864584 systemd[1]: Started cri-containerd-52d25fb6653c2d0b10675c8e3c6462fe60571d52268fdc675be5a0ca607f7df5.scope - libcontainer container 52d25fb6653c2d0b10675c8e3c6462fe60571d52268fdc675be5a0ca607f7df5.
Jan 16 09:02:36.943869 containerd[1470]: time="2025-01-16T09:02:36.943340788Z" level=info msg="StartContainer for \"52d25fb6653c2d0b10675c8e3c6462fe60571d52268fdc675be5a0ca607f7df5\" returns successfully"
Jan 16 09:02:36.977074 kubelet[2576]: E0116 09:02:36.977028 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:36.997570 systemd[1]: cri-containerd-52d25fb6653c2d0b10675c8e3c6462fe60571d52268fdc675be5a0ca607f7df5.scope: Deactivated successfully.
Jan 16 09:02:37.073906 containerd[1470]: time="2025-01-16T09:02:37.073757186Z" level=info msg="shim disconnected" id=52d25fb6653c2d0b10675c8e3c6462fe60571d52268fdc675be5a0ca607f7df5 namespace=k8s.io
Jan 16 09:02:37.074799 containerd[1470]: time="2025-01-16T09:02:37.074419932Z" level=warning msg="cleaning up after shim disconnected" id=52d25fb6653c2d0b10675c8e3c6462fe60571d52268fdc675be5a0ca607f7df5 namespace=k8s.io
Jan 16 09:02:37.074799 containerd[1470]: time="2025-01-16T09:02:37.074461327Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:37.982754 kubelet[2576]: E0116 09:02:37.982717 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:37.988668 containerd[1470]: time="2025-01-16T09:02:37.988593848Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 16 09:02:38.022458 containerd[1470]: time="2025-01-16T09:02:38.022381905Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5a44648b560d489f52168a71db988ea3eed0dc6083a637e053ad72eb5635d3f9\""
Jan 16 09:02:38.025412 containerd[1470]: time="2025-01-16T09:02:38.024656091Z" level=info msg="StartContainer for \"5a44648b560d489f52168a71db988ea3eed0dc6083a637e053ad72eb5635d3f9\""
Jan 16 09:02:38.121236 systemd[1]: Started cri-containerd-5a44648b560d489f52168a71db988ea3eed0dc6083a637e053ad72eb5635d3f9.scope - libcontainer container 5a44648b560d489f52168a71db988ea3eed0dc6083a637e053ad72eb5635d3f9.
Jan 16 09:02:38.199864 kubelet[2576]: E0116 09:02:38.199766 2576 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 16 09:02:38.258004 containerd[1470]: time="2025-01-16T09:02:38.256685972Z" level=info msg="StartContainer for \"5a44648b560d489f52168a71db988ea3eed0dc6083a637e053ad72eb5635d3f9\" returns successfully"
Jan 16 09:02:38.281671 systemd[1]: cri-containerd-5a44648b560d489f52168a71db988ea3eed0dc6083a637e053ad72eb5635d3f9.scope: Deactivated successfully.
Jan 16 09:02:38.326836 containerd[1470]: time="2025-01-16T09:02:38.326483331Z" level=info msg="shim disconnected" id=5a44648b560d489f52168a71db988ea3eed0dc6083a637e053ad72eb5635d3f9 namespace=k8s.io
Jan 16 09:02:38.326836 containerd[1470]: time="2025-01-16T09:02:38.326567149Z" level=warning msg="cleaning up after shim disconnected" id=5a44648b560d489f52168a71db988ea3eed0dc6083a637e053ad72eb5635d3f9 namespace=k8s.io
Jan 16 09:02:38.326836 containerd[1470]: time="2025-01-16T09:02:38.326580208Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:38.433565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a44648b560d489f52168a71db988ea3eed0dc6083a637e053ad72eb5635d3f9-rootfs.mount: Deactivated successfully.
Jan 16 09:02:38.466975 kubelet[2576]: I0116 09:02:38.466886 2576 setters.go:568] "Node became not ready" node="ci-4081.3.0-f-6fcf2fe32d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-16T09:02:38Z","lastTransitionTime":"2025-01-16T09:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 16 09:02:38.988510 kubelet[2576]: E0116 09:02:38.987383 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:38.994504 containerd[1470]: time="2025-01-16T09:02:38.994435183Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 16 09:02:39.033426 containerd[1470]: time="2025-01-16T09:02:39.031877879Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1072795df2f548819d342af2ee1c00c2b0a1fd8693a5b724bb1325cbcb9964f7\""
Jan 16 09:02:39.039370 containerd[1470]: time="2025-01-16T09:02:39.035972238Z" level=info msg="StartContainer for \"1072795df2f548819d342af2ee1c00c2b0a1fd8693a5b724bb1325cbcb9964f7\""
Jan 16 09:02:39.040415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835833488.mount: Deactivated successfully.
Jan 16 09:02:39.099804 systemd[1]: Started cri-containerd-1072795df2f548819d342af2ee1c00c2b0a1fd8693a5b724bb1325cbcb9964f7.scope - libcontainer container 1072795df2f548819d342af2ee1c00c2b0a1fd8693a5b724bb1325cbcb9964f7.
Jan 16 09:02:39.145048 containerd[1470]: time="2025-01-16T09:02:39.144955123Z" level=info msg="StartContainer for \"1072795df2f548819d342af2ee1c00c2b0a1fd8693a5b724bb1325cbcb9964f7\" returns successfully"
Jan 16 09:02:39.151630 systemd[1]: cri-containerd-1072795df2f548819d342af2ee1c00c2b0a1fd8693a5b724bb1325cbcb9964f7.scope: Deactivated successfully.
Jan 16 09:02:39.191584 containerd[1470]: time="2025-01-16T09:02:39.191046145Z" level=info msg="shim disconnected" id=1072795df2f548819d342af2ee1c00c2b0a1fd8693a5b724bb1325cbcb9964f7 namespace=k8s.io
Jan 16 09:02:39.191584 containerd[1470]: time="2025-01-16T09:02:39.191149020Z" level=warning msg="cleaning up after shim disconnected" id=1072795df2f548819d342af2ee1c00c2b0a1fd8693a5b724bb1325cbcb9964f7 namespace=k8s.io
Jan 16 09:02:39.191584 containerd[1470]: time="2025-01-16T09:02:39.191162958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:39.432910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1072795df2f548819d342af2ee1c00c2b0a1fd8693a5b724bb1325cbcb9964f7-rootfs.mount: Deactivated successfully.
Jan 16 09:02:39.994565 kubelet[2576]: E0116 09:02:39.994504 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:40.001658 containerd[1470]: time="2025-01-16T09:02:40.000339043Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 16 09:02:40.032014 containerd[1470]: time="2025-01-16T09:02:40.031843513Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda\""
Jan 16 09:02:40.034571 containerd[1470]: time="2025-01-16T09:02:40.034372271Z" level=info msg="StartContainer for \"a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda\""
Jan 16 09:02:40.097676 systemd[1]: Started cri-containerd-a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda.scope - libcontainer container a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda.
Jan 16 09:02:40.142180 systemd[1]: cri-containerd-a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda.scope: Deactivated successfully.
Jan 16 09:02:40.148111 containerd[1470]: time="2025-01-16T09:02:40.147754930Z" level=info msg="StartContainer for \"a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda\" returns successfully"
Jan 16 09:02:40.183325 containerd[1470]: time="2025-01-16T09:02:40.183104378Z" level=info msg="shim disconnected" id=a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda namespace=k8s.io
Jan 16 09:02:40.183325 containerd[1470]: time="2025-01-16T09:02:40.183217882Z" level=warning msg="cleaning up after shim disconnected" id=a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda namespace=k8s.io
Jan 16 09:02:40.183325 containerd[1470]: time="2025-01-16T09:02:40.183230815Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:02:40.433408 systemd[1]: run-containerd-runc-k8s.io-a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda-runc.lg0c5W.mount: Deactivated successfully.
Jan 16 09:02:40.433592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0a01ba10a4ee08aefc7df42601c9fb77ed10d0751f7f9e9e2bdeba28875ceda-rootfs.mount: Deactivated successfully.
Jan 16 09:02:41.003288 kubelet[2576]: E0116 09:02:41.002547 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:41.010858 containerd[1470]: time="2025-01-16T09:02:41.010274488Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 16 09:02:41.038476 containerd[1470]: time="2025-01-16T09:02:41.038163230Z" level=info msg="CreateContainer within sandbox \"e1ba6f9b8de8567bccc86acbb53618d5e67d6c796c4934a5a60c100c1267b62c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"501bc1eef020884441d6482461175abc1101c21564dff1e403b656658e49795b\""
Jan 16 09:02:41.042254 containerd[1470]: time="2025-01-16T09:02:41.040398968Z" level=info msg="StartContainer for \"501bc1eef020884441d6482461175abc1101c21564dff1e403b656658e49795b\""
Jan 16 09:02:41.108155 systemd[1]: Started cri-containerd-501bc1eef020884441d6482461175abc1101c21564dff1e403b656658e49795b.scope - libcontainer container 501bc1eef020884441d6482461175abc1101c21564dff1e403b656658e49795b.
Jan 16 09:02:41.157501 containerd[1470]: time="2025-01-16T09:02:41.155657324Z" level=info msg="StartContainer for \"501bc1eef020884441d6482461175abc1101c21564dff1e403b656658e49795b\" returns successfully"
Jan 16 09:02:41.824268 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 16 09:02:42.014296 kubelet[2576]: E0116 09:02:42.013919 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:42.043748 kubelet[2576]: I0116 09:02:42.042355 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-brrsk" podStartSLOduration=6.042306171 podStartE2EDuration="6.042306171s" podCreationTimestamp="2025-01-16 09:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:02:42.040312021 +0000 UTC m=+229.326546719" watchObservedRunningTime="2025-01-16 09:02:42.042306171 +0000 UTC m=+229.328540978"
Jan 16 09:02:43.039789 kubelet[2576]: E0116 09:02:43.039747 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:45.828174 systemd-networkd[1372]: lxc_health: Link UP
Jan 16 09:02:45.849781 systemd-networkd[1372]: lxc_health: Gained carrier
Jan 16 09:02:45.920235 kubelet[2576]: E0116 09:02:45.919688 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:46.564314 kubelet[2576]: E0116 09:02:46.564258 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:47.061992 kubelet[2576]: E0116 09:02:47.061929 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:47.499165 systemd-networkd[1372]: lxc_health: Gained IPv6LL
Jan 16 09:02:48.057087 kubelet[2576]: E0116 09:02:48.057028 2576 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:02:49.970106 systemd[1]: run-containerd-runc-k8s.io-501bc1eef020884441d6482461175abc1101c21564dff1e403b656658e49795b-runc.Od2hiS.mount: Deactivated successfully.
Jan 16 09:02:52.223633 systemd[1]: run-containerd-runc-k8s.io-501bc1eef020884441d6482461175abc1101c21564dff1e403b656658e49795b-runc.Mt0NBU.mount: Deactivated successfully.
Jan 16 09:02:52.333019 kubelet[2576]: E0116 09:02:52.332917 2576 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52576->127.0.0.1:41451: write tcp 127.0.0.1:52576->127.0.0.1:41451: write: broken pipe
Jan 16 09:02:52.434474 sshd[4420]: pam_unix(sshd:session): session closed for user core
Jan 16 09:02:52.443960 systemd[1]: sshd@31-144.126.217.85:22-139.178.68.195:34960.service: Deactivated successfully.
Jan 16 09:02:52.451471 systemd[1]: session-32.scope: Deactivated successfully.
Jan 16 09:02:52.457212 systemd-logind[1447]: Session 32 logged out. Waiting for processes to exit.
Jan 16 09:02:52.459919 systemd-logind[1447]: Removed session 32.
Jan 16 09:02:52.957743 containerd[1470]: time="2025-01-16T09:02:52.957654661Z" level=info msg="StopPodSandbox for \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\""
Jan 16 09:02:52.958355 containerd[1470]: time="2025-01-16T09:02:52.957833000Z" level=info msg="TearDown network for sandbox \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\" successfully"
Jan 16 09:02:52.958355 containerd[1470]: time="2025-01-16T09:02:52.957857744Z" level=info msg="StopPodSandbox for \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\" returns successfully"
Jan 16 09:02:52.959480 containerd[1470]: time="2025-01-16T09:02:52.959408481Z" level=info msg="RemovePodSandbox for \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\""
Jan 16 09:02:52.964680 containerd[1470]: time="2025-01-16T09:02:52.963735031Z" level=info msg="Forcibly stopping sandbox \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\""
Jan 16 09:02:52.964680 containerd[1470]: time="2025-01-16T09:02:52.964017233Z" level=info msg="TearDown network for sandbox \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\" successfully"
Jan 16 09:02:52.970142 containerd[1470]: time="2025-01-16T09:02:52.970057596Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 16 09:02:52.970427 containerd[1470]: time="2025-01-16T09:02:52.970176752Z" level=info msg="RemovePodSandbox \"e03f0af34cdd019bbe3c3c6dcc38c0480e9c54585b9cf926acb93a4e9886655b\" returns successfully"
Jan 16 09:02:52.972132 containerd[1470]: time="2025-01-16T09:02:52.972072582Z" level=info msg="StopPodSandbox for \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\""
Jan 16 09:02:52.972339 containerd[1470]: time="2025-01-16T09:02:52.972265184Z" level=info msg="TearDown network for sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" successfully"
Jan 16 09:02:52.972339 containerd[1470]: time="2025-01-16T09:02:52.972293270Z" level=info msg="StopPodSandbox for \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" returns successfully"
Jan 16 09:02:52.974706 containerd[1470]: time="2025-01-16T09:02:52.974643510Z" level=info msg="RemovePodSandbox for \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\""
Jan 16 09:02:52.974706 containerd[1470]: time="2025-01-16T09:02:52.974698512Z" level=info msg="Forcibly stopping sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\""
Jan 16 09:02:52.974974 containerd[1470]: time="2025-01-16T09:02:52.974802682Z" level=info msg="TearDown network for sandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" successfully"
Jan 16 09:02:52.979654 containerd[1470]: time="2025-01-16T09:02:52.979583560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 16 09:02:52.979833 containerd[1470]: time="2025-01-16T09:02:52.979686054Z" level=info msg="RemovePodSandbox \"5e21a5399a7900a138f1c1b50c77fe2b59dbe22ffbd4d06b09c916107ebefd0d\" returns successfully"