Jan 30 12:55:21.015986 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 30 12:55:21.016013 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 12:55:21.016027 kernel: BIOS-provided physical RAM map: Jan 30 12:55:21.016035 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 12:55:21.016041 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 12:55:21.016048 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 12:55:21.016057 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 30 12:55:21.016064 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 30 12:55:21.016072 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 12:55:21.016079 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 12:55:21.016090 kernel: NX (Execute Disable) protection: active Jan 30 12:55:21.016097 kernel: APIC: Static calls initialized Jan 30 12:55:21.016107 kernel: SMBIOS 2.8 present. Jan 30 12:55:21.016116 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 30 12:55:21.016125 kernel: Hypervisor detected: KVM Jan 30 12:55:21.016133 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 12:55:21.016147 kernel: kvm-clock: using sched offset of 3965689512 cycles Jan 30 12:55:21.016156 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 12:55:21.016164 kernel: tsc: Detected 2294.606 MHz processor Jan 30 12:55:21.016173 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 12:55:21.016182 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 12:55:21.016190 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 30 12:55:21.016198 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 12:55:21.016207 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 12:55:21.016219 kernel: ACPI: Early table checksum verification disabled Jan 30 12:55:21.016227 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 30 12:55:21.016236 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:55:21.016244 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:55:21.016253 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:55:21.016261 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 30 12:55:21.016270 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:55:21.016278 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:55:21.016286 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:55:21.016298 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:55:21.016306 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 12:55:21.016315 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 30 12:55:21.016323 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 30 12:55:21.016331 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 30 12:55:21.016339 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 30 12:55:21.016348 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 30 12:55:21.016364 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 30 12:55:21.016373 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 12:55:21.016381 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 12:55:21.016390 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 12:55:21.016399 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 30 12:55:21.016410 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 30 12:55:21.016419 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 30 12:55:21.016431 kernel: Zone ranges: Jan 30 12:55:21.016440 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 12:55:21.016449 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 30 12:55:21.016458 kernel: Normal empty Jan 30 12:55:21.016466 kernel: Movable zone start for each node Jan 30 12:55:21.016475 kernel: Early memory node ranges Jan 30 12:55:21.016483 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 12:55:21.016492 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 30 12:55:21.016501 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 30 12:55:21.016509 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 12:55:21.016521 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 12:55:21.016532 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 30 12:55:21.016541 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 12:55:21.016550 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 12:55:21.016558 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 12:55:21.016567 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 12:55:21.016576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 12:55:21.016584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 12:55:21.016593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 12:55:21.016606 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 12:55:21.016614 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 12:55:21.016623 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 12:55:21.016632 kernel: TSC deadline timer available Jan 30 12:55:21.016640 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 12:55:21.016649 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 12:55:21.016658 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 30 12:55:21.016669 kernel: Booting paravirtualized kernel on KVM Jan 30 12:55:21.016678 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 12:55:21.016690 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 12:55:21.016699 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 12:55:21.016708 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 12:55:21.016716 kernel: pcpu-alloc: [0] 0 1 Jan 30 12:55:21.016725 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 12:55:21.016735 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 12:55:21.016744 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 12:55:21.016752 kernel: random: crng init done Jan 30 12:55:21.016763 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 12:55:21.016826 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 12:55:21.016835 kernel: Fallback order for Node 0: 0 Jan 30 12:55:21.016844 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 30 12:55:21.016852 kernel: Policy zone: DMA32 Jan 30 12:55:21.016861 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 12:55:21.016870 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 127196K reserved, 0K cma-reserved) Jan 30 12:55:21.016879 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 12:55:21.016888 kernel: Kernel/User page tables isolation: enabled Jan 30 12:55:21.016900 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 12:55:21.016908 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 12:55:21.016917 kernel: Dynamic Preempt: voluntary Jan 30 12:55:21.016926 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 12:55:21.016936 kernel: rcu: RCU event tracing is enabled. Jan 30 12:55:21.016945 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 12:55:21.016954 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 12:55:21.016962 kernel: Rude variant of Tasks RCU enabled. Jan 30 12:55:21.016971 kernel: Tracing variant of Tasks RCU enabled. Jan 30 12:55:21.016984 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 12:55:21.016993 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 12:55:21.017001 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 12:55:21.017010 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 12:55:21.017022 kernel: Console: colour VGA+ 80x25 Jan 30 12:55:21.017030 kernel: printk: console [tty0] enabled Jan 30 12:55:21.017039 kernel: printk: console [ttyS0] enabled Jan 30 12:55:21.017048 kernel: ACPI: Core revision 20230628 Jan 30 12:55:21.017057 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 12:55:21.017069 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 12:55:21.017078 kernel: x2apic enabled Jan 30 12:55:21.017087 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 12:55:21.017095 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 12:55:21.017104 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns Jan 30 12:55:21.017113 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294606) Jan 30 12:55:21.017122 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 12:55:21.017131 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 12:55:21.017155 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 12:55:21.017164 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 12:55:21.017173 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 12:55:21.017185 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 12:55:21.017199 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 30 12:55:21.017226 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 12:55:21.017241 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 12:55:21.017253 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 12:55:21.017267 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 12:55:21.017293 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 12:55:21.017316 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 12:55:21.017337 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 12:55:21.017357 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 12:55:21.017378 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 12:55:21.017399 kernel: Freeing SMP alternatives memory: 32K Jan 30 12:55:21.017419 kernel: pid_max: default: 32768 minimum: 301 Jan 30 12:55:21.017440 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 12:55:21.017465 kernel: landlock: Up and running. Jan 30 12:55:21.017485 kernel: SELinux: Initializing. Jan 30 12:55:21.017505 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 12:55:21.017526 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 12:55:21.017547 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 30 12:55:21.017567 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 12:55:21.017588 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 12:55:21.017608 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 12:55:21.017629 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 30 12:55:21.017653 kernel: signal: max sigframe size: 1776 Jan 30 12:55:21.017674 kernel: rcu: Hierarchical SRCU implementation. Jan 30 12:55:21.017694 kernel: rcu: Max phase no-delay instances is 400. Jan 30 12:55:21.017715 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 12:55:21.017735 kernel: smp: Bringing up secondary CPUs ... Jan 30 12:55:21.017756 kernel: smpboot: x86: Booting SMP configuration: Jan 30 12:55:21.017791 kernel: .... node #0, CPUs: #1 Jan 30 12:55:21.017825 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 12:55:21.017838 kernel: smpboot: Max logical packages: 1 Jan 30 12:55:21.017851 kernel: smpboot: Total of 2 processors activated (9178.42 BogoMIPS) Jan 30 12:55:21.017861 kernel: devtmpfs: initialized Jan 30 12:55:21.017870 kernel: x86/mm: Memory block size: 128MB Jan 30 12:55:21.017880 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 12:55:21.017890 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 12:55:21.017899 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 12:55:21.017909 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 12:55:21.017918 kernel: audit: initializing netlink subsys (disabled) Jan 30 12:55:21.017928 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 12:55:21.017941 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 12:55:21.017950 kernel: audit: type=2000 audit(1738241718.944:1): state=initialized audit_enabled=0 res=1 Jan 30 12:55:21.017960 kernel: cpuidle: using governor menu Jan 30 12:55:21.017969 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 12:55:21.017978 kernel: dca service started, version 1.12.1 Jan 30 12:55:21.017988 kernel: PCI: Using configuration type 1 for base access Jan 30 12:55:21.017997 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 12:55:21.018007 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 12:55:21.018016 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 12:55:21.018029 kernel: ACPI: Added _OSI(Module Device) Jan 30 12:55:21.018038 kernel: ACPI: Added _OSI(Processor Device) Jan 30 12:55:21.018048 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 12:55:21.018057 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 12:55:21.018067 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 12:55:21.018076 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 12:55:21.018085 kernel: ACPI: Interpreter enabled Jan 30 12:55:21.018095 kernel: ACPI: PM: (supports S0 S5) Jan 30 12:55:21.018104 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 12:55:21.018116 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 12:55:21.018126 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 12:55:21.018135 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 12:55:21.018144 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 12:55:21.018334 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 12:55:21.018449 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 12:55:21.018564 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 12:55:21.018582 kernel: acpiphp: Slot [3] registered Jan 30 12:55:21.018592 kernel: acpiphp: Slot [4] registered Jan 30 12:55:21.018602 kernel: acpiphp: Slot [5] registered Jan 30 12:55:21.018611 kernel: acpiphp: Slot [6] registered Jan 30 12:55:21.018620 kernel: acpiphp: Slot [7] registered Jan 30 12:55:21.018629 kernel: acpiphp: Slot [8] registered Jan 30 12:55:21.018639 kernel: acpiphp: Slot [9] registered Jan 30 12:55:21.018648 kernel: acpiphp: Slot [10] registered Jan 30 12:55:21.018658 kernel: acpiphp: Slot [11] registered Jan 30 12:55:21.018670 kernel: acpiphp: Slot [12] registered Jan 30 12:55:21.018680 kernel: acpiphp: Slot [13] registered Jan 30 12:55:21.018689 kernel: acpiphp: Slot [14] registered Jan 30 12:55:21.018699 kernel: acpiphp: Slot [15] registered Jan 30 12:55:21.018708 kernel: acpiphp: Slot [16] registered Jan 30 12:55:21.020816 kernel: acpiphp: Slot [17] registered Jan 30 12:55:21.020852 kernel: acpiphp: Slot [18] registered Jan 30 12:55:21.020874 kernel: acpiphp: Slot [19] registered Jan 30 12:55:21.020895 kernel: acpiphp: Slot [20] registered Jan 30 12:55:21.020915 kernel: acpiphp: Slot [21] registered Jan 30 12:55:21.020942 kernel: acpiphp: Slot [22] registered Jan 30 12:55:21.020962 kernel: acpiphp: Slot [23] registered Jan 30 12:55:21.020983 kernel: acpiphp: Slot [24] registered Jan 30 12:55:21.021006 kernel: acpiphp: Slot [25] registered Jan 30 12:55:21.021021 kernel: acpiphp: Slot [26] registered Jan 30 12:55:21.021034 kernel: acpiphp: Slot [27] registered Jan 30 12:55:21.021056 kernel: acpiphp: Slot [28] registered Jan 30 12:55:21.021103 kernel: acpiphp: Slot [29] registered Jan 30 12:55:21.021124 kernel: acpiphp: Slot [30] registered Jan 30 12:55:21.021148 kernel: acpiphp: Slot [31] registered Jan 30 12:55:21.021169 kernel: PCI host bridge to bus 0000:00 Jan 30 12:55:21.021380 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 12:55:21.021516 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 30 12:55:21.021653 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 12:55:21.021806 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 30 12:55:21.021905 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 30 12:55:21.021993 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 12:55:21.022123 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 12:55:21.022248 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 12:55:21.022359 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 12:55:21.022457 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 30 12:55:21.022555 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 12:55:21.022651 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 12:55:21.022753 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 12:55:21.023950 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 12:55:21.024083 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 30 12:55:21.024187 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 30 12:55:21.024293 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 12:55:21.024393 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 12:55:21.024497 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 12:55:21.024608 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 12:55:21.024713 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 12:55:21.024909 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 30 12:55:21.025056 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 30 12:55:21.025201 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 30 12:55:21.025345 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 12:55:21.025511 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 12:55:21.025658 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 30 12:55:21.027005 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 30 12:55:21.027160 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 30 12:55:21.027294 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 12:55:21.027396 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 30 12:55:21.027513 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 30 12:55:21.027724 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 30 12:55:21.027914 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 30 12:55:21.028075 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 30 12:55:21.028225 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 30 12:55:21.028370 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 30 12:55:21.028552 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 30 12:55:21.028702 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 12:55:21.030962 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 30 12:55:21.031163 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 30 12:55:21.031326 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 12:55:21.031483 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 30 12:55:21.031630 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 30 12:55:21.031786 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 30 12:55:21.031959 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 12:55:21.032116 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 30 12:55:21.032276 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 30 12:55:21.032302 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 12:55:21.032323 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 12:55:21.032344 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 12:55:21.032365 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 12:55:21.032389 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 12:55:21.032414 kernel: iommu: Default domain type: Translated Jan 30 12:55:21.032435 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 12:55:21.032456 kernel: PCI: Using ACPI for IRQ routing Jan 30 12:55:21.032477 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 12:55:21.032497 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 12:55:21.032518 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 30 12:55:21.032679 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 12:55:21.034888 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 12:55:21.035061 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 12:55:21.035087 kernel: vgaarb: loaded Jan 30 12:55:21.035109 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 12:55:21.035129 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 12:55:21.035150 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 12:55:21.035171 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 12:55:21.035192 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 12:55:21.035213 kernel: pnp: PnP ACPI init Jan 30 12:55:21.035233 kernel: pnp: PnP ACPI: found 4 devices Jan 30 12:55:21.035258 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 12:55:21.035279 kernel: NET: Registered PF_INET protocol family Jan 30 12:55:21.035309 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 12:55:21.035324 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 12:55:21.035339 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 12:55:21.035363 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 12:55:21.035384 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 12:55:21.035404 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 12:55:21.035425 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 12:55:21.035450 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 12:55:21.035471 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 12:55:21.035491 kernel: NET: Registered PF_XDP protocol family Jan 30 12:55:21.035654 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 12:55:21.035813 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 12:55:21.035953 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 12:55:21.036088 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 30 12:55:21.036216 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 30 12:55:21.036377 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 12:55:21.036529 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 12:55:21.036556 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 12:55:21.036707 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 44405 usecs Jan 30 12:55:21.036732 kernel: PCI: CLS 0 bytes, default 64 Jan 30 12:55:21.036754 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 12:55:21.038851 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns Jan 30 12:55:21.038877 kernel: Initialise system trusted keyrings Jan 30 12:55:21.038898 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 12:55:21.038925 kernel: Key type asymmetric registered Jan 30 12:55:21.038946 kernel: Asymmetric key parser 'x509' registered Jan 30 12:55:21.038969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 12:55:21.038990 kernel: io scheduler mq-deadline registered Jan 30 12:55:21.039011 kernel: io scheduler kyber registered Jan 30 12:55:21.039032 kernel: io scheduler bfq registered Jan 30 12:55:21.039052 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 12:55:21.039074 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 12:55:21.039095 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 12:55:21.039120 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 12:55:21.039141 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 12:55:21.039163 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 12:55:21.039187 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 12:55:21.039210 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 12:55:21.039231 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 12:55:21.039255 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 12:55:21.039484 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 12:55:21.039639 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 12:55:21.039791 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T12:55:20 UTC (1738241720) Jan 30 12:55:21.039926 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 12:55:21.039951 kernel: intel_pstate: CPU model not supported Jan 30 12:55:21.039972 kernel: NET: Registered PF_INET6 protocol family Jan 30 12:55:21.039992 kernel: Segment Routing with IPv6 Jan 30 12:55:21.040014 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 12:55:21.040035 kernel: NET: Registered PF_PACKET protocol family Jan 30 12:55:21.040061 kernel: Key type dns_resolver registered Jan 30 12:55:21.040085 kernel: IPI shorthand broadcast: enabled Jan 30 12:55:21.040106 kernel: sched_clock: Marking stable (1377005445, 162399791)->(1581872032, -42466796) Jan 30 12:55:21.040127 kernel: registered taskstats version 1 Jan 30 12:55:21.040148 kernel: Loading compiled-in X.509 certificates Jan 30 12:55:21.040169 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 12:55:21.040189 kernel: Key type .fscrypt registered
Jan 30 12:55:21.040209 kernel: Key type fscrypt-provisioning registered Jan 30 12:55:21.040229 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 12:55:21.040256 kernel: ima: Allocated hash algorithm: sha1 Jan 30 12:55:21.040277 kernel: ima: No architecture policies found Jan 30 12:55:21.040298 kernel: clk: Disabling unused clocks Jan 30 12:55:21.040319 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 12:55:21.040358 kernel: Write protecting the kernel read-only data: 38912k Jan 30 12:55:21.040405 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 12:55:21.040430 kernel: Run /init as init process Jan 30 12:55:21.040452 kernel: with arguments: Jan 30 12:55:21.040482 kernel: /init Jan 30 12:55:21.040514 kernel: with environment: Jan 30 12:55:21.040536 kernel: HOME=/ Jan 30 12:55:21.040557 kernel: TERM=linux Jan 30 12:55:21.040576 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 12:55:21.040595 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:55:21.040625 systemd[1]: Detected virtualization kvm. Jan 30 12:55:21.040648 systemd[1]: Detected architecture x86-64. Jan 30 12:55:21.040673 systemd[1]: Running in initrd. Jan 30 12:55:21.040698 systemd[1]: No hostname configured, using default hostname. Jan 30 12:55:21.040720 systemd[1]: Hostname set to . Jan 30 12:55:21.040744 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:55:21.040766 systemd[1]: Queued start job for default target initrd.target. Jan 30 12:55:21.042826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:55:21.042851 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:55:21.042875 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 12:55:21.042898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 12:55:21.042926 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 12:55:21.042950 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 12:55:21.042976 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 12:55:21.043000 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 12:55:21.043023 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:55:21.043046 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:55:21.043069 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:55:21.043095 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:55:21.043119 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:55:21.043151 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:55:21.043180 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:55:21.043204 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 30 12:55:21.043230 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 12:55:21.043258 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 12:55:21.043276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:55:21.043292 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:55:21.043307 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:55:21.043325 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:55:21.043350 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 12:55:21.043373 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:55:21.043396 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 12:55:21.043423 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 12:55:21.043446 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:55:21.043469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:55:21.043492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:55:21.043554 systemd-journald[184]: Collecting audit messages is disabled. Jan 30 12:55:21.043613 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 12:55:21.043637 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:55:21.043663 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 12:55:21.043688 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:55:21.043715 systemd-journald[184]: Journal started Jan 30 12:55:21.043765 systemd-journald[184]: Runtime Journal (/run/log/journal/f5e8b3eed9f949e9894c582d2f1eb54d) is 4.9M, max 39.3M, 34.4M free. Jan 30 12:55:21.048278 systemd-modules-load[185]: Inserted module 'overlay' Jan 30 12:55:21.094585 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 12:55:21.094630 kernel: Bridge firewalling registered Jan 30 12:55:21.080321 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 30 12:55:21.097124 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:55:21.106618 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:55:21.112862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:55:21.113654 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:55:21.122020 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:55:21.123847 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:55:21.128117 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:55:21.140013 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:55:21.157725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:55:21.158576 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:55:21.160752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 12:55:21.175005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 12:55:21.176332 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:55:21.180384 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 12:55:21.219915 systemd-resolved[219]: Positive Trust Anchors: Jan 30 12:55:21.219931 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:55:21.220029 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:55:21.226377 systemd-resolved[219]: Defaulting to hostname 'linux'. Jan 30 12:55:21.227657 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:55:21.228933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:55:21.230806 dracut-cmdline[221]: dracut-dracut-053 Jan 30 12:55:21.232254 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 12:55:21.332816 kernel: SCSI subsystem initialized Jan 30 12:55:21.346815 kernel: Loading iSCSI transport class v2.0-870. Jan 30 12:55:21.360833 kernel: iscsi: registered transport (tcp) Jan 30 12:55:21.388091 kernel: iscsi: registered transport (qla4xxx) Jan 30 12:55:21.388190 kernel: QLogic iSCSI HBA Driver Jan 30 12:55:21.449209 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 12:55:21.456115 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 12:55:21.504304 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 12:55:21.504387 kernel: device-mapper: uevent: version 1.0.3 Jan 30 12:55:21.504403 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 12:55:21.550826 kernel: raid6: avx2x4 gen() 17525 MB/s Jan 30 12:55:21.568836 kernel: raid6: avx2x2 gen() 16497 MB/s Jan 30 12:55:21.587116 kernel: raid6: avx2x1 gen() 12704 MB/s Jan 30 12:55:21.587226 kernel: raid6: using algorithm avx2x4 gen() 17525 MB/s Jan 30 12:55:21.606164 kernel: raid6: .... xor() 6412 MB/s, rmw enabled Jan 30 12:55:21.606253 kernel: raid6: using avx2x2 recovery algorithm Jan 30 12:55:21.631810 kernel: xor: automatically using best checksumming function avx Jan 30 12:55:21.825869 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 12:55:21.841921 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:55:21.848032 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 30 12:55:21.883222 systemd-udevd[403]: Using default interface naming scheme 'v255'. Jan 30 12:55:21.893807 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:55:21.902812 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 12:55:21.926931 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jan 30 12:55:21.968853 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 12:55:21.976071 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:55:22.060873 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:55:22.067982 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 12:55:22.091073 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 12:55:22.096761 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:55:22.098930 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:55:22.099642 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:55:22.105526 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 12:55:22.137002 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:55:22.178854 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 30 12:55:22.251713 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 12:55:22.251751 kernel: scsi host0: Virtio SCSI HBA Jan 30 12:55:22.251940 kernel: ACPI: bus type USB registered Jan 30 12:55:22.251954 kernel: usbcore: registered new interface driver usbfs Jan 30 12:55:22.251968 kernel: usbcore: registered new interface driver hub Jan 30 12:55:22.251981 kernel: usbcore: registered new device driver usb Jan 30 12:55:22.251993 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 12:55:22.252111 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 12:55:22.252131 kernel: GPT:9289727 != 125829119 Jan 30 12:55:22.252143 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 12:55:22.252156 kernel: GPT:9289727 != 125829119 Jan 30 12:55:22.252167 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 12:55:22.252180 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:55:22.250995 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:55:22.256715 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 30 12:55:22.287074 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 12:55:22.287106 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Jan 30 12:55:22.287960 kernel: libata version 3.00 loaded. Jan 30 12:55:22.287988 kernel: AES CTR mode by8 optimization enabled Jan 30 12:55:22.251145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:55:22.251915 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:55:22.256105 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:55:22.256355 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:55:22.257215 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:55:22.263136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 12:55:22.304671 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 12:55:22.325978 kernel: scsi host1: ata_piix Jan 30 12:55:22.326182 kernel: scsi host2: ata_piix Jan 30 12:55:22.326402 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 30 12:55:22.326430 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 30 12:55:22.375817 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (469) Jan 30 12:55:22.380809 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (451) Jan 30 12:55:22.384818 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 30 12:55:22.395300 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 30 12:55:22.395469 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 30 12:55:22.395656 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 30 12:55:22.396049 kernel: hub 1-0:1.0: USB hub found Jan 30 12:55:22.396369 kernel: hub 1-0:1.0: 2 ports detected Jan 30 12:55:22.391697 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 12:55:22.422179 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 12:55:22.423289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:55:22.433704 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 12:55:22.434844 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 12:55:22.442193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:55:22.451067 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 12:55:22.455010 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:55:22.463165 disk-uuid[535]: Primary Header is updated. Jan 30 12:55:22.463165 disk-uuid[535]: Secondary Entries is updated. Jan 30 12:55:22.463165 disk-uuid[535]: Secondary Header is updated. Jan 30 12:55:22.474200 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:55:22.493510 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:55:23.502931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:55:23.503437 disk-uuid[536]: The operation has completed successfully. Jan 30 12:55:23.593764 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 12:55:23.593909 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 12:55:23.611067 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 12:55:23.615685 sh[561]: Success Jan 30 12:55:23.639850 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 12:55:23.789591 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 12:55:23.813921 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 12:55:23.815764 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 12:55:23.881096 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 12:55:23.881201 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 12:55:23.881234 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 12:55:23.883217 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 12:55:23.884802 kernel: BTRFS info (device dm-0): using free space tree Jan 30 12:55:23.901955 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 12:55:23.903425 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 12:55:23.909068 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 12:55:23.915041 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 12:55:23.942813 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:55:23.942898 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 12:55:23.942937 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:55:23.974868 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:55:24.004418 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 12:55:24.007817 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:55:24.019602 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 12:55:24.026127 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 12:55:24.062288 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:55:24.073857 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:55:24.104704 systemd-networkd[746]: lo: Link UP Jan 30 12:55:24.104715 systemd-networkd[746]: lo: Gained carrier Jan 30 12:55:24.107120 systemd-networkd[746]: Enumeration completed Jan 30 12:55:24.107240 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:55:24.107718 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 12:55:24.107725 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 30 12:55:24.109018 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:55:24.109024 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:55:24.109725 systemd-networkd[746]: eth0: Link UP Jan 30 12:55:24.109731 systemd-networkd[746]: eth0: Gained carrier Jan 30 12:55:24.109742 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 12:55:24.113944 systemd[1]: Reached target network.target - Network. Jan 30 12:55:24.114875 systemd-networkd[746]: eth1: Link UP Jan 30 12:55:24.114881 systemd-networkd[746]: eth1: Gained carrier Jan 30 12:55:24.114897 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 12:55:24.137014 systemd-networkd[746]: eth0: DHCPv4 address 209.38.73.11/20, gateway 209.38.64.1 acquired from 169.254.169.253 Jan 30 12:55:24.145080 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.6/20 acquired from 169.254.169.253 Jan 30 12:55:24.195315 ignition[712]: Ignition 2.20.0 Jan 30 12:55:24.196271 ignition[712]: Stage: fetch-offline Jan 30 12:55:24.196829 ignition[712]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:55:24.196842 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:55:24.196998 ignition[712]: parsed url from cmdline: "" Jan 30 12:55:24.197003 ignition[712]: no config URL provided Jan 30 12:55:24.197011 ignition[712]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 12:55:24.197024 ignition[712]: no config at "/usr/lib/ignition/user.ign" Jan 30 12:55:24.200436 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:55:24.197034 ignition[712]: failed to fetch config: resource requires networking Jan 30 12:55:24.199188 ignition[712]: Ignition finished successfully Jan 30 12:55:24.206063 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 12:55:24.225799 ignition[755]: Ignition 2.20.0 Jan 30 12:55:24.225811 ignition[755]: Stage: fetch Jan 30 12:55:24.226041 ignition[755]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:55:24.226054 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:55:24.226165 ignition[755]: parsed url from cmdline: "" Jan 30 12:55:24.226169 ignition[755]: no config URL provided Jan 30 12:55:24.226174 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 12:55:24.226183 ignition[755]: no config at "/usr/lib/ignition/user.ign" Jan 30 12:55:24.226209 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 30 12:55:24.259734 ignition[755]: GET result: OK Jan 30 12:55:24.260019 ignition[755]: parsing config with SHA512: 74637e8896d71ad7cb217cf2c1d118ae851c5c53f1a06e65e82f4cd8e97da8e395b05c0e35943bd6cb052acc054fc519772cce6874a822841c4b6baf9b39a1bb Jan 30 12:55:24.269368 unknown[755]: fetched base config from "system" Jan 30 12:55:24.270051 unknown[755]: fetched base config from "system" Jan 30 12:55:24.270076 unknown[755]: fetched user config from "digitalocean" Jan 30 12:55:24.270841 ignition[755]: fetch: fetch complete Jan 30 12:55:24.270850 ignition[755]: fetch: fetch passed Jan 30 12:55:24.270965 ignition[755]: Ignition finished successfully Jan 30 12:55:24.272481 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 12:55:24.280042 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 12:55:24.318317 ignition[761]: Ignition 2.20.0 Jan 30 12:55:24.318335 ignition[761]: Stage: kargs Jan 30 12:55:24.318626 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:55:24.318644 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:55:24.322298 ignition[761]: kargs: kargs passed Jan 30 12:55:24.322410 ignition[761]: Ignition finished successfully Jan 30 12:55:24.324346 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 12:55:24.329082 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 30 12:55:24.353712 ignition[768]: Ignition 2.20.0 Jan 30 12:55:24.353729 ignition[768]: Stage: disks Jan 30 12:55:24.353984 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:55:24.353997 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:55:24.356828 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 12:55:24.355457 ignition[768]: disks: disks passed Jan 30 12:55:24.355522 ignition[768]: Ignition finished successfully Jan 30 12:55:24.363964 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 12:55:24.365381 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 12:55:24.366508 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:55:24.368107 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:55:24.369430 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:55:24.377103 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 12:55:24.408174 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 12:55:24.414065 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 12:55:24.420986 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 12:55:24.549880 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 12:55:24.550873 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 12:55:24.552162 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 12:55:24.570024 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:55:24.573303 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 12:55:24.576250 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jan 30 12:55:24.586122 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 12:55:24.589312 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (784) Jan 30 12:55:24.590322 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:55:24.590382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 12:55:24.590410 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:55:24.600570 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 12:55:24.601292 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:55:24.608390 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 12:55:24.615816 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:55:24.621122 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 12:55:24.627453 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 12:55:24.718237 coreos-metadata[786]: Jan 30 12:55:24.718 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 12:55:24.727806 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 12:55:24.735807 coreos-metadata[786]: Jan 30 12:55:24.734 INFO Fetch successful Jan 30 12:55:24.737802 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Jan 30 12:55:24.744274 coreos-metadata[787]: Jan 30 12:55:24.744 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 12:55:24.747209 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 12:55:24.751205 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 30 12:55:24.752825 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 30 12:55:24.758338 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 12:55:24.760620 coreos-metadata[787]: Jan 30 12:55:24.760 INFO Fetch successful Jan 30 12:55:24.770808 coreos-metadata[787]: Jan 30 12:55:24.770 INFO wrote hostname ci-4186.1.0-8-ccc447c07f to /sysroot/etc/hostname Jan 30 12:55:24.772136 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 12:55:24.893307 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 12:55:24.899977 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 12:55:24.902004 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 12:55:24.926349 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 12:55:24.929933 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:55:24.949544 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 12:55:24.964402 ignition[905]: INFO : Ignition 2.20.0 Jan 30 12:55:24.966179 ignition[905]: INFO : Stage: mount Jan 30 12:55:24.966179 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:55:24.966179 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 12:55:24.969888 ignition[905]: INFO : mount: mount passed Jan 30 12:55:24.970603 ignition[905]: INFO : Ignition finished successfully Jan 30 12:55:24.973005 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 12:55:24.979032 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 12:55:25.004169 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:55:25.023721 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (916) Jan 30 12:55:25.023804 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 12:55:25.024857 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 12:55:25.027392 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:55:25.033814 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:55:25.037142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 12:55:25.067837 ignition[933]: INFO : Ignition 2.20.0
Jan 30 12:55:25.067837 ignition[933]: INFO : Stage: files
Jan 30 12:55:25.067837 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:25.067837 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 12:55:25.072036 ignition[933]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 12:55:25.072036 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 12:55:25.072036 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 12:55:25.075722 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 12:55:25.076812 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 12:55:25.076812 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 12:55:25.076332 unknown[933]: wrote ssh authorized keys file for user: core
Jan 30 12:55:25.079894 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 12:55:25.079894 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 30 12:55:25.113479 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 12:55:25.190889 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 12:55:25.190889 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 12:55:25.193740 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 30 12:55:25.621292 systemd-networkd[746]: eth0: Gained IPv6LL
Jan 30 12:55:25.651169 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 12:55:25.731731 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 12:55:25.731731 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 12:55:25.735395 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 30 12:55:26.133378 systemd-networkd[746]: eth1: Gained IPv6LL
Jan 30 12:55:26.152505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 12:55:26.425208 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 12:55:26.425208 ignition[933]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 12:55:26.427862 ignition[933]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 12:55:26.427862 ignition[933]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 12:55:26.427862 ignition[933]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 12:55:26.427862 ignition[933]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 12:55:26.427862 ignition[933]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 12:55:26.433579 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 12:55:26.433579 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 12:55:26.433579 ignition[933]: INFO : files: files passed
Jan 30 12:55:26.433579 ignition[933]: INFO : Ignition finished successfully
Jan 30 12:55:26.430303 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 12:55:26.439151 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 12:55:26.444850 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 12:55:26.448298 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 12:55:26.448470 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 12:55:26.478475 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:55:26.478475 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:55:26.482409 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:55:26.484499 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 12:55:26.486479 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 12:55:26.493175 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 12:55:26.534426 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 12:55:26.534585 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 12:55:26.536106 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 12:55:26.536995 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 12:55:26.538253 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 12:55:26.547061 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 12:55:26.566713 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:55:26.572091 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 12:55:26.588334 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:55:26.589223 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:55:26.590550 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 12:55:26.591484 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 12:55:26.591623 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:55:26.593161 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 12:55:26.594077 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 12:55:26.595236 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 12:55:26.596362 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 12:55:26.597419 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 12:55:26.598582 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 12:55:26.599565 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 12:55:26.600904 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 12:55:26.601863 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 12:55:26.603069 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 12:55:26.604069 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 12:55:26.604297 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 12:55:26.605664 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:55:26.606779 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:55:26.607904 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 12:55:26.608242 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:55:26.609081 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 12:55:26.609237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 12:55:26.610826 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 12:55:26.610951 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 12:55:26.612537 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 12:55:26.612650 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 12:55:26.613538 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 12:55:26.613732 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 12:55:26.622206 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 12:55:26.626021 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 12:55:26.626628 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 12:55:26.626765 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:55:26.631525 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 12:55:26.631672 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 12:55:26.646110 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 12:55:26.646270 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 12:55:26.652579 ignition[987]: INFO : Ignition 2.20.0
Jan 30 12:55:26.652579 ignition[987]: INFO : Stage: umount
Jan 30 12:55:26.652579 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:55:26.652579 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 12:55:26.668752 ignition[987]: INFO : umount: umount passed
Jan 30 12:55:26.668752 ignition[987]: INFO : Ignition finished successfully
Jan 30 12:55:26.663358 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 12:55:26.663521 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 12:55:26.664896 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 12:55:26.665016 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 12:55:26.665575 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 12:55:26.665673 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 12:55:26.666308 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 12:55:26.666362 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 12:55:26.687081 systemd[1]: Stopped target network.target - Network.
Jan 30 12:55:26.693040 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 12:55:26.693142 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 12:55:26.721523 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 12:55:26.751241 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 12:55:26.754887 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:55:26.755610 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 12:55:26.756087 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 12:55:26.756685 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 12:55:26.756761 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 12:55:26.759210 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 12:55:26.759272 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 12:55:26.760652 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 12:55:26.760727 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 12:55:26.761960 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 12:55:26.762039 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 12:55:26.763218 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 12:55:26.764722 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 12:55:26.767592 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 12:55:26.770044 systemd-networkd[746]: eth0: DHCPv6 lease lost
Jan 30 12:55:26.771205 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 12:55:26.771348 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 12:55:26.774004 systemd-networkd[746]: eth1: DHCPv6 lease lost
Jan 30 12:55:26.777439 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 12:55:26.777650 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 12:55:26.779186 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 12:55:26.779318 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 12:55:26.782128 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 12:55:26.782204 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:55:26.783442 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 12:55:26.783517 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 12:55:26.794097 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 12:55:26.796746 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 12:55:26.796863 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 12:55:26.798343 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 12:55:26.798431 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:55:26.799749 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 12:55:26.799949 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:55:26.801081 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 12:55:26.801154 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:55:26.802417 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:55:26.817854 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 12:55:26.827078 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:55:26.828248 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 12:55:26.828321 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:55:26.829149 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 12:55:26.829205 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:55:26.831099 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 12:55:26.831190 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 12:55:26.833254 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 12:55:26.833343 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 12:55:26.834555 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 12:55:26.834631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:55:26.841102 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 12:55:26.841881 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 12:55:26.841978 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:55:26.843129 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 12:55:26.843195 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 12:55:26.845309 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 12:55:26.845382 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:55:26.846091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:55:26.846156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:26.851437 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 12:55:26.851593 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 12:55:26.861698 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 12:55:26.861848 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 12:55:26.863548 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 12:55:26.871054 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 12:55:26.892794 systemd[1]: Switching root.
Jan 30 12:55:26.984361 systemd-journald[184]: Journal stopped
Jan 30 12:55:28.499099 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 30 12:55:28.499223 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 12:55:28.499253 kernel: SELinux: policy capability open_perms=1
Jan 30 12:55:28.499269 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 12:55:28.499291 kernel: SELinux: policy capability always_check_network=0
Jan 30 12:55:28.499308 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 12:55:28.499332 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 12:55:28.499356 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 12:55:28.499375 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 12:55:28.499393 kernel: audit: type=1403 audit(1738241727.169:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 12:55:28.499417 systemd[1]: Successfully loaded SELinux policy in 54.493ms.
Jan 30 12:55:28.499441 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.048ms.
Jan 30 12:55:28.499463 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 12:55:28.499482 systemd[1]: Detected virtualization kvm.
Jan 30 12:55:28.499502 systemd[1]: Detected architecture x86-64.
Jan 30 12:55:28.499520 systemd[1]: Detected first boot.
Jan 30 12:55:28.499540 systemd[1]: Hostname set to <ci-4186.1.0-8-ccc447c07f>.
Jan 30 12:55:28.499559 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 12:55:28.499585 zram_generator::config[1030]: No configuration found.
Jan 30 12:55:28.499636 systemd[1]: Populated /etc with preset unit settings.
Jan 30 12:55:28.499655 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 12:55:28.499681 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 12:55:28.499700 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 12:55:28.499721 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 12:55:28.499742 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 12:55:28.499760 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 12:55:28.499797 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 12:55:28.499821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 12:55:28.499841 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 12:55:28.499863 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 12:55:28.499885 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 12:55:28.499903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:55:28.499925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:55:28.499944 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 12:55:28.499963 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 12:55:28.499983 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 12:55:28.500006 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 12:55:28.500025 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 12:55:28.500044 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:55:28.500063 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 12:55:28.500094 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 12:55:28.500113 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 12:55:28.500138 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 12:55:28.500157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:55:28.500177 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 12:55:28.500195 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 12:55:28.500216 systemd[1]: Reached target swap.target - Swaps.
Jan 30 12:55:28.500236 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 12:55:28.500254 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 12:55:28.500273 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:55:28.500293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:55:28.500317 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:55:28.500336 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 12:55:28.500355 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 12:55:28.500374 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 12:55:28.500393 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 12:55:28.500412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:28.500433 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 12:55:28.500453 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 12:55:28.500471 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 12:55:28.500500 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 12:55:28.500532 systemd[1]: Reached target machines.target - Containers.
Jan 30 12:55:28.500550 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 12:55:28.500571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:55:28.500590 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 12:55:28.500611 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 12:55:28.500632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:55:28.500653 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 12:55:28.500679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:55:28.500703 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 12:55:28.500725 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:55:28.500745 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 12:55:28.500765 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 12:55:28.500865 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 12:55:28.500888 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 12:55:28.500920 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 12:55:28.500941 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 12:55:28.500968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 12:55:28.500990 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 12:55:28.501011 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 12:55:28.501030 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 12:55:28.501052 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 12:55:28.501072 systemd[1]: Stopped verity-setup.service.
Jan 30 12:55:28.501091 kernel: fuse: init (API version 7.39)
Jan 30 12:55:28.501115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:28.501188 systemd-journald[1110]: Collecting audit messages is disabled.
Jan 30 12:55:28.501236 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 12:55:28.501256 kernel: ACPI: bus type drm_connector registered
Jan 30 12:55:28.501276 systemd-journald[1110]: Journal started
Jan 30 12:55:28.501314 systemd-journald[1110]: Runtime Journal (/run/log/journal/f5e8b3eed9f949e9894c582d2f1eb54d) is 4.9M, max 39.3M, 34.4M free.
Jan 30 12:55:28.505958 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 12:55:28.110363 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 12:55:28.133659 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 12:55:28.134362 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 12:55:28.511214 kernel: loop: module loaded
Jan 30 12:55:28.511305 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 12:55:28.515538 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 12:55:28.517398 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 12:55:28.520115 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 12:55:28.521454 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 12:55:28.522995 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:55:28.525200 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 12:55:28.525358 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 12:55:28.526480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:55:28.526718 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:55:28.530100 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 12:55:28.530311 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 12:55:28.532393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:55:28.532615 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:55:28.533487 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 12:55:28.533675 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 12:55:28.535182 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:55:28.535539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:55:28.537483 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:55:28.540309 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 12:55:28.541334 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 12:55:28.563634 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 12:55:28.564993 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 12:55:28.572940 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 12:55:28.583900 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 12:55:28.584652 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 12:55:28.584704 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 12:55:28.590197 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 12:55:28.604053 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 12:55:28.614080 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 12:55:28.615048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:55:28.620094 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 12:55:28.623006 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 12:55:28.623849 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:55:28.632060 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 12:55:28.633932 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:55:28.645244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 12:55:28.656588 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 12:55:28.661594 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 12:55:28.675536 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 12:55:28.677681 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 12:55:28.679998 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 12:55:28.681700 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 12:55:28.688068 systemd-journald[1110]: Time spent on flushing to /var/log/journal/f5e8b3eed9f949e9894c582d2f1eb54d is 110.134ms for 992 entries.
Jan 30 12:55:28.688068 systemd-journald[1110]: System Journal (/var/log/journal/f5e8b3eed9f949e9894c582d2f1eb54d) is 8.0M, max 195.6M, 187.6M free.
Jan 30 12:55:28.844070 systemd-journald[1110]: Received client request to flush runtime journal.
Jan 30 12:55:28.844144 kernel: loop0: detected capacity change from 0 to 141000
Jan 30 12:55:28.844174 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 12:55:28.844202 kernel: loop1: detected capacity change from 0 to 218376
Jan 30 12:55:28.697676 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 12:55:28.710100 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 12:55:28.804257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:55:28.810519 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 12:55:28.811349 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 12:55:28.841483 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Jan 30 12:55:28.841556 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Jan 30 12:55:28.849636 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:55:28.851529 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 12:55:28.861043 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 12:55:28.862022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 12:55:28.872025 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 12:55:28.902494 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 12:55:28.906834 kernel: loop2: detected capacity change from 0 to 138184
Jan 30 12:55:28.957289 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 12:55:28.967148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 12:55:28.974288 kernel: loop3: detected capacity change from 0 to 8
Jan 30 12:55:29.010452 kernel: loop4: detected capacity change from 0 to 141000
Jan 30 12:55:29.043817 kernel: loop5: detected capacity change from 0 to 218376
Jan 30 12:55:29.067895 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 30 12:55:29.067979 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Jan 30 12:55:29.076946 kernel: loop6: detected capacity change from 0 to 138184
Jan 30 12:55:29.081366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:55:29.114895 kernel: loop7: detected capacity change from 0 to 8
Jan 30 12:55:29.120062 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 30 12:55:29.122968 (sd-merge)[1177]: Merged extensions into '/usr'.
Jan 30 12:55:29.137093 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 12:55:29.137122 systemd[1]: Reloading...
Jan 30 12:55:29.308862 zram_generator::config[1204]: No configuration found.
Jan 30 12:55:29.650005 ldconfig[1145]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 12:55:29.663030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:55:29.752731 systemd[1]: Reloading finished in 614 ms.
Jan 30 12:55:29.784223 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 12:55:29.793565 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 12:55:29.805547 systemd[1]: Starting ensure-sysext.service...
Jan 30 12:55:29.807954 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 12:55:29.823904 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Jan 30 12:55:29.823924 systemd[1]: Reloading...
Jan 30 12:55:29.873012 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 12:55:29.873544 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 12:55:29.878266 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 12:55:29.881847 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 30 12:55:29.881952 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 30 12:55:29.888596 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 12:55:29.888612 systemd-tmpfiles[1249]: Skipping /boot
Jan 30 12:55:29.908194 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 12:55:29.908209 systemd-tmpfiles[1249]: Skipping /boot
Jan 30 12:55:29.951817 zram_generator::config[1279]: No configuration found.
Jan 30 12:55:30.189186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:55:30.285590 systemd[1]: Reloading finished in 461 ms.
Jan 30 12:55:30.311248 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 12:55:30.316538 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:55:30.331070 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 12:55:30.334019 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 12:55:30.337006 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 12:55:30.343038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 12:55:30.346928 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:55:30.350488 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 12:55:30.363485 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:30.364986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:55:30.372243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:55:30.378589 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:55:30.383171 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:55:30.384998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:55:30.385263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:30.389958 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:30.390160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:55:30.390371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:55:30.390575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:30.399323 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:30.400710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:55:30.411205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 12:55:30.412496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:55:30.412915 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:30.418322 systemd[1]: Finished ensure-sysext.service.
Jan 30 12:55:30.448577 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jan 30 12:55:30.458268 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 12:55:30.490453 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 12:55:30.492830 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 12:55:30.494178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:55:30.494363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:55:30.496390 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:55:30.497618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:55:30.506180 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:55:30.508846 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 12:55:30.511103 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:55:30.512036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:55:30.517957 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 12:55:30.518970 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 12:55:30.519142 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 12:55:30.549250 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 12:55:30.549852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:55:30.549940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:55:30.559515 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 12:55:30.561762 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 12:55:30.598028 augenrules[1378]: No rules
Jan 30 12:55:30.600346 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 12:55:30.603877 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 12:55:30.618842 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 12:55:30.649364 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 12:55:30.690667 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 12:55:30.703834 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1352)
Jan 30 12:55:30.760813 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 30 12:55:30.766834 kernel: ACPI: button: Power Button [PWRF]
Jan 30 12:55:30.811938 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 30 12:55:30.813858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:30.814160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:55:30.826044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:55:30.831082 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:55:30.836405 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:55:30.837119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:55:30.837186 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 12:55:30.837209 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 12:55:30.856024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:55:30.856957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:55:30.872907 systemd-networkd[1370]: lo: Link UP
Jan 30 12:55:30.873272 systemd-networkd[1370]: lo: Gained carrier
Jan 30 12:55:30.876077 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 12:55:30.876848 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 12:55:30.878759 systemd-timesyncd[1338]: No network connectivity, watching for changes.
Jan 30 12:55:30.880169 systemd-networkd[1370]: Enumeration completed
Jan 30 12:55:30.880436 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 12:55:30.881029 systemd-networkd[1370]: eth0: Configuring with /run/systemd/network/10-7e:0e:ec:56:46:c3.network.
Jan 30 12:55:30.881903 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 30 12:55:30.882276 systemd-networkd[1370]: eth1: Configuring with /run/systemd/network/10-d2:a9:77:16:e6:6a.network.
Jan 30 12:55:30.883080 systemd-networkd[1370]: eth0: Link UP
Jan 30 12:55:30.883227 systemd-networkd[1370]: eth0: Gained carrier
Jan 30 12:55:30.886533 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 30 12:55:30.889217 systemd-networkd[1370]: eth1: Link UP
Jan 30 12:55:30.889229 systemd-networkd[1370]: eth1: Gained carrier
Jan 30 12:55:30.898089 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection.
Jan 30 12:55:30.898092 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 12:55:30.914030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:55:30.914238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:55:30.915725 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:55:30.917892 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:55:30.918072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:55:30.918923 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:55:30.936841 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 30 12:55:30.946842 systemd-resolved[1324]: Positive Trust Anchors:
Jan 30 12:55:30.947854 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 12:55:30.947967 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 12:55:30.959237 systemd-resolved[1324]: Using system hostname 'ci-4186.1.0-8-ccc447c07f'.
Jan 30 12:55:30.964477 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 12:55:30.966008 systemd[1]: Reached target network.target - Network.
Jan 30 12:55:30.966578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:55:31.003857 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 30 12:55:31.054203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:31.063816 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 30 12:55:31.077807 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 30 12:55:31.078194 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 12:55:31.101952 kernel: Console: switching to colour dummy device 80x25
Jan 30 12:55:31.104354 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 12:55:31.104459 kernel: [drm] features: -context_init
Jan 30 12:55:31.114846 kernel: [drm] number of scanouts: 1
Jan 30 12:55:31.118305 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:55:31.118615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:31.122840 kernel: [drm] number of cap sets: 0
Jan 30 12:55:31.125876 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 30 12:55:31.139812 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 30 12:55:31.143203 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 12:55:31.140285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:31.152822 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 12:55:31.216459 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 12:55:31.238239 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 12:55:31.241396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:55:31.241670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:31.251468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:55:31.259087 systemd-timesyncd[1338]: Contacted time server 205.233.73.201:123 (1.flatcar.pool.ntp.org).
Jan 30 12:55:31.259176 systemd-timesyncd[1338]: Initial clock synchronization to Thu 2025-01-30 12:55:31.149823 UTC.
Jan 30 12:55:31.283806 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 12:55:31.317809 kernel: EDAC MC: Ver: 3.0.0
Jan 30 12:55:31.337188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:55:31.349688 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 12:55:31.356076 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 12:55:31.376039 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 12:55:31.408649 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 12:55:31.411117 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:55:31.412108 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 12:55:31.412410 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 12:55:31.412578 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 12:55:31.413296 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 12:55:31.414377 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 12:55:31.414479 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 12:55:31.414560 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 12:55:31.414590 systemd[1]: Reached target paths.target - Path Units.
Jan 30 12:55:31.414647 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 12:55:31.416421 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 12:55:31.419440 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 12:55:31.433129 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 12:55:31.437926 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 12:55:31.442631 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 12:55:31.444167 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 12:55:31.445657 systemd[1]: Reached target basic.target - Basic System.
Jan 30 12:55:31.446455 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 12:55:31.446498 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 12:55:31.451978 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 12:55:31.458053 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 12:55:31.469201 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 12:55:31.475798 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 12:55:31.483701 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 12:55:31.492918 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 12:55:31.493794 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 12:55:31.504082 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 12:55:31.510610 jq[1446]: false
Jan 30 12:55:31.516947 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 12:55:31.524079 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 12:55:31.532029 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 12:55:31.548006 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 12:55:31.551348 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 12:55:31.552252 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 12:55:31.555449 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 12:55:31.558936 extend-filesystems[1449]: Found loop4
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found loop5
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found loop6
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found loop7
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found vda
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found vda1
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found vda2
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found vda3
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found usr
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found vda4
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found vda6
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found vda7
Jan 30 12:55:31.566189 extend-filesystems[1449]: Found vda9
Jan 30 12:55:31.566189 extend-filesystems[1449]: Checking size of /dev/vda9
Jan 30 12:55:31.569529 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 12:55:31.587158 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 12:55:31.600559 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 12:55:31.600875 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 12:55:31.635790 jq[1458]: true
Jan 30 12:55:31.645213 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 12:55:31.645912 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 12:55:31.664298 extend-filesystems[1449]: Resized partition /dev/vda9
Jan 30 12:55:31.676174 dbus-daemon[1445]: [system] SELinux support is enabled
Jan 30 12:55:31.676450 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 12:55:31.686888 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 12:55:31.686934 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 12:55:31.690578 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 12:55:31.690745 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 12:55:31.690808 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 12:55:31.706507 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Jan 30 12:55:31.713268 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 12:55:31.713342 update_engine[1456]: I20250130 12:55:31.696272 1456 main.cc:92] Flatcar Update Engine starting Jan 30 12:55:31.713847 coreos-metadata[1444]: Jan 30 12:55:31.699 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 12:55:31.701620 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 12:55:31.730027 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1369) Jan 30 12:55:31.730135 coreos-metadata[1444]: Jan 30 12:55:31.722 INFO Fetch successful Jan 30 12:55:31.733049 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 12:55:31.733363 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 12:55:31.736627 systemd[1]: Started update-engine.service - Update Engine. Jan 30 12:55:31.743987 update_engine[1456]: I20250130 12:55:31.741259 1456 update_check_scheduler.cc:74] Next update check in 11m2s Jan 30 12:55:31.746064 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 12:55:31.751399 tar[1463]: linux-amd64/LICENSE Jan 30 12:55:31.754843 tar[1463]: linux-amd64/helm Jan 30 12:55:31.772852 jq[1477]: true Jan 30 12:55:31.910094 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 12:55:31.912141 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 12:55:31.939816 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 12:55:31.995811 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 12:55:31.995811 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 12:55:31.995811 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 12:55:32.024989 extend-filesystems[1449]: Resized filesystem in /dev/vda9 Jan 30 12:55:32.024989 extend-filesystems[1449]: Found vdb Jan 30 12:55:31.999334 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 12:55:32.004911 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 12:55:32.012621 systemd-logind[1455]: New seat seat0. Jan 30 12:55:32.014799 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 12:55:32.014828 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 12:55:32.018920 systemd[1]: Started systemd-logind.service - User Login Management. 
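The extend-filesystems/resize2fs entries above grow the root filesystem online from 553472 to 15121403 blocks. A quick sanity check of what those block counts mean in bytes (ext4, 4 KiB blocks, figures taken from the log):

    # Sanity check of the resize2fs figures logged above (ext4, 4 KiB blocks).
    BLOCK = 4096
    before = 553_472                     # blocks before the online resize
    after = 15_121_403                   # blocks after
    print(f"before: {before * BLOCK / 2**30:.2f} GiB")   # ~2.11 GiB
    print(f"after:  {after * BLOCK / 2**30:.2f} GiB")    # ~57.68 GiB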
Jan 30 12:55:32.088076 bash[1511]: Updated "/home/core/.ssh/authorized_keys" Jan 30 12:55:32.084375 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 12:55:32.097835 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 12:55:32.100216 systemd[1]: Starting sshkeys.service... Jan 30 12:55:32.149023 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 12:55:32.154626 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 12:55:32.164416 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 12:55:32.182257 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 12:55:32.201410 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 12:55:32.255620 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 12:55:32.256379 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 12:55:32.266063 coreos-metadata[1528]: Jan 30 12:55:32.266 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 12:55:32.272110 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 12:55:32.276370 containerd[1481]: time="2025-01-30T12:55:32.275338035Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 12:55:32.298859 coreos-metadata[1528]: Jan 30 12:55:32.298 INFO Fetch successful Jan 30 12:55:32.319564 containerd[1481]: time="2025-01-30T12:55:32.319435682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:32.325062 containerd[1481]: time="2025-01-30T12:55:32.324993018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:32.325062 containerd[1481]: time="2025-01-30T12:55:32.325041202Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 12:55:32.325062 containerd[1481]: time="2025-01-30T12:55:32.325061080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 12:55:32.325614 containerd[1481]: time="2025-01-30T12:55:32.325584131Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 12:55:32.325614 containerd[1481]: time="2025-01-30T12:55:32.325613121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:32.325716 containerd[1481]: time="2025-01-30T12:55:32.325673933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:32.325716 containerd[1481]: time="2025-01-30T12:55:32.325688405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:32.325919 containerd[1481]: time="2025-01-30T12:55:32.325900559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:32.325974 containerd[1481]: time="2025-01-30T12:55:32.325920221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:32.325974 containerd[1481]: time="2025-01-30T12:55:32.325933705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:32.325974 containerd[1481]: time="2025-01-30T12:55:32.325942571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:32.326126 containerd[1481]: time="2025-01-30T12:55:32.326013587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:32.326240 containerd[1481]: time="2025-01-30T12:55:32.326215423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:55:32.326354 containerd[1481]: time="2025-01-30T12:55:32.326326749Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:55:32.326354 containerd[1481]: time="2025-01-30T12:55:32.326343115Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 12:55:32.326476 containerd[1481]: time="2025-01-30T12:55:32.326435640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 12:55:32.326575 containerd[1481]: time="2025-01-30T12:55:32.326491816Z" level=info msg="metadata content store policy set" policy=shared Jan 30 12:55:32.328431 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 12:55:32.330254 unknown[1528]: wrote ssh authorized keys file for user: core Jan 30 12:55:32.344011 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 12:55:32.355562 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 12:55:32.357503 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 12:55:32.364171 containerd[1481]: time="2025-01-30T12:55:32.364046558Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 12:55:32.364300 containerd[1481]: time="2025-01-30T12:55:32.364207152Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 12:55:32.364300 containerd[1481]: time="2025-01-30T12:55:32.364258708Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 12:55:32.364300 containerd[1481]: time="2025-01-30T12:55:32.364287463Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 12:55:32.364471 containerd[1481]: time="2025-01-30T12:55:32.364327125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.364591117Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.364931172Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365060894Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365081877Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365100818Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365120582Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365138009Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365154054Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365172155Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365191978Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365209711Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365226975Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365247855Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 12:55:32.366131 containerd[1481]: time="2025-01-30T12:55:32.365277391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365310352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365333412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365353678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365370450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365390586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365407241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365424410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365441681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365463679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365495518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365517172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365534063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365555169Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365584089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.366700 containerd[1481]: time="2025-01-30T12:55:32.365622328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.368323 containerd[1481]: time="2025-01-30T12:55:32.365644092Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 12:55:32.368323 containerd[1481]: time="2025-01-30T12:55:32.365703630Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 12:55:32.368323 containerd[1481]: time="2025-01-30T12:55:32.365729204Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 12:55:32.368323 containerd[1481]: time="2025-01-30T12:55:32.365746872Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 12:55:32.368323 containerd[1481]: time="2025-01-30T12:55:32.367256482Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 12:55:32.368520 containerd[1481]: time="2025-01-30T12:55:32.368433753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 12:55:32.368520 containerd[1481]: time="2025-01-30T12:55:32.368470889Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 12:55:32.368520 containerd[1481]: time="2025-01-30T12:55:32.368483194Z" level=info msg="NRI interface is disabled by configuration." Jan 30 12:55:32.368520 containerd[1481]: time="2025-01-30T12:55:32.368509495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 12:55:32.373186 containerd[1481]: time="2025-01-30T12:55:32.371103737Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 12:55:32.373186 containerd[1481]: time="2025-01-30T12:55:32.371285206Z" level=info msg="Connect containerd service" Jan 30 12:55:32.373186 containerd[1481]: time="2025-01-30T12:55:32.372140823Z" level=info msg="using legacy CRI server" Jan 30 12:55:32.373186 containerd[1481]: time="2025-01-30T12:55:32.372174785Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 12:55:32.373186 containerd[1481]: time="2025-01-30T12:55:32.372414114Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 12:55:32.374835 containerd[1481]: time="2025-01-30T12:55:32.374144281Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:55:32.376359 
containerd[1481]: time="2025-01-30T12:55:32.375748317Z" level=info msg="Start subscribing containerd event" Jan 30 12:55:32.376359 containerd[1481]: time="2025-01-30T12:55:32.375849642Z" level=info msg="Start recovering state" Jan 30 12:55:32.376359 containerd[1481]: time="2025-01-30T12:55:32.375958897Z" level=info msg="Start event monitor" Jan 30 12:55:32.376359 containerd[1481]: time="2025-01-30T12:55:32.375976137Z" level=info msg="Start snapshots syncer" Jan 30 12:55:32.376359 containerd[1481]: time="2025-01-30T12:55:32.376000060Z" level=info msg="Start cni network conf syncer for default" Jan 30 12:55:32.376359 containerd[1481]: time="2025-01-30T12:55:32.376011012Z" level=info msg="Start streaming server" Jan 30 12:55:32.377457 containerd[1481]: time="2025-01-30T12:55:32.377428128Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 12:55:32.377539 containerd[1481]: time="2025-01-30T12:55:32.377490217Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 12:55:32.377652 containerd[1481]: time="2025-01-30T12:55:32.377625571Z" level=info msg="containerd successfully booted in 0.104397s" Jan 30 12:55:32.380926 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 12:55:32.421542 update-ssh-keys[1545]: Updated "/home/core/.ssh/authorized_keys" Jan 30 12:55:32.422406 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 12:55:32.429427 systemd[1]: Finished sshkeys.service. Jan 30 12:55:32.597074 systemd-networkd[1370]: eth0: Gained IPv6LL Jan 30 12:55:32.602575 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 12:55:32.607441 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 12:55:32.622073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:32.627618 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 12:55:32.699502 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 12:55:32.789609 systemd-networkd[1370]: eth1: Gained IPv6LL Jan 30 12:55:32.833972 tar[1463]: linux-amd64/README.md Jan 30 12:55:32.856429 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 12:55:34.079386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:34.083851 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 12:55:34.087895 systemd[1]: Startup finished in 1.543s (kernel) + 6.417s (initrd) + 6.971s (userspace) = 14.932s. Jan 30 12:55:34.096794 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:55:34.120288 agetty[1544]: failed to open credentials directory Jan 30 12:55:34.124044 agetty[1542]: failed to open credentials directory Jan 30 12:55:35.006700 kubelet[1568]: E0130 12:55:35.006559 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:55:35.010206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:55:35.010424 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 12:55:35.010853 systemd[1]: kubelet.service: Consumed 1.434s CPU time. Jan 30 12:55:41.037991 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 12:55:41.049301 systemd[1]: Started sshd@0-209.38.73.11:22-139.178.68.195:36998.service - OpenSSH per-connection server daemon (139.178.68.195:36998). Jan 30 12:55:41.150586 sshd[1580]: Accepted publickey for core from 139.178.68.195 port 36998 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:55:41.153171 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:41.167982 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 12:55:41.175328 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 12:55:41.177968 systemd-logind[1455]: New session 1 of user core. Jan 30 12:55:41.194572 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 12:55:41.203352 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 12:55:41.221531 (systemd)[1584]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 12:55:41.408561 systemd[1584]: Queued start job for default target default.target. Jan 30 12:55:41.416964 systemd[1584]: Created slice app.slice - User Application Slice. Jan 30 12:55:41.417011 systemd[1584]: Reached target paths.target - Paths. Jan 30 12:55:41.417039 systemd[1584]: Reached target timers.target - Timers. Jan 30 12:55:41.419036 systemd[1584]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 12:55:41.463566 systemd[1584]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 12:55:41.463795 systemd[1584]: Reached target sockets.target - Sockets. Jan 30 12:55:41.463824 systemd[1584]: Reached target basic.target - Basic System. Jan 30 12:55:41.463898 systemd[1584]: Reached target default.target - Main User Target. Jan 30 12:55:41.463977 systemd[1584]: Startup finished in 231ms. Jan 30 12:55:41.464187 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 12:55:41.474187 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 12:55:41.546372 systemd[1]: Started sshd@1-209.38.73.11:22-139.178.68.195:37008.service - OpenSSH per-connection server daemon (139.178.68.195:37008). Jan 30 12:55:41.610466 sshd[1595]: Accepted publickey for core from 139.178.68.195 port 37008 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:55:41.612282 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:41.621113 systemd-logind[1455]: New session 2 of user core. Jan 30 12:55:41.628154 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 12:55:41.694915 sshd[1597]: Connection closed by 139.178.68.195 port 37008 Jan 30 12:55:41.695580 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:41.704328 systemd[1]: sshd@1-209.38.73.11:22-139.178.68.195:37008.service: Deactivated successfully. Jan 30 12:55:41.706700 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 12:55:41.710054 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Jan 30 12:55:41.715298 systemd[1]: Started sshd@2-209.38.73.11:22-139.178.68.195:37018.service - OpenSSH per-connection server daemon (139.178.68.195:37018). Jan 30 12:55:41.717698 systemd-logind[1455]: Removed session 2. 
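The kubelet failure logged just before the SSH activity above has a simple cause: the unit starts before /var/lib/kubelet/config.yaml has been written, and kubelet aborts as soon as it fails to read the file named by its --config flag. A trivial sketch of that failing pre-flight check (path taken from the logged error; an illustration, not kubelet's source):

    # Illustration of the failing startup check seen in the kubelet logs above.
    from pathlib import Path
    import sys

    cfg = Path("/var/lib/kubelet/config.yaml")   # path from the logged error
    if not cfg.is_file():
        sys.exit(f"failed to load kubelet config file, path: {cfg}, "
                 "error: no such file or directory")
    print("config present; kubelet would proceed to parse it")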
Jan 30 12:55:41.770962 sshd[1602]: Accepted publickey for core from 139.178.68.195 port 37018 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:55:41.773602 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:41.781394 systemd-logind[1455]: New session 3 of user core. Jan 30 12:55:41.782983 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 12:55:41.841430 sshd[1604]: Connection closed by 139.178.68.195 port 37018 Jan 30 12:55:41.842192 sshd-session[1602]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:41.858848 systemd[1]: sshd@2-209.38.73.11:22-139.178.68.195:37018.service: Deactivated successfully. Jan 30 12:55:41.861627 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 12:55:41.864160 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Jan 30 12:55:41.870379 systemd[1]: Started sshd@3-209.38.73.11:22-139.178.68.195:37026.service - OpenSSH per-connection server daemon (139.178.68.195:37026). Jan 30 12:55:41.872470 systemd-logind[1455]: Removed session 3. Jan 30 12:55:41.935077 sshd[1609]: Accepted publickey for core from 139.178.68.195 port 37026 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:55:41.937021 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:41.942976 systemd-logind[1455]: New session 4 of user core. Jan 30 12:55:41.951090 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 12:55:42.016352 sshd[1611]: Connection closed by 139.178.68.195 port 37026 Jan 30 12:55:42.017091 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:42.030825 systemd[1]: sshd@3-209.38.73.11:22-139.178.68.195:37026.service: Deactivated successfully. Jan 30 12:55:42.033982 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 12:55:42.036714 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Jan 30 12:55:42.048412 systemd[1]: Started sshd@4-209.38.73.11:22-139.178.68.195:37036.service - OpenSSH per-connection server daemon (139.178.68.195:37036). Jan 30 12:55:42.052543 systemd-logind[1455]: Removed session 4. Jan 30 12:55:42.102194 sshd[1616]: Accepted publickey for core from 139.178.68.195 port 37036 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:55:42.104645 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:42.113356 systemd-logind[1455]: New session 5 of user core. Jan 30 12:55:42.129091 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 12:55:42.208991 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 12:55:42.209454 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:42.224378 sudo[1619]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:42.229811 sshd[1618]: Connection closed by 139.178.68.195 port 37036 Jan 30 12:55:42.228407 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:42.242391 systemd[1]: sshd@4-209.38.73.11:22-139.178.68.195:37036.service: Deactivated successfully. Jan 30 12:55:42.244906 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 12:55:42.248054 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. 
Jan 30 12:55:42.253200 systemd[1]: Started sshd@5-209.38.73.11:22-139.178.68.195:37042.service - OpenSSH per-connection server daemon (139.178.68.195:37042). Jan 30 12:55:42.255606 systemd-logind[1455]: Removed session 5. Jan 30 12:55:42.320589 sshd[1624]: Accepted publickey for core from 139.178.68.195 port 37042 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:55:42.323217 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:42.331912 systemd-logind[1455]: New session 6 of user core. Jan 30 12:55:42.343136 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 12:55:42.410792 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 12:55:42.411331 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:42.418237 sudo[1628]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:42.428560 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 12:55:42.429131 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:42.449386 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 12:55:42.506515 augenrules[1650]: No rules Jan 30 12:55:42.508496 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 12:55:42.508798 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 12:55:42.511121 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:42.514945 sshd[1626]: Connection closed by 139.178.68.195 port 37042 Jan 30 12:55:42.515921 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:42.525000 systemd[1]: sshd@5-209.38.73.11:22-139.178.68.195:37042.service: Deactivated successfully. Jan 30 12:55:42.528153 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 12:55:42.531152 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Jan 30 12:55:42.535202 systemd[1]: Started sshd@6-209.38.73.11:22-139.178.68.195:37046.service - OpenSSH per-connection server daemon (139.178.68.195:37046). Jan 30 12:55:42.537418 systemd-logind[1455]: Removed session 6. Jan 30 12:55:42.598145 sshd[1658]: Accepted publickey for core from 139.178.68.195 port 37046 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:55:42.600063 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:42.606033 systemd-logind[1455]: New session 7 of user core. Jan 30 12:55:42.615140 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 12:55:42.679667 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 12:55:42.680261 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:55:43.243244 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 12:55:43.256483 (dockerd)[1679]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 12:55:43.765829 dockerd[1679]: time="2025-01-30T12:55:43.764326921Z" level=info msg="Starting up" Jan 30 12:55:43.903847 systemd[1]: var-lib-docker-metacopy\x2dcheck3242768137-merged.mount: Deactivated successfully. 
Jan 30 12:55:43.940123 dockerd[1679]: time="2025-01-30T12:55:43.940031049Z" level=info msg="Loading containers: start." Jan 30 12:55:44.173866 kernel: Initializing XFRM netlink socket Jan 30 12:55:44.307568 systemd-networkd[1370]: docker0: Link UP Jan 30 12:55:44.351839 dockerd[1679]: time="2025-01-30T12:55:44.351706595Z" level=info msg="Loading containers: done." Jan 30 12:55:44.377654 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1986263323-merged.mount: Deactivated successfully. Jan 30 12:55:44.387151 dockerd[1679]: time="2025-01-30T12:55:44.387056239Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 12:55:44.387349 dockerd[1679]: time="2025-01-30T12:55:44.387245576Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 12:55:44.387415 dockerd[1679]: time="2025-01-30T12:55:44.387400547Z" level=info msg="Daemon has completed initialization" Jan 30 12:55:44.461870 dockerd[1679]: time="2025-01-30T12:55:44.461753411Z" level=info msg="API listen on /run/docker.sock" Jan 30 12:55:44.462628 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 12:55:45.260870 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 12:55:45.270229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:45.363866 containerd[1481]: time="2025-01-30T12:55:45.363455982Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 12:55:45.479139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:45.479475 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:55:45.562116 kubelet[1883]: E0130 12:55:45.561926 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:55:45.567663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:55:45.567886 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:55:46.084027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556934809.mount: Deactivated successfully. 
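dockerd's overlay2 warning above is keyed off a kernel build option: the native diff driver is disabled when CONFIG_OVERLAY_FS_REDIRECT_DIR is enabled. One way to confirm the option on a running kernel, assuming it was built with IKCONFIG_PROC so that /proc/config.gz exists (not guaranteed on every image):

    # Check the kernel option behind the overlay2 warning above.
    # Assumes the kernel exposes /proc/config.gz (IKCONFIG_PROC=y).
    import gzip

    with gzip.open("/proc/config.gz", "rt") as f:
        for line in f:
            if "OVERLAY_FS_REDIRECT_DIR" in line:
                print(line.strip())   # expect CONFIG_OVERLAY_FS_REDIRECT_DIR=y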
Jan 30 12:55:47.552197 containerd[1481]: time="2025-01-30T12:55:47.552127129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.554705 containerd[1481]: time="2025-01-30T12:55:47.554640287Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 12:55:47.556915 containerd[1481]: time="2025-01-30T12:55:47.556871920Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.573020 containerd[1481]: time="2025-01-30T12:55:47.572933849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.574597 containerd[1481]: time="2025-01-30T12:55:47.573955186Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 2.210452942s" Jan 30 12:55:47.574597 containerd[1481]: time="2025-01-30T12:55:47.574006024Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 12:55:47.574950 containerd[1481]: time="2025-01-30T12:55:47.574926532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 12:55:49.119839 containerd[1481]: time="2025-01-30T12:55:49.119468669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:49.129817 containerd[1481]: time="2025-01-30T12:55:49.129717331Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 12:55:49.133587 containerd[1481]: time="2025-01-30T12:55:49.133504109Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:49.143830 containerd[1481]: time="2025-01-30T12:55:49.143712651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:49.144942 containerd[1481]: time="2025-01-30T12:55:49.144784134Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.569809825s" Jan 30 12:55:49.144942 containerd[1481]: time="2025-01-30T12:55:49.144824109Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 12:55:49.146557 
containerd[1481]: time="2025-01-30T12:55:49.146366140Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 12:55:50.434030 containerd[1481]: time="2025-01-30T12:55:50.432794563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.436488 containerd[1481]: time="2025-01-30T12:55:50.436380852Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 12:55:50.440153 containerd[1481]: time="2025-01-30T12:55:50.439989338Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.448473 containerd[1481]: time="2025-01-30T12:55:50.448368161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.450976 containerd[1481]: time="2025-01-30T12:55:50.450818690Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.304411792s" Jan 30 12:55:50.450976 containerd[1481]: time="2025-01-30T12:55:50.450869523Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 12:55:50.451645 containerd[1481]: time="2025-01-30T12:55:50.451614301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 12:55:51.673264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042235287.mount: Deactivated successfully. 
Jan 30 12:55:52.360889 containerd[1481]: time="2025-01-30T12:55:52.359891681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:52.363971 containerd[1481]: time="2025-01-30T12:55:52.363895503Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 12:55:52.366411 containerd[1481]: time="2025-01-30T12:55:52.366342920Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:52.374635 containerd[1481]: time="2025-01-30T12:55:52.374546856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:52.375827 containerd[1481]: time="2025-01-30T12:55:52.375465078Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.923806157s" Jan 30 12:55:52.375827 containerd[1481]: time="2025-01-30T12:55:52.375628357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 12:55:52.377030 containerd[1481]: time="2025-01-30T12:55:52.376822589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 12:55:52.385658 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 12:55:52.921656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378599847.mount: Deactivated successfully. 
Jan 30 12:55:54.146694 containerd[1481]: time="2025-01-30T12:55:54.145859646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.148392 containerd[1481]: time="2025-01-30T12:55:54.148304248Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 12:55:54.150377 containerd[1481]: time="2025-01-30T12:55:54.150324907Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.158854 containerd[1481]: time="2025-01-30T12:55:54.158733516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.161265 containerd[1481]: time="2025-01-30T12:55:54.161016879Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.784150362s" Jan 30 12:55:54.161265 containerd[1481]: time="2025-01-30T12:55:54.161076208Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 12:55:54.162241 containerd[1481]: time="2025-01-30T12:55:54.161907986Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 12:55:54.650516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041934876.mount: Deactivated successfully. 
Jan 30 12:55:54.666055 containerd[1481]: time="2025-01-30T12:55:54.665936156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.668686 containerd[1481]: time="2025-01-30T12:55:54.668570311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 12:55:54.671979 containerd[1481]: time="2025-01-30T12:55:54.671830300Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.680747 containerd[1481]: time="2025-01-30T12:55:54.680634791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:54.685975 containerd[1481]: time="2025-01-30T12:55:54.685884680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 523.930889ms" Jan 30 12:55:54.687467 containerd[1481]: time="2025-01-30T12:55:54.686216901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 12:55:54.688507 containerd[1481]: time="2025-01-30T12:55:54.688141978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 12:55:55.326373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515443151.mount: Deactivated successfully. Jan 30 12:55:55.445082 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 30 12:55:55.819408 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 12:55:55.828245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:56.088139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:56.091940 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:55:56.166731 kubelet[2045]: E0130 12:55:56.166637 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:55:56.169751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:55:56.169973 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
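kubelet.service has now failed repeatedly for the same missing-config reason, and systemd's restart bookkeeping is visible in the timestamps: each "Scheduled restart job" entry lands about 10.25 s after the preceding failure, consistent with a RestartSec on the order of 10 s plus scheduling overhead. Computing the intervals from the journal times above:

    # Restart cadence of kubelet.service, from the journal timestamps above.
    from datetime import datetime

    FMT = "%H:%M:%S.%f"
    pairs = [("12:55:35.010424", "12:55:45.260870"),   # failure -> restart #1
             ("12:55:45.567886", "12:55:55.819408")]   # failure -> restart #2
    for failed, restarted in pairs:
        delta = datetime.strptime(restarted, FMT) - datetime.strptime(failed, FMT)
        print(f"{delta.total_seconds():.3f} s")        # ~10.25 s each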
Jan 30 12:55:57.889568 containerd[1481]: time="2025-01-30T12:55:57.889458612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:57.893278 containerd[1481]: time="2025-01-30T12:55:57.893189048Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 12:55:57.897821 containerd[1481]: time="2025-01-30T12:55:57.896989451Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:57.906209 containerd[1481]: time="2025-01-30T12:55:57.906139661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:57.908416 containerd[1481]: time="2025-01-30T12:55:57.908352290Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.220156983s" Jan 30 12:55:57.908624 containerd[1481]: time="2025-01-30T12:55:57.908601544Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 12:56:01.153651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:01.172314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:56:01.224415 systemd[1]: Reloading requested from client PID 2113 ('systemctl') (unit session-7.scope)... Jan 30 12:56:01.224444 systemd[1]: Reloading... Jan 30 12:56:01.417367 zram_generator::config[2152]: No configuration found. Jan 30 12:56:01.642388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:56:01.818237 systemd[1]: Reloading finished in 592 ms. Jan 30 12:56:01.884905 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 12:56:01.885028 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 12:56:01.885518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:01.891650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:56:02.066578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:02.091503 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:56:02.160096 kubelet[2205]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:56:02.161809 kubelet[2205]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
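Each image pull above logs both a "bytes read" figure and an elapsed time, which gives a rough effective throughput per image. The figures below are copied from the log; "bytes read" differs slightly from the reported image size, so treat the result as approximate:

    # Rough pull throughput from the bytes-read / elapsed figures logged above.
    pulls = {
        "kube-apiserver:v1.32.1": (28_674_824, 2.210452942),
        "kube-proxy:v1.32.1":     (30_909_466, 1.923806157),
        "etcd:3.5.16-0":          (57_551_320, 3.220156983),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 2**20:.1f} MiB/s")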
Jan 30 12:56:02.161809 kubelet[2205]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:56:02.161809 kubelet[2205]: I0130 12:56:02.160751 2205 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:56:02.690065 kubelet[2205]: I0130 12:56:02.690002 2205 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 12:56:02.690319 kubelet[2205]: I0130 12:56:02.690303 2205 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:56:02.691006 kubelet[2205]: I0130 12:56:02.690971 2205 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 12:56:02.753553 kubelet[2205]: I0130 12:56:02.753496 2205 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:56:02.757749 kubelet[2205]: E0130 12:56:02.757566 2205 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://209.38.73.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:02.782990 kubelet[2205]: E0130 12:56:02.782785 2205 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 12:56:02.782990 kubelet[2205]: I0130 12:56:02.782848 2205 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 12:56:02.790033 kubelet[2205]: I0130 12:56:02.789971 2205 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 12:56:02.791980 kubelet[2205]: I0130 12:56:02.791842 2205 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:56:02.792480 kubelet[2205]: I0130 12:56:02.791957 2205 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-8-ccc447c07f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 12:56:02.792480 kubelet[2205]: I0130 12:56:02.792381 2205 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:56:02.792480 kubelet[2205]: I0130 12:56:02.792398 2205 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 12:56:02.792856 kubelet[2205]: I0130 12:56:02.792631 2205 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:02.801284 kubelet[2205]: I0130 12:56:02.801186 2205 kubelet.go:446] "Attempting to sync node with API server" Jan 30 12:56:02.801284 kubelet[2205]: I0130 12:56:02.801254 2205 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:56:02.801284 kubelet[2205]: I0130 12:56:02.801294 2205 kubelet.go:352] "Adding apiserver pod source" Jan 30 12:56:02.803495 kubelet[2205]: I0130 12:56:02.801312 2205 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:56:02.814758 kubelet[2205]: W0130 12:56:02.814171 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.73.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 209.38.73.11:6443: connect: connection refused Jan 30 12:56:02.814758 kubelet[2205]: E0130 12:56:02.814303 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://209.38.73.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:02.814758 kubelet[2205]: I0130 
12:56:02.814507 2205 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 12:56:02.822072 kubelet[2205]: I0130 12:56:02.822021 2205 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:56:02.823401 kubelet[2205]: W0130 12:56:02.823138 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.73.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-8-ccc447c07f&limit=500&resourceVersion=0": dial tcp 209.38.73.11:6443: connect: connection refused Jan 30 12:56:02.823401 kubelet[2205]: E0130 12:56:02.823250 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://209.38.73.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-8-ccc447c07f&limit=500&resourceVersion=0\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:02.823401 kubelet[2205]: W0130 12:56:02.823435 2205 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 12:56:02.824999 kubelet[2205]: I0130 12:56:02.824620 2205 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 12:56:02.824999 kubelet[2205]: I0130 12:56:02.824673 2205 server.go:1287] "Started kubelet" Jan 30 12:56:02.828805 kubelet[2205]: I0130 12:56:02.827882 2205 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:56:02.829825 kubelet[2205]: I0130 12:56:02.829766 2205 server.go:490] "Adding debug handlers to kubelet server" Jan 30 12:56:02.838766 kubelet[2205]: I0130 12:56:02.838664 2205 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:56:02.839378 kubelet[2205]: I0130 12:56:02.839342 2205 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:56:02.841799 kubelet[2205]: I0130 12:56:02.841738 2205 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:56:02.849616 kubelet[2205]: E0130 12:56:02.849563 2205 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" Jan 30 12:56:02.849616 kubelet[2205]: E0130 12:56:02.840288 2205 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.73.11:6443/api/v1/namespaces/default/events\": dial tcp 209.38.73.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-8-ccc447c07f.181f79ac76e96a3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-8-ccc447c07f,UID:ci-4186.1.0-8-ccc447c07f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-8-ccc447c07f,},FirstTimestamp:2025-01-30 12:56:02.824645178 +0000 UTC m=+0.727926076,LastTimestamp:2025-01-30 12:56:02.824645178 +0000 UTC m=+0.727926076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-8-ccc447c07f,}" Jan 30 12:56:02.852099 kubelet[2205]: E0130 12:56:02.851404 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://209.38.73.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-8-ccc447c07f?timeout=10s\": dial tcp 209.38.73.11:6443: connect: connection refused" interval="200ms" Jan 30 12:56:02.852099 kubelet[2205]: I0130 12:56:02.842107 2205 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 12:56:02.852099 kubelet[2205]: I0130 12:56:02.851955 2205 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 12:56:02.852351 kubelet[2205]: I0130 12:56:02.852164 2205 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:56:02.855989 kubelet[2205]: I0130 12:56:02.853548 2205 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:56:02.855989 kubelet[2205]: W0130 12:56:02.854110 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.73.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.73.11:6443: connect: connection refused Jan 30 12:56:02.855989 kubelet[2205]: E0130 12:56:02.854182 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://209.38.73.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:02.869568 kubelet[2205]: I0130 12:56:02.869515 2205 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:56:02.869568 kubelet[2205]: I0130 12:56:02.869578 2205 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:56:02.870164 kubelet[2205]: I0130 12:56:02.869847 2205 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:56:02.872886 kubelet[2205]: E0130 12:56:02.872824 2205 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:56:02.893789 kubelet[2205]: I0130 12:56:02.893062 2205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:56:02.895659 kubelet[2205]: I0130 12:56:02.895518 2205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 12:56:02.895659 kubelet[2205]: I0130 12:56:02.895574 2205 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 12:56:02.895659 kubelet[2205]: I0130 12:56:02.895609 2205 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 12:56:02.895659 kubelet[2205]: I0130 12:56:02.895619 2205 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 12:56:02.896214 kubelet[2205]: E0130 12:56:02.895721 2205 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:56:02.911238 kubelet[2205]: W0130 12:56:02.910316 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.73.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.73.11:6443: connect: connection refused Jan 30 12:56:02.911238 kubelet[2205]: E0130 12:56:02.910375 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://209.38.73.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:02.912205 kubelet[2205]: I0130 12:56:02.912164 2205 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 12:56:02.912205 kubelet[2205]: I0130 12:56:02.912196 2205 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 12:56:02.912432 kubelet[2205]: I0130 12:56:02.912248 2205 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:02.923650 kubelet[2205]: I0130 12:56:02.923557 2205 policy_none.go:49] "None policy: Start" Jan 30 12:56:02.923650 kubelet[2205]: I0130 12:56:02.923629 2205 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 12:56:02.923650 kubelet[2205]: I0130 12:56:02.923650 2205 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:56:02.942074 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 12:56:02.950722 kubelet[2205]: E0130 12:56:02.950646 2205 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" Jan 30 12:56:02.962525 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 12:56:02.970294 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 12:56:02.983625 kubelet[2205]: I0130 12:56:02.982468 2205 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:56:02.983625 kubelet[2205]: I0130 12:56:02.982881 2205 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 12:56:02.983625 kubelet[2205]: I0130 12:56:02.982921 2205 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:56:02.983625 kubelet[2205]: I0130 12:56:02.983477 2205 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:56:02.985290 kubelet[2205]: E0130 12:56:02.985255 2205 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 12:56:02.985417 kubelet[2205]: E0130 12:56:02.985306 2205 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-8-ccc447c07f\" not found" Jan 30 12:56:03.011418 systemd[1]: Created slice kubepods-burstable-pod6f4cde05045e3f0f6a37c8c4ad562dd0.slice - libcontainer container kubepods-burstable-pod6f4cde05045e3f0f6a37c8c4ad562dd0.slice. Jan 30 12:56:03.022683 kubelet[2205]: E0130 12:56:03.022609 2205 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.028725 systemd[1]: Created slice kubepods-burstable-pode339c807fd702c48bd707be3a8923564.slice - libcontainer container kubepods-burstable-pode339c807fd702c48bd707be3a8923564.slice. Jan 30 12:56:03.034306 kubelet[2205]: E0130 12:56:03.034261 2205 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.037141 systemd[1]: Created slice kubepods-burstable-pod5265fc38822794995330deed242dabd6.slice - libcontainer container kubepods-burstable-pod5265fc38822794995330deed242dabd6.slice. Jan 30 12:56:03.040517 kubelet[2205]: E0130 12:56:03.040449 2205 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.052223 kubelet[2205]: E0130 12:56:03.052158 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.73.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-8-ccc447c07f?timeout=10s\": dial tcp 209.38.73.11:6443: connect: connection refused" interval="400ms" Jan 30 12:56:03.054824 kubelet[2205]: I0130 12:56:03.054740 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f4cde05045e3f0f6a37c8c4ad562dd0-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-8-ccc447c07f\" (UID: \"6f4cde05045e3f0f6a37c8c4ad562dd0\") " pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.054998 kubelet[2205]: I0130 12:56:03.054841 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f4cde05045e3f0f6a37c8c4ad562dd0-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-8-ccc447c07f\" (UID: \"6f4cde05045e3f0f6a37c8c4ad562dd0\") " pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.054998 kubelet[2205]: I0130 12:56:03.054887 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.054998 kubelet[2205]: I0130 12:56:03.054924 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " 
pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.055179 kubelet[2205]: I0130 12:56:03.054994 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.055179 kubelet[2205]: I0130 12:56:03.055050 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f4cde05045e3f0f6a37c8c4ad562dd0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-8-ccc447c07f\" (UID: \"6f4cde05045e3f0f6a37c8c4ad562dd0\") " pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.055179 kubelet[2205]: I0130 12:56:03.055080 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.055179 kubelet[2205]: I0130 12:56:03.055106 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.055179 kubelet[2205]: I0130 12:56:03.055145 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5265fc38822794995330deed242dabd6-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-8-ccc447c07f\" (UID: \"5265fc38822794995330deed242dabd6\") " pod="kube-system/kube-scheduler-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.085266 kubelet[2205]: I0130 12:56:03.084807 2205 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.085474 kubelet[2205]: E0130 12:56:03.085432 2205 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://209.38.73.11:6443/api/v1/nodes\": dial tcp 209.38.73.11:6443: connect: connection refused" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.289049 kubelet[2205]: I0130 12:56:03.288416 2205 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.291258 kubelet[2205]: E0130 12:56:03.289181 2205 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://209.38.73.11:6443/api/v1/nodes\": dial tcp 209.38.73.11:6443: connect: connection refused" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.324257 kubelet[2205]: E0130 12:56:03.324187 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:03.325185 containerd[1481]: time="2025-01-30T12:56:03.325132804Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-8-ccc447c07f,Uid:6f4cde05045e3f0f6a37c8c4ad562dd0,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:03.335859 kubelet[2205]: E0130 12:56:03.334925 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:03.336150 containerd[1481]: time="2025-01-30T12:56:03.335677264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-8-ccc447c07f,Uid:e339c807fd702c48bd707be3a8923564,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:03.340380 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 30 12:56:03.341048 kubelet[2205]: E0130 12:56:03.340977 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:03.341757 containerd[1481]: time="2025-01-30T12:56:03.341505650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-8-ccc447c07f,Uid:5265fc38822794995330deed242dabd6,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:03.453934 kubelet[2205]: E0130 12:56:03.453846 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.73.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-8-ccc447c07f?timeout=10s\": dial tcp 209.38.73.11:6443: connect: connection refused" interval="800ms" Jan 30 12:56:03.691748 kubelet[2205]: I0130 12:56:03.691189 2205 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.691748 kubelet[2205]: E0130 12:56:03.691596 2205 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://209.38.73.11:6443/api/v1/nodes\": dial tcp 209.38.73.11:6443: connect: connection refused" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:03.765288 kubelet[2205]: W0130 12:56:03.765187 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.73.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 209.38.73.11:6443: connect: connection refused Jan 30 12:56:03.765703 kubelet[2205]: E0130 12:56:03.765650 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://209.38.73.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:03.917712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360561800.mount: Deactivated successfully. 
Jan 30 12:56:03.923158 kubelet[2205]: W0130 12:56:03.923013 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.73.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.73.11:6443: connect: connection refused Jan 30 12:56:03.923158 kubelet[2205]: E0130 12:56:03.923119 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://209.38.73.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:03.962991 containerd[1481]: time="2025-01-30T12:56:03.962764055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:03.967630 containerd[1481]: time="2025-01-30T12:56:03.967491847Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 12:56:03.979693 containerd[1481]: time="2025-01-30T12:56:03.979042649Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:03.986666 containerd[1481]: time="2025-01-30T12:56:03.986572666Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:03.990618 containerd[1481]: time="2025-01-30T12:56:03.990541786Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:03.994755 containerd[1481]: time="2025-01-30T12:56:03.994648795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:56:03.999400 containerd[1481]: time="2025-01-30T12:56:03.999312399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:56:04.002374 containerd[1481]: time="2025-01-30T12:56:04.002274606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:56:04.004554 containerd[1481]: time="2025-01-30T12:56:04.004487144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 671.081019ms" Jan 30 12:56:04.010446 kubelet[2205]: W0130 12:56:04.010282 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.73.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-8-ccc447c07f&limit=500&resourceVersion=0": dial tcp 209.38.73.11:6443: connect: connection refused Jan 30 12:56:04.010446 kubelet[2205]: E0130 12:56:04.010395 2205 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://209.38.73.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-8-ccc447c07f&limit=500&resourceVersion=0\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:04.013039 containerd[1481]: time="2025-01-30T12:56:04.012986543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.137027ms" Jan 30 12:56:04.030909 containerd[1481]: time="2025-01-30T12:56:04.030830118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 689.216881ms" Jan 30 12:56:04.220915 kubelet[2205]: W0130 12:56:04.218527 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.73.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.73.11:6443: connect: connection refused Jan 30 12:56:04.220915 kubelet[2205]: E0130 12:56:04.218588 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://209.38.73.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:04.243549 containerd[1481]: time="2025-01-30T12:56:04.242544437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:04.244248 containerd[1481]: time="2025-01-30T12:56:04.243847906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:04.244248 containerd[1481]: time="2025-01-30T12:56:04.243897657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.244248 containerd[1481]: time="2025-01-30T12:56:04.244072878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.245505 containerd[1481]: time="2025-01-30T12:56:04.244534324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:04.245505 containerd[1481]: time="2025-01-30T12:56:04.245330339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:04.245505 containerd[1481]: time="2025-01-30T12:56:04.245361645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.246161 containerd[1481]: time="2025-01-30T12:56:04.246018565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.249898 containerd[1481]: time="2025-01-30T12:56:04.249056161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:04.249898 containerd[1481]: time="2025-01-30T12:56:04.249140290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:04.249898 containerd[1481]: time="2025-01-30T12:56:04.249162490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.249898 containerd[1481]: time="2025-01-30T12:56:04.249313180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:04.255536 kubelet[2205]: E0130 12:56:04.255463 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.73.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-8-ccc447c07f?timeout=10s\": dial tcp 209.38.73.11:6443: connect: connection refused" interval="1.6s" Jan 30 12:56:04.286102 systemd[1]: Started cri-containerd-4db85164a9ddc37fd81670a952be770406ed66f92e5aba7a1ab1a1c798763777.scope - libcontainer container 4db85164a9ddc37fd81670a952be770406ed66f92e5aba7a1ab1a1c798763777. Jan 30 12:56:04.316238 systemd[1]: Started cri-containerd-0638fd6103acebcafe15cfa623fe6b2bceb196bf37de86583b9319594ae9baef.scope - libcontainer container 0638fd6103acebcafe15cfa623fe6b2bceb196bf37de86583b9319594ae9baef. Jan 30 12:56:04.319971 systemd[1]: Started cri-containerd-f1472187b28b2f7dcaf0ab2ea893200709508c17702871152195e4bfb5276fc4.scope - libcontainer container f1472187b28b2f7dcaf0ab2ea893200709508c17702871152195e4bfb5276fc4. 
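Note how the "Failed to ensure lease exists, will retry" interval has doubled on each failure: 200ms at 12:56:02, then 400ms, 800ms, and 1.6s here. A stdlib sketch of that exponential backoff pattern; ensureLease stands in for the real GET/create of the coordination.k8s.io/v1 Lease named after the node, and the cap is illustrative (the real controller also bounds its backoff rather than growing forever).

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the Lease API call that is being refused above.
func ensureLease() error {
	return errors.New("dial tcp 209.38.73.11:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // illustrative cap
	for attempt := 1; attempt <= 4; attempt++ {
		if err := ensureLease(); err == nil {
			return
		}
		fmt.Printf("failed to ensure lease exists, will retry; interval=%v\n", interval)
		time.Sleep(interval)
		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s, matching this log
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```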
Jan 30 12:56:04.424102 containerd[1481]: time="2025-01-30T12:56:04.423136938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-8-ccc447c07f,Uid:6f4cde05045e3f0f6a37c8c4ad562dd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1472187b28b2f7dcaf0ab2ea893200709508c17702871152195e4bfb5276fc4\"" Jan 30 12:56:04.428545 containerd[1481]: time="2025-01-30T12:56:04.427843967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-8-ccc447c07f,Uid:e339c807fd702c48bd707be3a8923564,Namespace:kube-system,Attempt:0,} returns sandbox id \"4db85164a9ddc37fd81670a952be770406ed66f92e5aba7a1ab1a1c798763777\"" Jan 30 12:56:04.432469 kubelet[2205]: E0130 12:56:04.430870 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:04.432469 kubelet[2205]: E0130 12:56:04.432220 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:04.437384 containerd[1481]: time="2025-01-30T12:56:04.437308950Z" level=info msg="CreateContainer within sandbox \"4db85164a9ddc37fd81670a952be770406ed66f92e5aba7a1ab1a1c798763777\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 12:56:04.437913 containerd[1481]: time="2025-01-30T12:56:04.437878868Z" level=info msg="CreateContainer within sandbox \"f1472187b28b2f7dcaf0ab2ea893200709508c17702871152195e4bfb5276fc4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 12:56:04.447188 containerd[1481]: time="2025-01-30T12:56:04.447083745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-8-ccc447c07f,Uid:5265fc38822794995330deed242dabd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0638fd6103acebcafe15cfa623fe6b2bceb196bf37de86583b9319594ae9baef\"" Jan 30 12:56:04.449860 kubelet[2205]: E0130 12:56:04.449622 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:04.453153 containerd[1481]: time="2025-01-30T12:56:04.453099338Z" level=info msg="CreateContainer within sandbox \"0638fd6103acebcafe15cfa623fe6b2bceb196bf37de86583b9319594ae9baef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 12:56:04.495007 kubelet[2205]: I0130 12:56:04.493336 2205 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:04.495007 kubelet[2205]: E0130 12:56:04.494080 2205 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://209.38.73.11:6443/api/v1/nodes\": dial tcp 209.38.73.11:6443: connect: connection refused" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:04.508338 containerd[1481]: time="2025-01-30T12:56:04.508231788Z" level=info msg="CreateContainer within sandbox \"4db85164a9ddc37fd81670a952be770406ed66f92e5aba7a1ab1a1c798763777\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"213038510be4c29abd537abe925576ea0050edb7d6b0551ea626cadc9d461447\"" Jan 30 12:56:04.509230 containerd[1481]: time="2025-01-30T12:56:04.509196619Z" level=info msg="StartContainer for \"213038510be4c29abd537abe925576ea0050edb7d6b0551ea626cadc9d461447\"" Jan 30 12:56:04.520543 
containerd[1481]: time="2025-01-30T12:56:04.520465988Z" level=info msg="CreateContainer within sandbox \"0638fd6103acebcafe15cfa623fe6b2bceb196bf37de86583b9319594ae9baef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"528d5d4864c643c275688e926ae39d09154599e1d473ff4ce95491d823e47a40\"" Jan 30 12:56:04.524838 containerd[1481]: time="2025-01-30T12:56:04.522682339Z" level=info msg="StartContainer for \"528d5d4864c643c275688e926ae39d09154599e1d473ff4ce95491d823e47a40\"" Jan 30 12:56:04.531283 containerd[1481]: time="2025-01-30T12:56:04.531215579Z" level=info msg="CreateContainer within sandbox \"f1472187b28b2f7dcaf0ab2ea893200709508c17702871152195e4bfb5276fc4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a244c164e4ef72a5e4ff0043caa055f96fa17e456041a2412b1822ddeee86bca\"" Jan 30 12:56:04.532542 containerd[1481]: time="2025-01-30T12:56:04.532490769Z" level=info msg="StartContainer for \"a244c164e4ef72a5e4ff0043caa055f96fa17e456041a2412b1822ddeee86bca\"" Jan 30 12:56:04.569863 systemd[1]: Started cri-containerd-213038510be4c29abd537abe925576ea0050edb7d6b0551ea626cadc9d461447.scope - libcontainer container 213038510be4c29abd537abe925576ea0050edb7d6b0551ea626cadc9d461447. Jan 30 12:56:04.594574 systemd[1]: Started cri-containerd-528d5d4864c643c275688e926ae39d09154599e1d473ff4ce95491d823e47a40.scope - libcontainer container 528d5d4864c643c275688e926ae39d09154599e1d473ff4ce95491d823e47a40. Jan 30 12:56:04.632091 systemd[1]: Started cri-containerd-a244c164e4ef72a5e4ff0043caa055f96fa17e456041a2412b1822ddeee86bca.scope - libcontainer container a244c164e4ef72a5e4ff0043caa055f96fa17e456041a2412b1822ddeee86bca. Jan 30 12:56:04.707976 containerd[1481]: time="2025-01-30T12:56:04.707890251Z" level=info msg="StartContainer for \"213038510be4c29abd537abe925576ea0050edb7d6b0551ea626cadc9d461447\" returns successfully" Jan 30 12:56:04.724822 containerd[1481]: time="2025-01-30T12:56:04.724319838Z" level=info msg="StartContainer for \"528d5d4864c643c275688e926ae39d09154599e1d473ff4ce95491d823e47a40\" returns successfully" Jan 30 12:56:04.771676 containerd[1481]: time="2025-01-30T12:56:04.771512261Z" level=info msg="StartContainer for \"a244c164e4ef72a5e4ff0043caa055f96fa17e456041a2412b1822ddeee86bca\" returns successfully" Jan 30 12:56:04.814823 kubelet[2205]: E0130 12:56:04.814067 2205 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://209.38.73.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 209.38.73.11:6443: connect: connection refused" logger="UnhandledError" Jan 30 12:56:04.930159 kubelet[2205]: E0130 12:56:04.928469 2205 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:04.930159 kubelet[2205]: E0130 12:56:04.928698 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:04.939453 kubelet[2205]: E0130 12:56:04.939161 2205 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:04.940109 kubelet[2205]: E0130 12:56:04.939765 2205 
kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:04.940109 kubelet[2205]: E0130 12:56:04.939925 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:04.940109 kubelet[2205]: E0130 12:56:04.939958 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:05.103372 kubelet[2205]: E0130 12:56:05.103043 2205 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.73.11:6443/api/v1/namespaces/default/events\": dial tcp 209.38.73.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-8-ccc447c07f.181f79ac76e96a3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-8-ccc447c07f,UID:ci-4186.1.0-8-ccc447c07f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-8-ccc447c07f,},FirstTimestamp:2025-01-30 12:56:02.824645178 +0000 UTC m=+0.727926076,LastTimestamp:2025-01-30 12:56:02.824645178 +0000 UTC m=+0.727926076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-8-ccc447c07f,}" Jan 30 12:56:05.952264 kubelet[2205]: E0130 12:56:05.952160 2205 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:05.954376 kubelet[2205]: E0130 12:56:05.954346 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:05.954601 kubelet[2205]: E0130 12:56:05.952523 2205 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:05.954939 kubelet[2205]: E0130 12:56:05.954919 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:06.095977 kubelet[2205]: I0130 12:56:06.095925 2205 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:07.039119 kubelet[2205]: E0130 12:56:07.039065 2205 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-8-ccc447c07f\" not found" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:07.101330 kubelet[2205]: I0130 12:56:07.100767 2205 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:07.151374 kubelet[2205]: I0130 12:56:07.151321 2205 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:07.164837 kubelet[2205]: E0130 12:56:07.162971 2205 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.0-8-ccc447c07f\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:07.164837 kubelet[2205]: I0130 12:56:07.163018 2205 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:07.166070 kubelet[2205]: E0130 12:56:07.166004 2205 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:07.166070 kubelet[2205]: I0130 12:56:07.166042 2205 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:07.169238 kubelet[2205]: E0130 12:56:07.169166 2205 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4186.1.0-8-ccc447c07f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:07.810200 kubelet[2205]: I0130 12:56:07.810128 2205 apiserver.go:52] "Watching apiserver" Jan 30 12:56:07.852843 kubelet[2205]: I0130 12:56:07.852724 2205 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:56:09.602695 systemd[1]: Reloading requested from client PID 2480 ('systemctl') (unit session-7.scope)... Jan 30 12:56:09.603145 systemd[1]: Reloading... Jan 30 12:56:09.749826 zram_generator::config[2519]: No configuration found. Jan 30 12:56:09.898919 kubelet[2205]: I0130 12:56:09.896872 2205 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:09.908353 kubelet[2205]: W0130 12:56:09.907308 2205 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 12:56:09.908353 kubelet[2205]: E0130 12:56:09.907692 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:09.952296 kubelet[2205]: E0130 12:56:09.952172 2205 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:09.959895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:56:10.136734 systemd[1]: Reloading finished in 532 ms. Jan 30 12:56:10.204523 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:56:10.219415 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 12:56:10.219825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:56:10.219911 systemd[1]: kubelet.service: Consumed 1.224s CPU time, 120.6M memory peak, 0B memory swap peak. Jan 30 12:56:10.232332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:56:10.425088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 12:56:10.429154 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:56:10.517893 kubelet[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:56:10.517893 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 12:56:10.517893 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:56:10.517893 kubelet[2570]: I0130 12:56:10.516927 2570 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:56:10.533039 kubelet[2570]: I0130 12:56:10.532979 2570 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 12:56:10.533039 kubelet[2570]: I0130 12:56:10.533034 2570 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:56:10.533654 kubelet[2570]: I0130 12:56:10.533617 2570 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 12:56:10.538272 kubelet[2570]: I0130 12:56:10.538203 2570 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 12:56:10.546303 kubelet[2570]: I0130 12:56:10.545762 2570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:56:10.551961 kubelet[2570]: E0130 12:56:10.551879 2570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 12:56:10.551961 kubelet[2570]: I0130 12:56:10.551917 2570 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 12:56:10.556481 kubelet[2570]: I0130 12:56:10.556288 2570 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 12:56:10.556751 kubelet[2570]: I0130 12:56:10.556679 2570 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:56:10.557066 kubelet[2570]: I0130 12:56:10.556755 2570 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-8-ccc447c07f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 12:56:10.557199 kubelet[2570]: I0130 12:56:10.557071 2570 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:56:10.557199 kubelet[2570]: I0130 12:56:10.557091 2570 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 12:56:10.557199 kubelet[2570]: I0130 12:56:10.557159 2570 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:10.557463 kubelet[2570]: I0130 12:56:10.557404 2570 kubelet.go:446] "Attempting to sync node with API server" Jan 30 12:56:10.557529 kubelet[2570]: I0130 12:56:10.557475 2570 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:56:10.557529 kubelet[2570]: I0130 12:56:10.557513 2570 kubelet.go:352] "Adding apiserver pod source" Jan 30 12:56:10.557632 kubelet[2570]: I0130 12:56:10.557529 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:56:10.559523 kubelet[2570]: I0130 12:56:10.559496 2570 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 12:56:10.562357 kubelet[2570]: I0130 12:56:10.562308 2570 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:56:10.563198 kubelet[2570]: I0130 12:56:10.563157 2570 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 12:56:10.563391 kubelet[2570]: I0130 12:56:10.563217 2570 server.go:1287] "Started kubelet" Jan 30 12:56:10.577813 kubelet[2570]: I0130 12:56:10.577237 2570 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:56:10.580038 kubelet[2570]: I0130 12:56:10.580006 2570 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:56:10.582437 kubelet[2570]: I0130 12:56:10.582395 2570 server.go:490] "Adding debug handlers to kubelet server" Jan 30 12:56:10.587439 kubelet[2570]: I0130 12:56:10.586549 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:56:10.587439 kubelet[2570]: I0130 12:56:10.586930 2570 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:56:10.594348 kubelet[2570]: I0130 12:56:10.594297 2570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 12:56:10.604822 kubelet[2570]: I0130 12:56:10.602882 2570 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 12:56:10.604822 kubelet[2570]: E0130 12:56:10.603212 2570 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4186.1.0-8-ccc447c07f\" not found" Jan 30 12:56:10.607177 kubelet[2570]: I0130 12:56:10.607143 2570 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:56:10.607585 kubelet[2570]: I0130 12:56:10.607522 2570 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:56:10.620333 kubelet[2570]: I0130 12:56:10.620272 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:56:10.622150 kubelet[2570]: I0130 12:56:10.622107 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 12:56:10.622467 kubelet[2570]: I0130 12:56:10.622450 2570 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 12:56:10.622585 kubelet[2570]: I0130 12:56:10.622574 2570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 12:56:10.622856 kubelet[2570]: I0130 12:56:10.622655 2570 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 12:56:10.622856 kubelet[2570]: E0130 12:56:10.622736 2570 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:56:10.646538 sudo[2587]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 12:56:10.647177 sudo[2587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 12:56:10.650485 kubelet[2570]: E0130 12:56:10.649461 2570 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:56:10.654575 kubelet[2570]: I0130 12:56:10.651291 2570 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:56:10.654575 kubelet[2570]: I0130 12:56:10.651319 2570 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:56:10.654575 kubelet[2570]: I0130 12:56:10.651433 2570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:56:10.723847 kubelet[2570]: E0130 12:56:10.723710 2570 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 12:56:10.744321 kubelet[2570]: I0130 12:56:10.744277 2570 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 12:56:10.744559 kubelet[2570]: I0130 12:56:10.744538 2570 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 12:56:10.744829 kubelet[2570]: I0130 12:56:10.744663 2570 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:56:10.745205 kubelet[2570]: I0130 12:56:10.745177 2570 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 12:56:10.746809 kubelet[2570]: I0130 12:56:10.745314 2570 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 12:56:10.746809 kubelet[2570]: I0130 12:56:10.745367 2570 policy_none.go:49] "None policy: Start" Jan 30 12:56:10.746809 kubelet[2570]: I0130 12:56:10.745386 2570 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 12:56:10.746809 kubelet[2570]: I0130 12:56:10.745416 2570 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:56:10.746809 kubelet[2570]: I0130 12:56:10.745643 2570 state_mem.go:75] "Updated machine memory state" Jan 30 12:56:10.761941 kubelet[2570]: I0130 12:56:10.761901 2570 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:56:10.763853 kubelet[2570]: I0130 12:56:10.763821 2570 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 12:56:10.764149 kubelet[2570]: I0130 12:56:10.764089 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:56:10.764921 kubelet[2570]: I0130 12:56:10.764877 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:56:10.770188 kubelet[2570]: E0130 12:56:10.770061 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 12:56:10.873098 kubelet[2570]: I0130 12:56:10.872929 2570 kubelet_node_status.go:76] "Attempting to register node" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:10.888171 kubelet[2570]: I0130 12:56:10.888081 2570 kubelet_node_status.go:125] "Node was previously registered" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:10.888642 kubelet[2570]: I0130 12:56:10.888623 2570 kubelet_node_status.go:79] "Successfully registered node" node="ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:10.925566 kubelet[2570]: I0130 12:56:10.925449 2570 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:10.929576 kubelet[2570]: I0130 12:56:10.929486 2570 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:10.932219 kubelet[2570]: I0130 12:56:10.932100 2570 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:10.942602 kubelet[2570]: W0130 12:56:10.942557 2570 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 12:56:10.942741 kubelet[2570]: E0130 12:56:10.942647 2570 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.0-8-ccc447c07f\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:10.944015 kubelet[2570]: W0130 12:56:10.943982 2570 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 12:56:10.944132 kubelet[2570]: W0130 12:56:10.944079 2570 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 12:56:11.009289 kubelet[2570]: I0130 12:56:11.009239 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.009289 kubelet[2570]: I0130 12:56:11.009286 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.009485 kubelet[2570]: I0130 12:56:11.009310 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.009485 kubelet[2570]: I0130 12:56:11.009350 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5265fc38822794995330deed242dabd6-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-8-ccc447c07f\" (UID: \"5265fc38822794995330deed242dabd6\") " pod="kube-system/kube-scheduler-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.009485 kubelet[2570]: I0130 12:56:11.009370 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f4cde05045e3f0f6a37c8c4ad562dd0-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-8-ccc447c07f\" (UID: \"6f4cde05045e3f0f6a37c8c4ad562dd0\") " pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.009485 kubelet[2570]: I0130 12:56:11.009404 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f4cde05045e3f0f6a37c8c4ad562dd0-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-8-ccc447c07f\" (UID: \"6f4cde05045e3f0f6a37c8c4ad562dd0\") " pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.009965 kubelet[2570]: I0130 12:56:11.009917 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f4cde05045e3f0f6a37c8c4ad562dd0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-8-ccc447c07f\" (UID: \"6f4cde05045e3f0f6a37c8c4ad562dd0\") " pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.010015 kubelet[2570]: I0130 12:56:11.009981 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.010060 kubelet[2570]: I0130 12:56:11.010010 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e339c807fd702c48bd707be3a8923564-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-8-ccc447c07f\" (UID: \"e339c807fd702c48bd707be3a8923564\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.244805 kubelet[2570]: E0130 12:56:11.243622 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:11.244973 kubelet[2570]: E0130 12:56:11.244823 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:11.245079 kubelet[2570]: E0130 12:56:11.245046 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:11.538855 sudo[2587]: pam_unix(sudo:session): session closed for user root Jan 30 12:56:11.559401 kubelet[2570]: I0130 12:56:11.559335 2570 apiserver.go:52] "Watching apiserver" Jan 30 12:56:11.607725 kubelet[2570]: I0130 12:56:11.607659 2570 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:56:11.703971 kubelet[2570]: E0130 12:56:11.701543 2570 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:11.703971 kubelet[2570]: I0130 12:56:11.701902 2570 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.703971 kubelet[2570]: E0130 12:56:11.702626 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:11.717972 kubelet[2570]: W0130 12:56:11.717571 2570 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 12:56:11.717972 kubelet[2570]: E0130 12:56:11.717665 2570 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4186.1.0-8-ccc447c07f\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" Jan 30 12:56:11.717972 kubelet[2570]: E0130 12:56:11.717865 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:11.771811 kubelet[2570]: I0130 12:56:11.769822 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-8-ccc447c07f" podStartSLOduration=1.76979469 podStartE2EDuration="1.76979469s" podCreationTimestamp="2025-01-30 12:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:11.751785225 +0000 UTC m=+1.312714679" watchObservedRunningTime="2025-01-30 12:56:11.76979469 +0000 UTC m=+1.330724141" Jan 30 12:56:11.789191 kubelet[2570]: I0130 12:56:11.788734 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-8-ccc447c07f" podStartSLOduration=1.7887115310000001 podStartE2EDuration="1.788711531s" podCreationTimestamp="2025-01-30 12:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:11.770194353 +0000 UTC m=+1.331123811" watchObservedRunningTime="2025-01-30 12:56:11.788711531 +0000 UTC m=+1.349640984" Jan 30 12:56:11.813799 kubelet[2570]: I0130 12:56:11.812261 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-8-ccc447c07f" podStartSLOduration=2.8122386 podStartE2EDuration="2.8122386s" podCreationTimestamp="2025-01-30 12:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:11.788926911 +0000 UTC m=+1.349856341" watchObservedRunningTime="2025-01-30 12:56:11.8122386 +0000 UTC m=+1.373168052" Jan 30 12:56:12.704697 kubelet[2570]: E0130 12:56:12.704634 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:12.706934 kubelet[2570]: E0130 12:56:12.706758 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:12.758738 kubelet[2570]: 
E0130 12:56:12.758621 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:13.384438 sudo[1661]: pam_unix(sudo:session): session closed for user root Jan 30 12:56:13.388368 sshd[1660]: Connection closed by 139.178.68.195 port 37046 Jan 30 12:56:13.389030 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:13.394858 systemd[1]: sshd@6-209.38.73.11:22-139.178.68.195:37046.service: Deactivated successfully. Jan 30 12:56:13.399985 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 12:56:13.401127 systemd[1]: session-7.scope: Consumed 5.923s CPU time, 139.5M memory peak, 0B memory swap peak. Jan 30 12:56:13.403909 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Jan 30 12:56:13.405969 systemd-logind[1455]: Removed session 7. Jan 30 12:56:13.706673 kubelet[2570]: E0130 12:56:13.706522 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:14.708883 kubelet[2570]: E0130 12:56:14.708634 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:15.675052 kubelet[2570]: I0130 12:56:15.674975 2570 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 12:56:15.675961 containerd[1481]: time="2025-01-30T12:56:15.675904535Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 12:56:15.677117 kubelet[2570]: I0130 12:56:15.676658 2570 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 12:56:15.712114 kubelet[2570]: E0130 12:56:15.711605 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:16.342034 kubelet[2570]: W0130 12:56:16.339591 2570 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4186.1.0-8-ccc447c07f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-8-ccc447c07f' and this object Jan 30 12:56:16.342034 kubelet[2570]: E0130 12:56:16.339648 2570 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4186.1.0-8-ccc447c07f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.0-8-ccc447c07f' and this object" logger="UnhandledError" Jan 30 12:56:16.349284 systemd[1]: Created slice kubepods-burstable-pod4e612718_f4bd_4961_95a1_789060e6c17e.slice - libcontainer container kubepods-burstable-pod4e612718_f4bd_4961_95a1_789060e6c17e.slice. 
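The warnings.go:70 entries above ("must not contain dots") come from Kubernetes name validation: the mirror pod that kubelet creates for each static pod is named <static-pod-name>-<node-name>, and this node's name, ci-4186.1.0-8-ccc447c07f, contains dots, which an RFC 1123 DNS label forbids. A minimal sketch of that check (the regex is the standard DNS-1123 label pattern; the helper function is ours):

    import re

    # RFC 1123 DNS label: lowercase alphanumerics and '-', at most 63 characters.
    DNS1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

    def is_dns1123_label(name: str) -> bool:
        return len(name) <= 63 and DNS1123_LABEL.match(name) is not None

    # Mirror pod names embed the node name, and this node name contains dots:
    assert not is_dns1123_label("kube-apiserver-ci-4186.1.0-8-ccc447c07f")
    assert is_dns1123_label("kube-apiserver")

The "already exists" error alongside the warning is benign: the mirror pods survived the kubelet restart, so re-creating them fails on the name collision.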
Jan 30 12:56:16.352711 kubelet[2570]: I0130 12:56:16.351118 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-hostproc\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.352711 kubelet[2570]: I0130 12:56:16.351301 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-cgroup\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.352987 kubelet[2570]: I0130 12:56:16.352965 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-config-path\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.353186 kubelet[2570]: I0130 12:56:16.353170 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35c0c305-248b-47b6-9b26-d1b700e5c000-lib-modules\") pod \"kube-proxy-srldg\" (UID: \"35c0c305-248b-47b6-9b26-d1b700e5c000\") " pod="kube-system/kube-proxy-srldg" Jan 30 12:56:16.353268 kubelet[2570]: I0130 12:56:16.353255 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cni-path\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.353336 kubelet[2570]: I0130 12:56:16.353319 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-lib-modules\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.353445 kubelet[2570]: I0130 12:56:16.353432 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35c0c305-248b-47b6-9b26-d1b700e5c000-xtables-lock\") pod \"kube-proxy-srldg\" (UID: \"35c0c305-248b-47b6-9b26-d1b700e5c000\") " pod="kube-system/kube-proxy-srldg" Jan 30 12:56:16.353507 kubelet[2570]: I0130 12:56:16.353497 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e612718-f4bd-4961-95a1-789060e6c17e-clustermesh-secrets\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.353563 kubelet[2570]: I0130 12:56:16.353553 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-host-proc-sys-kernel\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.353626 kubelet[2570]: I0130 12:56:16.353616 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" 
(UniqueName: \"kubernetes.io/configmap/35c0c305-248b-47b6-9b26-d1b700e5c000-kube-proxy\") pod \"kube-proxy-srldg\" (UID: \"35c0c305-248b-47b6-9b26-d1b700e5c000\") " pod="kube-system/kube-proxy-srldg" Jan 30 12:56:16.353713 kubelet[2570]: I0130 12:56:16.353694 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-hubble-tls\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.353976 kubelet[2570]: I0130 12:56:16.353954 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-run\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.354190 kubelet[2570]: I0130 12:56:16.354078 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-565ch\" (UniqueName: \"kubernetes.io/projected/35c0c305-248b-47b6-9b26-d1b700e5c000-kube-api-access-565ch\") pod \"kube-proxy-srldg\" (UID: \"35c0c305-248b-47b6-9b26-d1b700e5c000\") " pod="kube-system/kube-proxy-srldg" Jan 30 12:56:16.354375 kubelet[2570]: I0130 12:56:16.354358 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-host-proc-sys-net\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.354497 kubelet[2570]: I0130 12:56:16.354482 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwcpq\" (UniqueName: \"kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-kube-api-access-mwcpq\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.354632 kubelet[2570]: I0130 12:56:16.354599 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-xtables-lock\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.354742 kubelet[2570]: I0130 12:56:16.354729 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-bpf-maps\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.354880 kubelet[2570]: I0130 12:56:16.354865 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-etc-cni-netd\") pod \"cilium-lsjg9\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") " pod="kube-system/cilium-lsjg9" Jan 30 12:56:16.374746 systemd[1]: Created slice kubepods-besteffort-pod35c0c305_248b_47b6_9b26_d1b700e5c000.slice - libcontainer container kubepods-besteffort-pod35c0c305_248b_47b6_9b26_d1b700e5c000.slice. 
Jan 30 12:56:16.379816 kubelet[2570]: E0130 12:56:16.377925 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:16.483821 kubelet[2570]: E0130 12:56:16.476962 2570 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 12:56:16.483821 kubelet[2570]: E0130 12:56:16.477009 2570 projected.go:194] Error preparing data for projected volume kube-api-access-565ch for pod kube-system/kube-proxy-srldg: configmap "kube-root-ca.crt" not found Jan 30 12:56:16.483821 kubelet[2570]: E0130 12:56:16.477090 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35c0c305-248b-47b6-9b26-d1b700e5c000-kube-api-access-565ch podName:35c0c305-248b-47b6-9b26-d1b700e5c000 nodeName:}" failed. No retries permitted until 2025-01-30 12:56:16.977063978 +0000 UTC m=+6.537993409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-565ch" (UniqueName: "kubernetes.io/projected/35c0c305-248b-47b6-9b26-d1b700e5c000-kube-api-access-565ch") pod "kube-proxy-srldg" (UID: "35c0c305-248b-47b6-9b26-d1b700e5c000") : configmap "kube-root-ca.crt" not found Jan 30 12:56:16.505369 kubelet[2570]: E0130 12:56:16.505151 2570 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 12:56:16.505691 kubelet[2570]: E0130 12:56:16.505671 2570 projected.go:194] Error preparing data for projected volume kube-api-access-mwcpq for pod kube-system/cilium-lsjg9: configmap "kube-root-ca.crt" not found Jan 30 12:56:16.505976 kubelet[2570]: E0130 12:56:16.505936 2570 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-kube-api-access-mwcpq podName:4e612718-f4bd-4961-95a1-789060e6c17e nodeName:}" failed. No retries permitted until 2025-01-30 12:56:17.005907015 +0000 UTC m=+6.566836448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mwcpq" (UniqueName: "kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-kube-api-access-mwcpq") pod "cilium-lsjg9" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e") : configmap "kube-root-ca.crt" not found Jan 30 12:56:16.713971 kubelet[2570]: E0130 12:56:16.712705 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:16.713971 kubelet[2570]: E0130 12:56:16.713239 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:16.783450 kubelet[2570]: I0130 12:56:16.783365 2570 status_manager.go:890] "Failed to get status for pod" podUID="8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72" pod="kube-system/cilium-operator-6c4d7847fc-6gp8z" err="pods \"cilium-operator-6c4d7847fc-6gp8z\" is forbidden: User \"system:node:ci-4186.1.0-8-ccc447c07f\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.0-8-ccc447c07f' and this object" Jan 30 12:56:16.786937 systemd[1]: Created slice kubepods-besteffort-pod8eada1ee_5a8b_4e3a_b4f4_0068f5b8ed72.slice - libcontainer container kubepods-besteffort-pod8eada1ee_5a8b_4e3a_b4f4_0068f5b8ed72.slice. 
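Both kube-api-access-* projected volumes fail identically because the kube-root-ca.crt ConfigMap has not yet been published this early in cluster bootstrap; rather than failing the pods, the operation executor schedules a retry. A sketch of the capped exponential backoff implied by durationBeforeRetry (only the initial 500ms is visible in the log; the doubling factor and cap used here are assumptions):

    import itertools

    def retry_delays(initial: float = 0.5, factor: float = 2.0, cap: float = 120.0):
        """Yield durationBeforeRetry-style delays: exponential growth, capped."""
        delay = initial
        while True:
            yield delay
            delay = min(delay * factor, cap)

    print(list(itertools.islice(retry_delays(), 5)))
    # [0.5, 1.0, 2.0, 4.0, 8.0] -- first retry lands ~500ms later, as logged

Both mounts succeed on a retry: the sandboxes for cilium-lsjg9 and kube-proxy-srldg are running by 12:56:17.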
Jan 30 12:56:16.857932 kubelet[2570]: I0130 12:56:16.857742 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6gp8z\" (UID: \"8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72\") " pod="kube-system/cilium-operator-6c4d7847fc-6gp8z" Jan 30 12:56:16.857932 kubelet[2570]: I0130 12:56:16.857916 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs48t\" (UniqueName: \"kubernetes.io/projected/8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72-kube-api-access-zs48t\") pod \"cilium-operator-6c4d7847fc-6gp8z\" (UID: \"8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72\") " pod="kube-system/cilium-operator-6c4d7847fc-6gp8z" Jan 30 12:56:17.091382 kubelet[2570]: E0130 12:56:17.091170 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:17.092349 containerd[1481]: time="2025-01-30T12:56:17.092104866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6gp8z,Uid:8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:17.159968 update_engine[1456]: I20250130 12:56:17.159858 1456 update_attempter.cc:509] Updating boot flags... Jan 30 12:56:17.166636 containerd[1481]: time="2025-01-30T12:56:17.165084647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:17.166636 containerd[1481]: time="2025-01-30T12:56:17.165207072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:17.166636 containerd[1481]: time="2025-01-30T12:56:17.165271853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:17.166636 containerd[1481]: time="2025-01-30T12:56:17.165412983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:17.196723 systemd[1]: Started cri-containerd-972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035.scope - libcontainer container 972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035. 
Jan 30 12:56:17.252894 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2687) Jan 30 12:56:17.270871 kubelet[2570]: E0130 12:56:17.269879 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:17.278735 containerd[1481]: time="2025-01-30T12:56:17.278262366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsjg9,Uid:4e612718-f4bd-4961-95a1-789060e6c17e,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:17.353550 containerd[1481]: time="2025-01-30T12:56:17.352230590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6gp8z,Uid:8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72,Namespace:kube-system,Attempt:0,} returns sandbox id \"972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035\"" Jan 30 12:56:17.354876 kubelet[2570]: E0130 12:56:17.354551 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:17.360890 containerd[1481]: time="2025-01-30T12:56:17.360559619Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 12:56:17.383860 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2690) Jan 30 12:56:17.433407 containerd[1481]: time="2025-01-30T12:56:17.428743000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:17.433407 containerd[1481]: time="2025-01-30T12:56:17.432946157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:17.433407 containerd[1481]: time="2025-01-30T12:56:17.432965623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:17.433407 containerd[1481]: time="2025-01-30T12:56:17.433277531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:17.468164 systemd[1]: Started cri-containerd-3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c.scope - libcontainer container 3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c. 
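The PullImage spec above pins both a tag and a digest (repo:tag@sha256:...); containerd resolves such references by digest, which is why the eventual "Pulled image" event records an empty repo tag. A simplified sketch of splitting a pinned reference (real OCI reference parsing handles more cases than this):

    def split_image_ref(ref: str):
        """Split 'repo[:tag][@digest]' into (repo, tag, digest), simplified."""
        base, _, digest = ref.partition("@")
        name, sep, tag = base.rpartition(":")
        if sep and "/" not in tag:   # a ':' after the last '/' introduces a tag...
            return name, tag, digest
        return base, "", digest      # ...otherwise it was a registry port

    ref = ("quay.io/cilium/operator-generic:v1.12.5"
           "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
    print(split_image_ref(ref))
    # ('quay.io/cilium/operator-generic', 'v1.12.5', 'sha256:b296eb7f...')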
Jan 30 12:56:17.512954 containerd[1481]: time="2025-01-30T12:56:17.512909312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lsjg9,Uid:4e612718-f4bd-4961-95a1-789060e6c17e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\"" Jan 30 12:56:17.514284 kubelet[2570]: E0130 12:56:17.514240 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:17.582834 kubelet[2570]: E0130 12:56:17.582629 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:17.583879 containerd[1481]: time="2025-01-30T12:56:17.583816084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srldg,Uid:35c0c305-248b-47b6-9b26-d1b700e5c000,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:17.644234 containerd[1481]: time="2025-01-30T12:56:17.642567231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:17.644234 containerd[1481]: time="2025-01-30T12:56:17.642665576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:17.644234 containerd[1481]: time="2025-01-30T12:56:17.642870066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:17.644234 containerd[1481]: time="2025-01-30T12:56:17.644085010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:17.672451 systemd[1]: run-containerd-runc-k8s.io-c789a97e087d924aab3fc5c42dbd31f8ec344e612568556410a8aa8eb465b058-runc.0kiH1H.mount: Deactivated successfully. Jan 30 12:56:17.683618 systemd[1]: Started cri-containerd-c789a97e087d924aab3fc5c42dbd31f8ec344e612568556410a8aa8eb465b058.scope - libcontainer container c789a97e087d924aab3fc5c42dbd31f8ec344e612568556410a8aa8eb465b058. 
Jan 30 12:56:17.719823 kubelet[2570]: E0130 12:56:17.719760 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:17.726905 containerd[1481]: time="2025-01-30T12:56:17.726562363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srldg,Uid:35c0c305-248b-47b6-9b26-d1b700e5c000,Namespace:kube-system,Attempt:0,} returns sandbox id \"c789a97e087d924aab3fc5c42dbd31f8ec344e612568556410a8aa8eb465b058\"" Jan 30 12:56:17.727903 kubelet[2570]: E0130 12:56:17.727879 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:17.731624 containerd[1481]: time="2025-01-30T12:56:17.731584589Z" level=info msg="CreateContainer within sandbox \"c789a97e087d924aab3fc5c42dbd31f8ec344e612568556410a8aa8eb465b058\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 12:56:17.770503 containerd[1481]: time="2025-01-30T12:56:17.770440249Z" level=info msg="CreateContainer within sandbox \"c789a97e087d924aab3fc5c42dbd31f8ec344e612568556410a8aa8eb465b058\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"802346c16d2463fd76c5c5efeaf98339a3d49321a5cd8d312d2987190404960e\"" Jan 30 12:56:17.771524 containerd[1481]: time="2025-01-30T12:56:17.771470661Z" level=info msg="StartContainer for \"802346c16d2463fd76c5c5efeaf98339a3d49321a5cd8d312d2987190404960e\"" Jan 30 12:56:17.812080 systemd[1]: Started cri-containerd-802346c16d2463fd76c5c5efeaf98339a3d49321a5cd8d312d2987190404960e.scope - libcontainer container 802346c16d2463fd76c5c5efeaf98339a3d49321a5cd8d312d2987190404960e. Jan 30 12:56:17.859265 containerd[1481]: time="2025-01-30T12:56:17.859216287Z" level=info msg="StartContainer for \"802346c16d2463fd76c5c5efeaf98339a3d49321a5cd8d312d2987190404960e\" returns successfully" Jan 30 12:56:18.726192 kubelet[2570]: E0130 12:56:18.726112 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:18.920672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797818352.mount: Deactivated successfully. 
Jan 30 12:56:20.650669 kubelet[2570]: I0130 12:56:20.650528 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-srldg" podStartSLOduration=4.65039927 podStartE2EDuration="4.65039927s" podCreationTimestamp="2025-01-30 12:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:18.759631777 +0000 UTC m=+8.320561230" watchObservedRunningTime="2025-01-30 12:56:20.65039927 +0000 UTC m=+10.211328717" Jan 30 12:56:21.298368 containerd[1481]: time="2025-01-30T12:56:21.298098716Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:21.301164 containerd[1481]: time="2025-01-30T12:56:21.301064931Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 12:56:21.304086 containerd[1481]: time="2025-01-30T12:56:21.303994658Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:21.307498 containerd[1481]: time="2025-01-30T12:56:21.307299252Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.946678259s" Jan 30 12:56:21.307498 containerd[1481]: time="2025-01-30T12:56:21.307368337Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 12:56:21.311856 containerd[1481]: time="2025-01-30T12:56:21.310029073Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 12:56:21.311856 containerd[1481]: time="2025-01-30T12:56:21.311338244Z" level=info msg="CreateContainer within sandbox \"972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 12:56:21.341290 containerd[1481]: time="2025-01-30T12:56:21.341085545Z" level=info msg="CreateContainer within sandbox \"972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\"" Jan 30 12:56:21.342460 containerd[1481]: time="2025-01-30T12:56:21.342253527Z" level=info msg="StartContainer for \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\"" Jan 30 12:56:21.398311 systemd[1]: run-containerd-runc-k8s.io-e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f-runc.KeZvcM.mount: Deactivated successfully. Jan 30 12:56:21.408156 systemd[1]: Started cri-containerd-e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f.scope - libcontainer container e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f. 
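A quick sanity check on the pull events above: the operator-generic image moved 18,904,197 bytes in 3.946678259s, about 4.8 MB/s, plausible registry throughput for a small droplet. (The ~167 MB cilium agent image requested at 12:56:21 completes several seconds later.)

    bytes_read = 18_904_197       # "bytes read" from the stop-pulling event
    elapsed_s = 3.946_678_259     # duration from the "Pulled image ... in" event
    print(f"{bytes_read / elapsed_s / 1e6:.2f} MB/s")  # ~4.79 MB/s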
Jan 30 12:56:21.457764 containerd[1481]: time="2025-01-30T12:56:21.457622327Z" level=info msg="StartContainer for \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\" returns successfully" Jan 30 12:56:21.737811 kubelet[2570]: E0130 12:56:21.737747 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:22.745800 kubelet[2570]: E0130 12:56:22.745295 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:22.770382 kubelet[2570]: E0130 12:56:22.769712 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:22.796967 kubelet[2570]: I0130 12:56:22.795711 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6gp8z" podStartSLOduration=2.84646366 podStartE2EDuration="6.795685253s" podCreationTimestamp="2025-01-30 12:56:16 +0000 UTC" firstStartedPulling="2025-01-30 12:56:17.359694669 +0000 UTC m=+6.920624102" lastFinishedPulling="2025-01-30 12:56:21.308916251 +0000 UTC m=+10.869845695" observedRunningTime="2025-01-30 12:56:21.770170556 +0000 UTC m=+11.331100000" watchObservedRunningTime="2025-01-30 12:56:22.795685253 +0000 UTC m=+12.356614708" Jan 30 12:56:26.931234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995343625.mount: Deactivated successfully. Jan 30 12:56:29.639473 containerd[1481]: time="2025-01-30T12:56:29.639386090Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:29.643124 containerd[1481]: time="2025-01-30T12:56:29.643010592Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 12:56:29.645565 containerd[1481]: time="2025-01-30T12:56:29.645476592Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:56:29.649361 containerd[1481]: time="2025-01-30T12:56:29.649074589Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.338994564s" Jan 30 12:56:29.649361 containerd[1481]: time="2025-01-30T12:56:29.649143236Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 12:56:29.655600 containerd[1481]: time="2025-01-30T12:56:29.655527854Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:56:29.790586 
containerd[1481]: time="2025-01-30T12:56:29.790500133Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\"" Jan 30 12:56:29.791399 containerd[1481]: time="2025-01-30T12:56:29.791365249Z" level=info msg="StartContainer for \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\"" Jan 30 12:56:29.920500 systemd[1]: Started cri-containerd-43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd.scope - libcontainer container 43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd. Jan 30 12:56:29.988883 containerd[1481]: time="2025-01-30T12:56:29.988591027Z" level=info msg="StartContainer for \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\" returns successfully" Jan 30 12:56:30.002859 systemd[1]: cri-containerd-43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd.scope: Deactivated successfully. Jan 30 12:56:30.089567 containerd[1481]: time="2025-01-30T12:56:30.064031351Z" level=info msg="shim disconnected" id=43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd namespace=k8s.io Jan 30 12:56:30.090150 containerd[1481]: time="2025-01-30T12:56:30.089584343Z" level=warning msg="cleaning up after shim disconnected" id=43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd namespace=k8s.io Jan 30 12:56:30.090150 containerd[1481]: time="2025-01-30T12:56:30.089609582Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:30.777583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd-rootfs.mount: Deactivated successfully. Jan 30 12:56:30.777888 kubelet[2570]: E0130 12:56:30.777515 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:30.781625 containerd[1481]: time="2025-01-30T12:56:30.780365404Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:56:30.836580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096721110.mount: Deactivated successfully. Jan 30 12:56:30.841328 containerd[1481]: time="2025-01-30T12:56:30.841188994Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\"" Jan 30 12:56:30.843191 containerd[1481]: time="2025-01-30T12:56:30.843142180Z" level=info msg="StartContainer for \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\"" Jan 30 12:56:30.921125 systemd[1]: Started cri-containerd-d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3.scope - libcontainer container d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3. Jan 30 12:56:30.970285 containerd[1481]: time="2025-01-30T12:56:30.970128726Z" level=info msg="StartContainer for \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\" returns successfully" Jan 30 12:56:30.993164 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 30 12:56:30.993677 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:56:30.993868 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:56:31.002528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:56:31.003328 systemd[1]: cri-containerd-d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3.scope: Deactivated successfully. Jan 30 12:56:31.066366 containerd[1481]: time="2025-01-30T12:56:31.066296247Z" level=info msg="shim disconnected" id=d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3 namespace=k8s.io Jan 30 12:56:31.066690 containerd[1481]: time="2025-01-30T12:56:31.066438540Z" level=warning msg="cleaning up after shim disconnected" id=d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3 namespace=k8s.io Jan 30 12:56:31.066690 containerd[1481]: time="2025-01-30T12:56:31.066453378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:31.093244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:56:31.777895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3-rootfs.mount: Deactivated successfully. Jan 30 12:56:31.785450 kubelet[2570]: E0130 12:56:31.784960 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:31.795239 containerd[1481]: time="2025-01-30T12:56:31.795034949Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 12:56:31.864968 containerd[1481]: time="2025-01-30T12:56:31.864858312Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\"" Jan 30 12:56:31.866266 containerd[1481]: time="2025-01-30T12:56:31.865844919Z" level=info msg="StartContainer for \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\"" Jan 30 12:56:31.926167 systemd[1]: Started cri-containerd-3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793.scope - libcontainer container 3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793. Jan 30 12:56:31.981386 containerd[1481]: time="2025-01-30T12:56:31.979919689Z" level=info msg="StartContainer for \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\" returns successfully" Jan 30 12:56:31.980411 systemd[1]: cri-containerd-3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793.scope: Deactivated successfully. 
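apply-sysctl-overwrites is the Cilium init step that adjusts kernel parameters the datapath depends on (rp_filter handling, for example); the interleaved stop/start of systemd-sysctl.service suggests the host re-applied its own kernel variables around the same moment, though the log does not show what triggered that restart. Sysctls are just files under /proc/sys, so a spot check needs only the stdlib (the helper below is ours, not Cilium's):

    from pathlib import Path

    def sysctl(name: str) -> str:
        """Read a sysctl via /proc/sys, e.g. sysctl('net.ipv4.ip_forward') -> '1'."""
        return Path("/proc/sys", *name.split(".")).read_text().strip()

    # Hypothetical spot check on the node after the init container has run:
    print(sysctl("net.ipv4.ip_forward"))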
Jan 30 12:56:32.035882 containerd[1481]: time="2025-01-30T12:56:32.035611342Z" level=info msg="shim disconnected" id=3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793 namespace=k8s.io Jan 30 12:56:32.035882 containerd[1481]: time="2025-01-30T12:56:32.035728336Z" level=warning msg="cleaning up after shim disconnected" id=3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793 namespace=k8s.io Jan 30 12:56:32.035882 containerd[1481]: time="2025-01-30T12:56:32.035742550Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:32.777435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793-rootfs.mount: Deactivated successfully. Jan 30 12:56:32.790590 kubelet[2570]: E0130 12:56:32.790434 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:32.794228 containerd[1481]: time="2025-01-30T12:56:32.794172430Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:56:32.844520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652758390.mount: Deactivated successfully. Jan 30 12:56:32.850677 containerd[1481]: time="2025-01-30T12:56:32.850577741Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\"" Jan 30 12:56:32.853455 containerd[1481]: time="2025-01-30T12:56:32.852079646Z" level=info msg="StartContainer for \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\"" Jan 30 12:56:32.900108 systemd[1]: Started cri-containerd-31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa.scope - libcontainer container 31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa. Jan 30 12:56:32.938491 systemd[1]: cri-containerd-31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa.scope: Deactivated successfully. Jan 30 12:56:32.943267 containerd[1481]: time="2025-01-30T12:56:32.943213546Z" level=info msg="StartContainer for \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\" returns successfully" Jan 30 12:56:32.988412 containerd[1481]: time="2025-01-30T12:56:32.988155531Z" level=info msg="shim disconnected" id=31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa namespace=k8s.io Jan 30 12:56:32.988412 containerd[1481]: time="2025-01-30T12:56:32.988237573Z" level=warning msg="cleaning up after shim disconnected" id=31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa namespace=k8s.io Jan 30 12:56:32.988412 containerd[1481]: time="2025-01-30T12:56:32.988249154Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:56:33.777436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa-rootfs.mount: Deactivated successfully. 
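Stepping back, the churn since 12:56:29 is Cilium's init-container chain executing strictly in sequence inside the one sandbox (3eb31f8a...): mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, each leaving the same create/start/exit, shim-disconnected, rootfs-unmount signature before the next begins; the long-running cilium-agent comes next. The ordering can be recovered mechanically from the CreateContainer metadata, e.g.:

    import re

    # Condensed excerpt of the CreateContainer events above (sandbox id shortened):
    excerpt = """\
    CreateContainer within sandbox "3eb31f8a..." for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}
    CreateContainer within sandbox "3eb31f8a..." for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}
    CreateContainer within sandbox "3eb31f8a..." for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}
    CreateContainer within sandbox "3eb31f8a..." for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}
    """
    print(re.findall(r"ContainerMetadata\{Name:([\w-]+),", excerpt))
    # ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state']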
Jan 30 12:56:33.796174 kubelet[2570]: E0130 12:56:33.796124 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:33.801361 containerd[1481]: time="2025-01-30T12:56:33.801291665Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:56:33.845490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253727656.mount: Deactivated successfully. Jan 30 12:56:33.847641 containerd[1481]: time="2025-01-30T12:56:33.847495387Z" level=info msg="CreateContainer within sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\"" Jan 30 12:56:33.848975 containerd[1481]: time="2025-01-30T12:56:33.848931738Z" level=info msg="StartContainer for \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\"" Jan 30 12:56:33.897104 systemd[1]: Started cri-containerd-87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece.scope - libcontainer container 87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece. Jan 30 12:56:33.939765 containerd[1481]: time="2025-01-30T12:56:33.939702563Z" level=info msg="StartContainer for \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\" returns successfully" Jan 30 12:56:34.177007 kubelet[2570]: I0130 12:56:34.176393 2570 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 12:56:34.259421 kubelet[2570]: I0130 12:56:34.259152 2570 status_manager.go:890] "Failed to get status for pod" podUID="02843921-e2a9-4218-b4d8-e7408ea4f6bc" pod="kube-system/coredns-668d6bf9bc-qddmj" err="pods \"coredns-668d6bf9bc-qddmj\" is forbidden: User \"system:node:ci-4186.1.0-8-ccc447c07f\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.0-8-ccc447c07f' and this object" Jan 30 12:56:34.261153 systemd[1]: Created slice kubepods-burstable-pod02843921_e2a9_4218_b4d8_e7408ea4f6bc.slice - libcontainer container kubepods-burstable-pod02843921_e2a9_4218_b4d8_e7408ea4f6bc.slice. 
Jan 30 12:56:34.273209 kubelet[2570]: W0130 12:56:34.273043 2570 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.1.0-8-ccc447c07f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-8-ccc447c07f' and this object Jan 30 12:56:34.276300 kubelet[2570]: I0130 12:56:34.275880 2570 status_manager.go:890] "Failed to get status for pod" podUID="02843921-e2a9-4218-b4d8-e7408ea4f6bc" pod="kube-system/coredns-668d6bf9bc-qddmj" err="pods \"coredns-668d6bf9bc-qddmj\" is forbidden: User \"system:node:ci-4186.1.0-8-ccc447c07f\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.0-8-ccc447c07f' and this object" Jan 30 12:56:34.276300 kubelet[2570]: E0130 12:56:34.275926 2570 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4186.1.0-8-ccc447c07f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.0-8-ccc447c07f' and this object" logger="UnhandledError" Jan 30 12:56:34.284667 systemd[1]: Created slice kubepods-burstable-pod528bef6e_c425_4b0c_a1e3_f93631bd9de1.slice - libcontainer container kubepods-burstable-pod528bef6e_c425_4b0c_a1e3_f93631bd9de1.slice. Jan 30 12:56:34.307083 kubelet[2570]: I0130 12:56:34.307024 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02843921-e2a9-4218-b4d8-e7408ea4f6bc-config-volume\") pod \"coredns-668d6bf9bc-qddmj\" (UID: \"02843921-e2a9-4218-b4d8-e7408ea4f6bc\") " pod="kube-system/coredns-668d6bf9bc-qddmj" Jan 30 12:56:34.307743 kubelet[2570]: I0130 12:56:34.307433 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzvw\" (UniqueName: \"kubernetes.io/projected/528bef6e-c425-4b0c-a1e3-f93631bd9de1-kube-api-access-jmzvw\") pod \"coredns-668d6bf9bc-qpzdd\" (UID: \"528bef6e-c425-4b0c-a1e3-f93631bd9de1\") " pod="kube-system/coredns-668d6bf9bc-qpzdd" Jan 30 12:56:34.307743 kubelet[2570]: I0130 12:56:34.307484 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/528bef6e-c425-4b0c-a1e3-f93631bd9de1-config-volume\") pod \"coredns-668d6bf9bc-qpzdd\" (UID: \"528bef6e-c425-4b0c-a1e3-f93631bd9de1\") " pod="kube-system/coredns-668d6bf9bc-qpzdd" Jan 30 12:56:34.307743 kubelet[2570]: I0130 12:56:34.307513 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852m7\" (UniqueName: \"kubernetes.io/projected/02843921-e2a9-4218-b4d8-e7408ea4f6bc-kube-api-access-852m7\") pod \"coredns-668d6bf9bc-qddmj\" (UID: \"02843921-e2a9-4218-b4d8-e7408ea4f6bc\") " pod="kube-system/coredns-668d6bf9bc-qddmj" Jan 30 12:56:34.807749 kubelet[2570]: E0130 12:56:34.807496 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:34.855460 kubelet[2570]: I0130 12:56:34.854827 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-lsjg9" podStartSLOduration=6.719736978 podStartE2EDuration="18.854597912s" podCreationTimestamp="2025-01-30 12:56:16 +0000 UTC" firstStartedPulling="2025-01-30 12:56:17.516378304 +0000 UTC m=+7.077307739" lastFinishedPulling="2025-01-30 12:56:29.651239241 +0000 UTC m=+19.212168673" observedRunningTime="2025-01-30 12:56:34.838242752 +0000 UTC m=+24.399172204" watchObservedRunningTime="2025-01-30 12:56:34.854597912 +0000 UTC m=+24.415527376" Jan 30 12:56:35.188518 kubelet[2570]: E0130 12:56:35.170976 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:35.188682 containerd[1481]: time="2025-01-30T12:56:35.172200502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qddmj,Uid:02843921-e2a9-4218-b4d8-e7408ea4f6bc,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:35.198347 kubelet[2570]: E0130 12:56:35.197975 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:35.199045 containerd[1481]: time="2025-01-30T12:56:35.199001168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpzdd,Uid:528bef6e-c425-4b0c-a1e3-f93631bd9de1,Namespace:kube-system,Attempt:0,}" Jan 30 12:56:35.811293 kubelet[2570]: E0130 12:56:35.809985 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:36.384445 systemd-networkd[1370]: cilium_host: Link UP Jan 30 12:56:36.386546 systemd-networkd[1370]: cilium_net: Link UP Jan 30 12:56:36.387036 systemd-networkd[1370]: cilium_net: Gained carrier Jan 30 12:56:36.387261 systemd-networkd[1370]: cilium_host: Gained carrier Jan 30 12:56:36.387428 systemd-networkd[1370]: cilium_net: Gained IPv6LL Jan 30 12:56:36.387633 systemd-networkd[1370]: cilium_host: Gained IPv6LL Jan 30 12:56:36.558160 systemd-networkd[1370]: cilium_vxlan: Link UP Jan 30 12:56:36.558170 systemd-networkd[1370]: cilium_vxlan: Gained carrier Jan 30 12:56:36.812857 kubelet[2570]: E0130 12:56:36.812625 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:37.004817 kernel: NET: Registered PF_ALG protocol family Jan 30 12:56:38.113315 systemd-networkd[1370]: lxc_health: Link UP Jan 30 12:56:38.130012 systemd-networkd[1370]: lxc_health: Gained carrier Jan 30 12:56:38.453856 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL Jan 30 12:56:38.812717 kernel: eth0: renamed from tmp3b3d7 Jan 30 12:56:38.809540 systemd-networkd[1370]: lxc8418ff746db8: Link UP Jan 30 12:56:38.832627 kernel: eth0: renamed from tmp8ef80 Jan 30 12:56:38.830968 systemd-networkd[1370]: lxc8418ff746db8: Gained carrier Jan 30 12:56:38.833282 systemd-networkd[1370]: lxc4239327cb1f6: Link UP Jan 30 12:56:38.843011 systemd-networkd[1370]: lxc4239327cb1f6: Gained carrier Jan 30 12:56:39.222807 systemd-networkd[1370]: lxc_health: Gained IPv6LL Jan 30 12:56:39.273823 kubelet[2570]: E0130 12:56:39.272551 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 
12:56:39.823655 kubelet[2570]: E0130 12:56:39.823047 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:40.437925 systemd-networkd[1370]: lxc4239327cb1f6: Gained IPv6LL Jan 30 12:56:40.693135 systemd-networkd[1370]: lxc8418ff746db8: Gained IPv6LL Jan 30 12:56:40.825849 kubelet[2570]: E0130 12:56:40.825738 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:56:45.450078 containerd[1481]: time="2025-01-30T12:56:45.449695760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:45.450078 containerd[1481]: time="2025-01-30T12:56:45.449831955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:45.450078 containerd[1481]: time="2025-01-30T12:56:45.449860724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:45.453880 containerd[1481]: time="2025-01-30T12:56:45.450034383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:45.510477 containerd[1481]: time="2025-01-30T12:56:45.510086269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:56:45.510477 containerd[1481]: time="2025-01-30T12:56:45.510210787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:56:45.510477 containerd[1481]: time="2025-01-30T12:56:45.510238106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:45.511875 containerd[1481]: time="2025-01-30T12:56:45.510424064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:56:45.570634 systemd[1]: run-containerd-runc-k8s.io-8ef808f24f5777cb4c20579b7fd50bc7f704677158909e6919a01b066d90384f-runc.gwgfmR.mount: Deactivated successfully. Jan 30 12:56:45.583115 systemd[1]: Started cri-containerd-8ef808f24f5777cb4c20579b7fd50bc7f704677158909e6919a01b066d90384f.scope - libcontainer container 8ef808f24f5777cb4c20579b7fd50bc7f704677158909e6919a01b066d90384f. Jan 30 12:56:45.616918 systemd[1]: Started cri-containerd-3b3d7226458b619009e406d0ba17d4f8d41f7af33f51c53218db62f0befb9246.scope - libcontainer container 3b3d7226458b619009e406d0ba17d4f8d41f7af33f51c53218db62f0befb9246. 
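The systemd-networkd burst between 12:56:36 and 12:56:40 is Cilium assembling its datapath: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, lxc_health for endpoint health checks, and one lxc* veth per workload endpoint (here, the two coredns pods whose sandboxes start next). A hypothetical stdlib-only check one could run on the node once the links are up:

    import socket

    # socket.if_nameindex() returns an (index, name) pair for every interface.
    names = [name for _, name in socket.if_nameindex()]
    print(sorted(n for n in names if n.startswith(("cilium_", "lxc"))))
    # expected here: cilium_host, cilium_net, cilium_vxlan, lxc_health,
    #                lxc8418ff746db8, lxc4239327cb1f6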
Jan 30 12:56:45.779550 containerd[1481]: time="2025-01-30T12:56:45.779294027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qddmj,Uid:02843921-e2a9-4218-b4d8-e7408ea4f6bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b3d7226458b619009e406d0ba17d4f8d41f7af33f51c53218db62f0befb9246\""
Jan 30 12:56:45.783002 kubelet[2570]: E0130 12:56:45.780721 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:56:45.786052 containerd[1481]: time="2025-01-30T12:56:45.785652495Z" level=info msg="CreateContainer within sandbox \"3b3d7226458b619009e406d0ba17d4f8d41f7af33f51c53218db62f0befb9246\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 12:56:45.789280 containerd[1481]: time="2025-01-30T12:56:45.788944845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qpzdd,Uid:528bef6e-c425-4b0c-a1e3-f93631bd9de1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ef808f24f5777cb4c20579b7fd50bc7f704677158909e6919a01b066d90384f\""
Jan 30 12:56:45.790941 kubelet[2570]: E0130 12:56:45.790731 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:56:45.796900 containerd[1481]: time="2025-01-30T12:56:45.796639151Z" level=info msg="CreateContainer within sandbox \"8ef808f24f5777cb4c20579b7fd50bc7f704677158909e6919a01b066d90384f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 12:56:45.843206 containerd[1481]: time="2025-01-30T12:56:45.843000147Z" level=info msg="CreateContainer within sandbox \"8ef808f24f5777cb4c20579b7fd50bc7f704677158909e6919a01b066d90384f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8c65b7f43023a946614afd8c33e63e1f97c05bc7da08c7d6ed8d286bd31496b\""
Jan 30 12:56:45.844012 containerd[1481]: time="2025-01-30T12:56:45.843916880Z" level=info msg="StartContainer for \"b8c65b7f43023a946614afd8c33e63e1f97c05bc7da08c7d6ed8d286bd31496b\""
Jan 30 12:56:45.849905 containerd[1481]: time="2025-01-30T12:56:45.849459383Z" level=info msg="CreateContainer within sandbox \"3b3d7226458b619009e406d0ba17d4f8d41f7af33f51c53218db62f0befb9246\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25fd284908270faf2e52342ea4beca4f817aaf6f8ec3e4ed2ad134a77991b7bf\""
Jan 30 12:56:45.851608 containerd[1481]: time="2025-01-30T12:56:45.851545909Z" level=info msg="StartContainer for \"25fd284908270faf2e52342ea4beca4f817aaf6f8ec3e4ed2ad134a77991b7bf\""
Jan 30 12:56:45.896078 systemd[1]: Started cri-containerd-b8c65b7f43023a946614afd8c33e63e1f97c05bc7da08c7d6ed8d286bd31496b.scope - libcontainer container b8c65b7f43023a946614afd8c33e63e1f97c05bc7da08c7d6ed8d286bd31496b.
Jan 30 12:56:45.905055 systemd[1]: Started cri-containerd-25fd284908270faf2e52342ea4beca4f817aaf6f8ec3e4ed2ad134a77991b7bf.scope - libcontainer container 25fd284908270faf2e52342ea4beca4f817aaf6f8ec3e4ed2ad134a77991b7bf.
Jan 30 12:56:45.967697 containerd[1481]: time="2025-01-30T12:56:45.966890268Z" level=info msg="StartContainer for \"b8c65b7f43023a946614afd8c33e63e1f97c05bc7da08c7d6ed8d286bd31496b\" returns successfully"
Jan 30 12:56:45.976147 containerd[1481]: time="2025-01-30T12:56:45.976085498Z" level=info msg="StartContainer for \"25fd284908270faf2e52342ea4beca4f817aaf6f8ec3e4ed2ad134a77991b7bf\" returns successfully"
Jan 30 12:56:46.471041 systemd[1]: run-containerd-runc-k8s.io-3b3d7226458b619009e406d0ba17d4f8d41f7af33f51c53218db62f0befb9246-runc.odpIed.mount: Deactivated successfully.
Jan 30 12:56:46.869901 kubelet[2570]: E0130 12:56:46.869114 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:56:46.876462 kubelet[2570]: E0130 12:56:46.876097 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:56:46.892365 kubelet[2570]: I0130 12:56:46.892266 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qpzdd" podStartSLOduration=30.892242713999998 podStartE2EDuration="30.892242714s" podCreationTimestamp="2025-01-30 12:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:46.888992666 +0000 UTC m=+36.449922120" watchObservedRunningTime="2025-01-30 12:56:46.892242714 +0000 UTC m=+36.453172167"
Jan 30 12:56:47.884674 kubelet[2570]: E0130 12:56:47.884360 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:56:47.884674 kubelet[2570]: E0130 12:56:47.884529 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:56:48.887201 kubelet[2570]: E0130 12:56:48.887151 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:56:48.889910 kubelet[2570]: E0130 12:56:48.889710 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:57:30.735361 systemd[1]: Started sshd@7-209.38.73.11:22-139.178.68.195:45652.service - OpenSSH per-connection server daemon (139.178.68.195:45652).
Jan 30 12:57:30.852159 sshd[3960]: Accepted publickey for core from 139.178.68.195 port 45652 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:57:30.854978 sshd-session[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:30.865622 systemd-logind[1455]: New session 8 of user core.
Jan 30 12:57:30.871206 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 12:57:31.674483 sshd[3962]: Connection closed by 139.178.68.195 port 45652
Jan 30 12:57:31.676143 sshd-session[3960]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:31.682412 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit.
Jan 30 12:57:31.683666 systemd[1]: sshd@7-209.38.73.11:22-139.178.68.195:45652.service: Deactivated successfully.
Jan 30 12:57:31.688583 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 12:57:31.690906 systemd-logind[1455]: Removed session 8.
Jan 30 12:57:32.625292 kubelet[2570]: E0130 12:57:32.623741 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:57:33.623686 kubelet[2570]: E0130 12:57:33.623547 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:57:36.696292 systemd[1]: Started sshd@8-209.38.73.11:22-139.178.68.195:37656.service - OpenSSH per-connection server daemon (139.178.68.195:37656).
Jan 30 12:57:36.762886 sshd[3974]: Accepted publickey for core from 139.178.68.195 port 37656 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:57:36.765186 sshd-session[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:36.774902 systemd-logind[1455]: New session 9 of user core.
Jan 30 12:57:36.784158 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 12:57:36.992006 sshd[3976]: Connection closed by 139.178.68.195 port 37656
Jan 30 12:57:36.993871 sshd-session[3974]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:37.000999 systemd[1]: sshd@8-209.38.73.11:22-139.178.68.195:37656.service: Deactivated successfully.
Jan 30 12:57:37.005243 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 12:57:37.008094 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit.
Jan 30 12:57:37.010589 systemd-logind[1455]: Removed session 9.
Jan 30 12:57:40.624748 kubelet[2570]: E0130 12:57:40.624303 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:57:42.013376 systemd[1]: Started sshd@9-209.38.73.11:22-139.178.68.195:37660.service - OpenSSH per-connection server daemon (139.178.68.195:37660).
Jan 30 12:57:42.086393 sshd[3988]: Accepted publickey for core from 139.178.68.195 port 37660 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:57:42.088650 sshd-session[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:42.095636 systemd-logind[1455]: New session 10 of user core.
Jan 30 12:57:42.108108 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 12:57:42.262892 sshd[3990]: Connection closed by 139.178.68.195 port 37660
Jan 30 12:57:42.262674 sshd-session[3988]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:42.268290 systemd[1]: sshd@9-209.38.73.11:22-139.178.68.195:37660.service: Deactivated successfully.
Jan 30 12:57:42.271014 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 12:57:42.273024 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit.
Jan 30 12:57:42.275814 systemd-logind[1455]: Removed session 10.
Jan 30 12:57:47.286291 systemd[1]: Started sshd@10-209.38.73.11:22-139.178.68.195:46586.service - OpenSSH per-connection server daemon (139.178.68.195:46586).
Jan 30 12:57:47.338852 sshd[4002]: Accepted publickey for core from 139.178.68.195 port 46586 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:57:47.340686 sshd-session[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:47.349320 systemd-logind[1455]: New session 11 of user core.
Jan 30 12:57:47.362086 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 12:57:47.509492 sshd[4004]: Connection closed by 139.178.68.195 port 46586
Jan 30 12:57:47.510105 sshd-session[4002]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:47.521550 systemd[1]: sshd@10-209.38.73.11:22-139.178.68.195:46586.service: Deactivated successfully.
Jan 30 12:57:47.524481 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 12:57:47.526925 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit.
Jan 30 12:57:47.534155 systemd[1]: Started sshd@11-209.38.73.11:22-139.178.68.195:46594.service - OpenSSH per-connection server daemon (139.178.68.195:46594).
Jan 30 12:57:47.537769 systemd-logind[1455]: Removed session 11.
Jan 30 12:57:47.593070 sshd[4016]: Accepted publickey for core from 139.178.68.195 port 46594 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:57:47.595050 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:47.602682 systemd-logind[1455]: New session 12 of user core.
Jan 30 12:57:47.613089 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 12:57:47.817958 sshd[4018]: Connection closed by 139.178.68.195 port 46594
Jan 30 12:57:47.817558 sshd-session[4016]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:47.834045 systemd[1]: sshd@11-209.38.73.11:22-139.178.68.195:46594.service: Deactivated successfully.
Jan 30 12:57:47.837341 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 12:57:47.842666 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit.
Jan 30 12:57:47.851389 systemd[1]: Started sshd@12-209.38.73.11:22-139.178.68.195:46600.service - OpenSSH per-connection server daemon (139.178.68.195:46600).
Jan 30 12:57:47.856934 systemd-logind[1455]: Removed session 12.
Jan 30 12:57:47.903542 sshd[4027]: Accepted publickey for core from 139.178.68.195 port 46600 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:57:47.905329 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:47.912015 systemd-logind[1455]: New session 13 of user core.
Jan 30 12:57:47.925138 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 12:57:48.082650 sshd[4029]: Connection closed by 139.178.68.195 port 46600
Jan 30 12:57:48.083598 sshd-session[4027]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:48.088982 systemd[1]: sshd@12-209.38.73.11:22-139.178.68.195:46600.service: Deactivated successfully.
Jan 30 12:57:48.092304 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 12:57:48.094764 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit.
Jan 30 12:57:48.096220 systemd-logind[1455]: Removed session 13.
Jan 30 12:57:51.623889 kubelet[2570]: E0130 12:57:51.623742 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:57:52.626122 kubelet[2570]: E0130 12:57:52.625571 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:57:53.102327 systemd[1]: Started sshd@13-209.38.73.11:22-139.178.68.195:46606.service - OpenSSH per-connection server daemon (139.178.68.195:46606).
Jan 30 12:57:53.163914 sshd[4041]: Accepted publickey for core from 139.178.68.195 port 46606 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:57:53.166150 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:53.174871 systemd-logind[1455]: New session 14 of user core.
Jan 30 12:57:53.183155 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 12:57:53.361870 sshd[4043]: Connection closed by 139.178.68.195 port 46606
Jan 30 12:57:53.363546 sshd-session[4041]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:53.369684 systemd[1]: sshd@13-209.38.73.11:22-139.178.68.195:46606.service: Deactivated successfully.
Jan 30 12:57:53.372481 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 12:57:53.374921 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit.
Jan 30 12:57:53.377032 systemd-logind[1455]: Removed session 14.
Jan 30 12:57:58.383286 systemd[1]: Started sshd@14-209.38.73.11:22-139.178.68.195:35064.service - OpenSSH per-connection server daemon (139.178.68.195:35064).
Jan 30 12:57:58.438922 sshd[4055]: Accepted publickey for core from 139.178.68.195 port 35064 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:57:58.440516 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:57:58.446652 systemd-logind[1455]: New session 15 of user core.
Jan 30 12:57:58.462101 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 12:57:58.649964 sshd[4057]: Connection closed by 139.178.68.195 port 35064
Jan 30 12:57:58.651871 sshd-session[4055]: pam_unix(sshd:session): session closed for user core
Jan 30 12:57:58.657931 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
Jan 30 12:57:58.658669 systemd[1]: sshd@14-209.38.73.11:22-139.178.68.195:35064.service: Deactivated successfully.
Jan 30 12:57:58.661859 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 12:57:58.664749 systemd-logind[1455]: Removed session 15.
Jan 30 12:58:03.624294 kubelet[2570]: E0130 12:58:03.624167 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:58:03.672396 systemd[1]: Started sshd@15-209.38.73.11:22-139.178.68.195:35068.service - OpenSSH per-connection server daemon (139.178.68.195:35068).
Jan 30 12:58:03.771243 sshd[4068]: Accepted publickey for core from 139.178.68.195 port 35068 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:03.774025 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:03.781614 systemd-logind[1455]: New session 16 of user core.
Jan 30 12:58:03.789068 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 12:58:03.959617 sshd[4070]: Connection closed by 139.178.68.195 port 35068
Jan 30 12:58:03.961084 sshd-session[4068]: pam_unix(sshd:session): session closed for user core
Jan 30 12:58:03.971376 systemd[1]: sshd@15-209.38.73.11:22-139.178.68.195:35068.service: Deactivated successfully.
Jan 30 12:58:03.974889 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 12:58:03.977740 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
Jan 30 12:58:03.985290 systemd[1]: Started sshd@16-209.38.73.11:22-139.178.68.195:35070.service - OpenSSH per-connection server daemon (139.178.68.195:35070).
Jan 30 12:58:03.987470 systemd-logind[1455]: Removed session 16.
Jan 30 12:58:04.048002 sshd[4081]: Accepted publickey for core from 139.178.68.195 port 35070 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:04.050642 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:04.057889 systemd-logind[1455]: New session 17 of user core.
Jan 30 12:58:04.066155 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 12:58:04.892584 sshd[4083]: Connection closed by 139.178.68.195 port 35070
Jan 30 12:58:04.894181 sshd-session[4081]: pam_unix(sshd:session): session closed for user core
Jan 30 12:58:04.915416 systemd[1]: Started sshd@17-209.38.73.11:22-139.178.68.195:55334.service - OpenSSH per-connection server daemon (139.178.68.195:55334).
Jan 30 12:58:04.917103 systemd[1]: sshd@16-209.38.73.11:22-139.178.68.195:35070.service: Deactivated successfully.
Jan 30 12:58:04.921479 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 12:58:04.925126 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
Jan 30 12:58:04.926938 systemd-logind[1455]: Removed session 17.
Jan 30 12:58:05.003022 sshd[4091]: Accepted publickey for core from 139.178.68.195 port 55334 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:05.005399 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:05.014667 systemd-logind[1455]: New session 18 of user core.
Jan 30 12:58:05.017144 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 12:58:05.893012 sshd[4095]: Connection closed by 139.178.68.195 port 55334
Jan 30 12:58:05.892876 sshd-session[4091]: pam_unix(sshd:session): session closed for user core
Jan 30 12:58:05.912260 systemd[1]: sshd@17-209.38.73.11:22-139.178.68.195:55334.service: Deactivated successfully.
Jan 30 12:58:05.917231 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 12:58:05.921283 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit.
Jan 30 12:58:05.930363 systemd[1]: Started sshd@18-209.38.73.11:22-139.178.68.195:55350.service - OpenSSH per-connection server daemon (139.178.68.195:55350).
Jan 30 12:58:05.936040 systemd-logind[1455]: Removed session 18.
Jan 30 12:58:06.030315 sshd[4111]: Accepted publickey for core from 139.178.68.195 port 55350 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:06.032065 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:06.038823 systemd-logind[1455]: New session 19 of user core.
Jan 30 12:58:06.045080 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 12:58:06.440509 sshd[4113]: Connection closed by 139.178.68.195 port 55350
Jan 30 12:58:06.441502 sshd-session[4111]: pam_unix(sshd:session): session closed for user core
Jan 30 12:58:06.455138 systemd[1]: sshd@18-209.38.73.11:22-139.178.68.195:55350.service: Deactivated successfully.
Jan 30 12:58:06.460442 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 12:58:06.462978 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit.
Jan 30 12:58:06.471596 systemd[1]: Started sshd@19-209.38.73.11:22-139.178.68.195:55360.service - OpenSSH per-connection server daemon (139.178.68.195:55360).
Jan 30 12:58:06.474485 systemd-logind[1455]: Removed session 19.
Jan 30 12:58:06.553563 sshd[4122]: Accepted publickey for core from 139.178.68.195 port 55360 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:06.555874 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:06.564432 systemd-logind[1455]: New session 20 of user core.
Jan 30 12:58:06.574089 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 12:58:06.628939 kubelet[2570]: E0130 12:58:06.628262 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:58:06.753727 sshd[4124]: Connection closed by 139.178.68.195 port 55360
Jan 30 12:58:06.754802 sshd-session[4122]: pam_unix(sshd:session): session closed for user core
Jan 30 12:58:06.760454 systemd[1]: sshd@19-209.38.73.11:22-139.178.68.195:55360.service: Deactivated successfully.
Jan 30 12:58:06.763356 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 12:58:06.764863 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit.
Jan 30 12:58:06.766527 systemd-logind[1455]: Removed session 20.
Jan 30 12:58:10.625567 kubelet[2570]: E0130 12:58:10.625036 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 12:58:11.780373 systemd[1]: Started sshd@20-209.38.73.11:22-139.178.68.195:55372.service - OpenSSH per-connection server daemon (139.178.68.195:55372).
Jan 30 12:58:11.832125 sshd[4136]: Accepted publickey for core from 139.178.68.195 port 55372 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:11.834622 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:11.843370 systemd-logind[1455]: New session 21 of user core.
Jan 30 12:58:11.850186 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 12:58:11.990844 sshd[4138]: Connection closed by 139.178.68.195 port 55372
Jan 30 12:58:11.991543 sshd-session[4136]: pam_unix(sshd:session): session closed for user core
Jan 30 12:58:11.996498 systemd[1]: sshd@20-209.38.73.11:22-139.178.68.195:55372.service: Deactivated successfully.
Jan 30 12:58:11.999268 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 12:58:12.000640 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit.
Jan 30 12:58:12.002874 systemd-logind[1455]: Removed session 21.
Jan 30 12:58:17.014359 systemd[1]: Started sshd@21-209.38.73.11:22-139.178.68.195:38818.service - OpenSSH per-connection server daemon (139.178.68.195:38818).
Jan 30 12:58:17.068554 sshd[4152]: Accepted publickey for core from 139.178.68.195 port 38818 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:17.070953 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:17.080861 systemd-logind[1455]: New session 22 of user core.
Jan 30 12:58:17.090532 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 12:58:17.255733 sshd[4154]: Connection closed by 139.178.68.195 port 38818
Jan 30 12:58:17.256529 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
Jan 30 12:58:17.261853 systemd[1]: sshd@21-209.38.73.11:22-139.178.68.195:38818.service: Deactivated successfully.
Jan 30 12:58:17.264762 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 12:58:17.266517 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit.
Jan 30 12:58:17.269236 systemd-logind[1455]: Removed session 22.
Jan 30 12:58:22.277260 systemd[1]: Started sshd@22-209.38.73.11:22-139.178.68.195:38832.service - OpenSSH per-connection server daemon (139.178.68.195:38832).
Jan 30 12:58:22.332766 sshd[4169]: Accepted publickey for core from 139.178.68.195 port 38832 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:22.335250 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:22.344833 systemd-logind[1455]: New session 23 of user core.
Jan 30 12:58:22.350164 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 12:58:22.505844 sshd[4171]: Connection closed by 139.178.68.195 port 38832
Jan 30 12:58:22.507036 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Jan 30 12:58:22.513488 systemd[1]: sshd@22-209.38.73.11:22-139.178.68.195:38832.service: Deactivated successfully.
Jan 30 12:58:22.518300 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 12:58:22.519816 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit.
Jan 30 12:58:22.521432 systemd-logind[1455]: Removed session 23.
Jan 30 12:58:27.535893 systemd[1]: Started sshd@23-209.38.73.11:22-139.178.68.195:50902.service - OpenSSH per-connection server daemon (139.178.68.195:50902).
Jan 30 12:58:27.597394 sshd[4182]: Accepted publickey for core from 139.178.68.195 port 50902 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:27.599917 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:27.607592 systemd-logind[1455]: New session 24 of user core.
Jan 30 12:58:27.615156 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 12:58:27.789469 sshd[4184]: Connection closed by 139.178.68.195 port 50902
Jan 30 12:58:27.790916 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Jan 30 12:58:27.808104 systemd[1]: sshd@23-209.38.73.11:22-139.178.68.195:50902.service: Deactivated successfully.
Jan 30 12:58:27.812417 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 12:58:27.816543 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit.
Jan 30 12:58:27.823280 systemd[1]: Started sshd@24-209.38.73.11:22-139.178.68.195:50908.service - OpenSSH per-connection server daemon (139.178.68.195:50908).
Jan 30 12:58:27.826027 systemd-logind[1455]: Removed session 24.
Jan 30 12:58:27.896931 sshd[4195]: Accepted publickey for core from 139.178.68.195 port 50908 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA
Jan 30 12:58:27.899679 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:58:27.907201 systemd-logind[1455]: New session 25 of user core.
Jan 30 12:58:27.914174 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 12:58:29.381895 kubelet[2570]: I0130 12:58:29.380946 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qddmj" podStartSLOduration=133.380921587 podStartE2EDuration="2m13.380921587s" podCreationTimestamp="2025-01-30 12:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:56:46.937032869 +0000 UTC m=+36.497962321" watchObservedRunningTime="2025-01-30 12:58:29.380921587 +0000 UTC m=+138.941851040"
Jan 30 12:58:29.403127 containerd[1481]: time="2025-01-30T12:58:29.402747391Z" level=info msg="StopContainer for \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\" with timeout 30 (s)"
Jan 30 12:58:29.404637 containerd[1481]: time="2025-01-30T12:58:29.404304639Z" level=info msg="Stop container \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\" with signal terminated"
Jan 30 12:58:29.434551 systemd[1]: run-containerd-runc-k8s.io-87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece-runc.mPwUVD.mount: Deactivated successfully.
Jan 30 12:58:29.439378 systemd[1]: cri-containerd-e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f.scope: Deactivated successfully.
Jan 30 12:58:29.461631 containerd[1481]: time="2025-01-30T12:58:29.461468386Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 12:58:29.475263 containerd[1481]: time="2025-01-30T12:58:29.475213324Z" level=info msg="StopContainer for \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\" with timeout 2 (s)"
Jan 30 12:58:29.477190 containerd[1481]: time="2025-01-30T12:58:29.477010537Z" level=info msg="Stop container \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\" with signal terminated"
Jan 30 12:58:29.492942 systemd-networkd[1370]: lxc_health: Link DOWN
Jan 30 12:58:29.492951 systemd-networkd[1370]: lxc_health: Lost carrier
Jan 30 12:58:29.493511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f-rootfs.mount: Deactivated successfully.
Jan 30 12:58:29.516531 containerd[1481]: time="2025-01-30T12:58:29.516444564Z" level=info msg="shim disconnected" id=e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f namespace=k8s.io
Jan 30 12:58:29.517242 containerd[1481]: time="2025-01-30T12:58:29.516976053Z" level=warning msg="cleaning up after shim disconnected" id=e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f namespace=k8s.io
Jan 30 12:58:29.517242 containerd[1481]: time="2025-01-30T12:58:29.517007172Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:58:29.527923 systemd[1]: cri-containerd-87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece.scope: Deactivated successfully.
Jan 30 12:58:29.528166 systemd[1]: cri-containerd-87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece.scope: Consumed 11.108s CPU time.
Jan 30 12:58:29.550652 containerd[1481]: time="2025-01-30T12:58:29.550583916Z" level=info msg="StopContainer for \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\" returns successfully"
Jan 30 12:58:29.551818 containerd[1481]: time="2025-01-30T12:58:29.551458988Z" level=info msg="StopPodSandbox for \"972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035\""
Jan 30 12:58:29.561351 containerd[1481]: time="2025-01-30T12:58:29.554459433Z" level=info msg="Container to stop \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:58:29.565664 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035-shm.mount: Deactivated successfully.
Jan 30 12:58:29.578134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece-rootfs.mount: Deactivated successfully.
Jan 30 12:58:29.586049 systemd[1]: cri-containerd-972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035.scope: Deactivated successfully.
Jan 30 12:58:29.589758 containerd[1481]: time="2025-01-30T12:58:29.589667304Z" level=info msg="shim disconnected" id=87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece namespace=k8s.io
Jan 30 12:58:29.590665 containerd[1481]: time="2025-01-30T12:58:29.590323114Z" level=warning msg="cleaning up after shim disconnected" id=87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece namespace=k8s.io
Jan 30 12:58:29.590665 containerd[1481]: time="2025-01-30T12:58:29.590429160Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:58:29.648810 containerd[1481]: time="2025-01-30T12:58:29.648562821Z" level=info msg="StopContainer for \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\" returns successfully"
Jan 30 12:58:29.651602 containerd[1481]: time="2025-01-30T12:58:29.651052760Z" level=info msg="StopPodSandbox for \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\""
Jan 30 12:58:29.651602 containerd[1481]: time="2025-01-30T12:58:29.651110751Z" level=info msg="Container to stop \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:58:29.651602 containerd[1481]: time="2025-01-30T12:58:29.651154169Z" level=info msg="Container to stop \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:58:29.651602 containerd[1481]: time="2025-01-30T12:58:29.651167110Z" level=info msg="Container to stop \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:58:29.651602 containerd[1481]: time="2025-01-30T12:58:29.651181067Z" level=info msg="Container to stop \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:58:29.651602 containerd[1481]: time="2025-01-30T12:58:29.651194544Z" level=info msg="Container to stop \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:58:29.660766 containerd[1481]: time="2025-01-30T12:58:29.660489807Z" level=info msg="shim disconnected" id=972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035 namespace=k8s.io
Jan 30 12:58:29.660766 containerd[1481]: time="2025-01-30T12:58:29.660565867Z" level=warning msg="cleaning up after shim disconnected" id=972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035 namespace=k8s.io
Jan 30 12:58:29.660766 containerd[1481]: time="2025-01-30T12:58:29.660580572Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:58:29.665270 systemd[1]: cri-containerd-3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c.scope: Deactivated successfully.
Jan 30 12:58:29.692977 containerd[1481]: time="2025-01-30T12:58:29.692901667Z" level=info msg="TearDown network for sandbox \"972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035\" successfully"
Jan 30 12:58:29.692977 containerd[1481]: time="2025-01-30T12:58:29.692952856Z" level=info msg="StopPodSandbox for \"972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035\" returns successfully"
Jan 30 12:58:29.730800 containerd[1481]: time="2025-01-30T12:58:29.730586982Z" level=info msg="shim disconnected" id=3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c namespace=k8s.io
Jan 30 12:58:29.730800 containerd[1481]: time="2025-01-30T12:58:29.730703438Z" level=warning msg="cleaning up after shim disconnected" id=3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c namespace=k8s.io
Jan 30 12:58:29.730800 containerd[1481]: time="2025-01-30T12:58:29.730757170Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:58:29.753704 containerd[1481]: time="2025-01-30T12:58:29.753646175Z" level=info msg="TearDown network for sandbox \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" successfully"
Jan 30 12:58:29.753704 containerd[1481]: time="2025-01-30T12:58:29.753696808Z" level=info msg="StopPodSandbox for \"3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c\" returns successfully"
Jan 30 12:58:29.861118 kubelet[2570]: I0130 12:58:29.860332 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwcpq\" (UniqueName: \"kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-kube-api-access-mwcpq\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861118 kubelet[2570]: I0130 12:58:29.860412 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-hostproc\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861118 kubelet[2570]: I0130 12:58:29.860443 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-cgroup\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861118 kubelet[2570]: I0130 12:58:29.860477 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-config-path\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861118 kubelet[2570]: I0130 12:58:29.860531 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-run\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861118 kubelet[2570]: I0130 12:58:29.860565 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72-cilium-config-path\") pod \"8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72\" (UID: \"8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72\") "
Jan 30 12:58:29.861631 kubelet[2570]: I0130 12:58:29.860595 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-hubble-tls\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861631 kubelet[2570]: I0130 12:58:29.860624 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-host-proc-sys-kernel\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861631 kubelet[2570]: I0130 12:58:29.860649 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-host-proc-sys-net\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861631 kubelet[2570]: I0130 12:58:29.860680 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-etc-cni-netd\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861631 kubelet[2570]: I0130 12:58:29.860708 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e612718-f4bd-4961-95a1-789060e6c17e-clustermesh-secrets\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861631 kubelet[2570]: I0130 12:58:29.860733 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-bpf-maps\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861959 kubelet[2570]: I0130 12:58:29.860764 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs48t\" (UniqueName: \"kubernetes.io/projected/8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72-kube-api-access-zs48t\") pod \"8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72\" (UID: \"8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72\") "
Jan 30 12:58:29.861959 kubelet[2570]: I0130 12:58:29.860826 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cni-path\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861959 kubelet[2570]: I0130 12:58:29.860850 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-lib-modules\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861959 kubelet[2570]: I0130 12:58:29.860879 2570 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-xtables-lock\") pod \"4e612718-f4bd-4961-95a1-789060e6c17e\" (UID: \"4e612718-f4bd-4961-95a1-789060e6c17e\") "
Jan 30 12:58:29.861959 kubelet[2570]: I0130 12:58:29.860967 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.861959 kubelet[2570]: I0130 12:58:29.861024 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-hostproc" (OuterVolumeSpecName: "hostproc") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.862128 kubelet[2570]: I0130 12:58:29.861048 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.864812 kubelet[2570]: I0130 12:58:29.863038 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.864812 kubelet[2570]: I0130 12:58:29.863110 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.864812 kubelet[2570]: I0130 12:58:29.864544 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 12:58:29.866384 kubelet[2570]: I0130 12:58:29.866327 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.868202 kubelet[2570]: I0130 12:58:29.868146 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.868424 kubelet[2570]: I0130 12:58:29.868406 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.872509 kubelet[2570]: I0130 12:58:29.872448 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72" (UID: "8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 12:58:29.873143 kubelet[2570]: I0130 12:58:29.873108 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cni-path" (OuterVolumeSpecName: "cni-path") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.873536 kubelet[2570]: I0130 12:58:29.873298 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 12:58:29.873536 kubelet[2570]: I0130 12:58:29.873416 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e612718-f4bd-4961-95a1-789060e6c17e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 12:58:29.873536 kubelet[2570]: I0130 12:58:29.873487 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-kube-api-access-mwcpq" (OuterVolumeSpecName: "kube-api-access-mwcpq") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "kube-api-access-mwcpq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 12:58:29.873958 kubelet[2570]: I0130 12:58:29.873929 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4e612718-f4bd-4961-95a1-789060e6c17e" (UID: "4e612718-f4bd-4961-95a1-789060e6c17e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 12:58:29.874203 kubelet[2570]: I0130 12:58:29.874183 2570 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72-kube-api-access-zs48t" (OuterVolumeSpecName: "kube-api-access-zs48t") pod "8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72" (UID: "8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72"). InnerVolumeSpecName "kube-api-access-zs48t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 12:58:29.962162 kubelet[2570]: I0130 12:58:29.961759 2570 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-hubble-tls\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962162 kubelet[2570]: I0130 12:58:29.961905 2570 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-hostproc\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962162 kubelet[2570]: I0130 12:58:29.961922 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-cgroup\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962162 kubelet[2570]: I0130 12:58:29.961938 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-config-path\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962162 kubelet[2570]: I0130 12:58:29.961954 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cilium-run\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962162 kubelet[2570]: I0130 12:58:29.961970 2570 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72-cilium-config-path\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962162 kubelet[2570]: I0130 12:58:29.961984 2570 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e612718-f4bd-4961-95a1-789060e6c17e-clustermesh-secrets\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962162 kubelet[2570]: I0130 12:58:29.961997 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-host-proc-sys-kernel\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962832 kubelet[2570]: I0130 12:58:29.962010 2570 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-host-proc-sys-net\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962832 kubelet[2570]: I0130 12:58:29.962027 2570 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-etc-cni-netd\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962832 kubelet[2570]: I0130 12:58:29.962043 2570 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-bpf-maps\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962832 kubelet[2570]: I0130 12:58:29.962059 2570 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-lib-modules\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962832 kubelet[2570]: I0130 12:58:29.962077 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zs48t\" (UniqueName: \"kubernetes.io/projected/8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72-kube-api-access-zs48t\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962832 kubelet[2570]: I0130 12:58:29.962094 2570 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-cni-path\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962832 kubelet[2570]: I0130 12:58:29.962108 2570 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e612718-f4bd-4961-95a1-789060e6c17e-xtables-lock\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:29.962832 kubelet[2570]: I0130 12:58:29.962127 2570 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mwcpq\" (UniqueName: \"kubernetes.io/projected/4e612718-f4bd-4961-95a1-789060e6c17e-kube-api-access-mwcpq\") on node \"ci-4186.1.0-8-ccc447c07f\" DevicePath \"\""
Jan 30 12:58:30.145445 kubelet[2570]: I0130 12:58:30.145397 2570 scope.go:117] "RemoveContainer" containerID="87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece"
Jan 30 12:58:30.155673 containerd[1481]: time="2025-01-30T12:58:30.155204397Z" level=info msg="RemoveContainer for \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\""
Jan 30 12:58:30.163850 systemd[1]: Removed slice kubepods-burstable-pod4e612718_f4bd_4961_95a1_789060e6c17e.slice - libcontainer container kubepods-burstable-pod4e612718_f4bd_4961_95a1_789060e6c17e.slice.
Jan 30 12:58:30.164342 systemd[1]: kubepods-burstable-pod4e612718_f4bd_4961_95a1_789060e6c17e.slice: Consumed 11.228s CPU time.
Jan 30 12:58:30.166621 systemd[1]: Removed slice kubepods-besteffort-pod8eada1ee_5a8b_4e3a_b4f4_0068f5b8ed72.slice - libcontainer container kubepods-besteffort-pod8eada1ee_5a8b_4e3a_b4f4_0068f5b8ed72.slice.
Jan 30 12:58:30.176484 containerd[1481]: time="2025-01-30T12:58:30.174354466Z" level=info msg="RemoveContainer for \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\" returns successfully"
Jan 30 12:58:30.176632 kubelet[2570]: I0130 12:58:30.175654 2570 scope.go:117] "RemoveContainer" containerID="31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa"
Jan 30 12:58:30.189484 containerd[1481]: time="2025-01-30T12:58:30.188248608Z" level=info msg="RemoveContainer for \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\""
Jan 30 12:58:30.205605 containerd[1481]: time="2025-01-30T12:58:30.205544042Z" level=info msg="RemoveContainer for \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\" returns successfully"
Jan 30 12:58:30.206290 kubelet[2570]: I0130 12:58:30.206258 2570 scope.go:117] "RemoveContainer" containerID="3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793"
Jan 30 12:58:30.208088 containerd[1481]: time="2025-01-30T12:58:30.207979757Z" level=info msg="RemoveContainer for \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\""
Jan 30 12:58:30.220708 containerd[1481]: time="2025-01-30T12:58:30.219749050Z" level=info msg="RemoveContainer for \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\" returns successfully"
Jan 30 12:58:30.221191 kubelet[2570]: I0130 12:58:30.221125 2570 scope.go:117] "RemoveContainer" containerID="d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3"
Jan 30 12:58:30.224323 containerd[1481]: time="2025-01-30T12:58:30.224264939Z" level=info msg="RemoveContainer for \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\""
Jan 30 12:58:30.230445 containerd[1481]: time="2025-01-30T12:58:30.230362343Z" level=info msg="RemoveContainer for \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\" returns successfully"
Jan 30 12:58:30.231011 kubelet[2570]: I0130 12:58:30.230698 2570 scope.go:117] "RemoveContainer" containerID="43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd"
Jan 30 12:58:30.233364 containerd[1481]: time="2025-01-30T12:58:30.232890227Z" level=info msg="RemoveContainer for \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\""
Jan 30 12:58:30.240371 containerd[1481]: time="2025-01-30T12:58:30.240315429Z" level=info msg="RemoveContainer for \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\" returns successfully"
Jan 30 12:58:30.241673 kubelet[2570]: I0130 12:58:30.240897 2570 scope.go:117] "RemoveContainer" containerID="87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece"
Jan 30 12:58:30.241926 containerd[1481]: time="2025-01-30T12:58:30.241262986Z" level=error msg="ContainerStatus for \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\": not found"
Jan 30 12:58:30.242843 kubelet[2570]: E0130 12:58:30.242424 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\": not found" containerID="87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece"
Jan 30 12:58:30.243447 kubelet[2570]: I0130 12:58:30.243085 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece"} err="failed to get container status \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\": rpc error: code = NotFound desc = an error occurred when try to find container \"87dbf04fd02a96018e978c18846dd0ce8db72deb2904b62c2501ef91914a2ece\": not found"
Jan 30 12:58:30.243447 kubelet[2570]: I0130 12:58:30.243267 2570 scope.go:117] "RemoveContainer" containerID="31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa"
Jan 30 12:58:30.245315 containerd[1481]: time="2025-01-30T12:58:30.245219208Z" level=error msg="ContainerStatus for \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\": not found"
Jan 30 12:58:30.246496 kubelet[2570]: E0130 12:58:30.246413 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\": not found" containerID="31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa"
Jan 30 12:58:30.247049 kubelet[2570]: I0130 12:58:30.246470 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa"} err="failed to get container status \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"31d65723190958de5ff7627bec9a8f6b2b25e14354c0a50ecf0fa083e2d307aa\": not found"
Jan 30 12:58:30.247049 kubelet[2570]: I0130 12:58:30.246688 2570 scope.go:117] "RemoveContainer" containerID="3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793"
Jan 30 12:58:30.247923 containerd[1481]: time="2025-01-30T12:58:30.247308933Z" level=error msg="ContainerStatus for \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\": not found"
Jan 30 12:58:30.248078 kubelet[2570]: E0130 12:58:30.247751 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\": not found" containerID="3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793"
Jan 30 12:58:30.248371 kubelet[2570]: I0130 12:58:30.248045 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793"} err="failed to get container status \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\": rpc error: code = NotFound desc = an error occurred when try to find container \"3feaf55497af6729dc2f94bfc37e0d80f522062e5bab6dd66945360bee1e6793\": not found"
Jan 30 12:58:30.248371 kubelet[2570]: I0130 12:58:30.248242 2570 scope.go:117] "RemoveContainer" containerID="d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3"
Jan 30 12:58:30.248859 containerd[1481]: time="2025-01-30T12:58:30.248699140Z" level=error msg="ContainerStatus for \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\" failed" error="rpc
error: code = NotFound desc = an error occurred when try to find container \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\": not found" Jan 30 12:58:30.249301 kubelet[2570]: E0130 12:58:30.249061 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\": not found" containerID="d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3" Jan 30 12:58:30.249301 kubelet[2570]: I0130 12:58:30.249099 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3"} err="failed to get container status \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1b18d03ab1d55b352d38084895c2c475d3ae050ec896e9afe0294c1354401f3\": not found" Jan 30 12:58:30.249301 kubelet[2570]: I0130 12:58:30.249127 2570 scope.go:117] "RemoveContainer" containerID="43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd" Jan 30 12:58:30.249950 containerd[1481]: time="2025-01-30T12:58:30.249620464Z" level=error msg="ContainerStatus for \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\": not found" Jan 30 12:58:30.250304 kubelet[2570]: E0130 12:58:30.249920 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\": not found" containerID="43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd" Jan 30 12:58:30.250304 kubelet[2570]: I0130 12:58:30.250188 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd"} err="failed to get container status \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"43782bbac7e5afaaf6cea69b9560f2d1bf6f0e0a3584719302c4c85c794412cd\": not found" Jan 30 12:58:30.250304 kubelet[2570]: I0130 12:58:30.250220 2570 scope.go:117] "RemoveContainer" containerID="e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f" Jan 30 12:58:30.252854 containerd[1481]: time="2025-01-30T12:58:30.252416517Z" level=info msg="RemoveContainer for \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\"" Jan 30 12:58:30.260046 containerd[1481]: time="2025-01-30T12:58:30.259967830Z" level=info msg="RemoveContainer for \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\" returns successfully" Jan 30 12:58:30.261000 kubelet[2570]: I0130 12:58:30.260532 2570 scope.go:117] "RemoveContainer" containerID="e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f" Jan 30 12:58:30.261125 containerd[1481]: time="2025-01-30T12:58:30.260894746Z" level=error msg="ContainerStatus for \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\": not found" Jan 30 12:58:30.261472 
kubelet[2570]: E0130 12:58:30.261389 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\": not found" containerID="e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f" Jan 30 12:58:30.261472 kubelet[2570]: I0130 12:58:30.261432 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f"} err="failed to get container status \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8085abc3d1f530611b629870337540566a2b00f297d7c2a03da42c26fd71b0f\": not found" Jan 30 12:58:30.418135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c-rootfs.mount: Deactivated successfully. Jan 30 12:58:30.419617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3eb31f8a0400a3d8646fb045d703ff3207a9981a9bc4e221fd6b4b54347b1b3c-shm.mount: Deactivated successfully. Jan 30 12:58:30.419798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-972e3a376c97749afd404cde7af36b7e184bd066220c40367cc0266f93b3c035-rootfs.mount: Deactivated successfully. Jan 30 12:58:30.419917 systemd[1]: var-lib-kubelet-pods-4e612718\x2df4bd\x2d4961\x2d95a1\x2d789060e6c17e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmwcpq.mount: Deactivated successfully. Jan 30 12:58:30.420016 systemd[1]: var-lib-kubelet-pods-8eada1ee\x2d5a8b\x2d4e3a\x2db4f4\x2d0068f5b8ed72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzs48t.mount: Deactivated successfully. Jan 30 12:58:30.420114 systemd[1]: var-lib-kubelet-pods-4e612718\x2df4bd\x2d4961\x2d95a1\x2d789060e6c17e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 12:58:30.420325 systemd[1]: var-lib-kubelet-pods-4e612718\x2df4bd\x2d4961\x2d95a1\x2d789060e6c17e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 12:58:30.628315 kubelet[2570]: I0130 12:58:30.626810 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e612718-f4bd-4961-95a1-789060e6c17e" path="/var/lib/kubelet/pods/4e612718-f4bd-4961-95a1-789060e6c17e/volumes" Jan 30 12:58:30.628315 kubelet[2570]: I0130 12:58:30.627722 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72" path="/var/lib/kubelet/pods/8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72/volumes" Jan 30 12:58:30.811173 kubelet[2570]: E0130 12:58:30.811094 2570 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 12:58:31.313926 sshd[4197]: Connection closed by 139.178.68.195 port 50908 Jan 30 12:58:31.315599 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:31.327901 systemd[1]: sshd@24-209.38.73.11:22-139.178.68.195:50908.service: Deactivated successfully. Jan 30 12:58:31.332316 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 12:58:31.336630 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit. 
Jan 30 12:58:31.343508 systemd[1]: Started sshd@25-209.38.73.11:22-139.178.68.195:50914.service - OpenSSH per-connection server daemon (139.178.68.195:50914). Jan 30 12:58:31.346745 systemd-logind[1455]: Removed session 25. Jan 30 12:58:31.439572 sshd[4359]: Accepted publickey for core from 139.178.68.195 port 50914 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:58:31.441845 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:31.449271 systemd-logind[1455]: New session 26 of user core. Jan 30 12:58:31.457032 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 12:58:32.409545 sshd[4361]: Connection closed by 139.178.68.195 port 50914 Jan 30 12:58:32.410260 sshd-session[4359]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:32.427483 systemd[1]: sshd@25-209.38.73.11:22-139.178.68.195:50914.service: Deactivated successfully. Jan 30 12:58:32.431638 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 12:58:32.438605 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit. Jan 30 12:58:32.448429 systemd[1]: Started sshd@26-209.38.73.11:22-139.178.68.195:50922.service - OpenSSH per-connection server daemon (139.178.68.195:50922). Jan 30 12:58:32.455443 systemd-logind[1455]: Removed session 26. Jan 30 12:58:32.464583 kubelet[2570]: I0130 12:58:32.462157 2570 memory_manager.go:355] "RemoveStaleState removing state" podUID="4e612718-f4bd-4961-95a1-789060e6c17e" containerName="cilium-agent" Jan 30 12:58:32.464583 kubelet[2570]: I0130 12:58:32.462198 2570 memory_manager.go:355] "RemoveStaleState removing state" podUID="8eada1ee-5a8b-4e3a-b4f4-0068f5b8ed72" containerName="cilium-operator" Jan 30 12:58:32.518024 systemd[1]: Created slice kubepods-burstable-pod05aa0f6a_5288_4f08_9a1e_9db72559d4e7.slice - libcontainer container kubepods-burstable-pod05aa0f6a_5288_4f08_9a1e_9db72559d4e7.slice. Jan 30 12:58:32.520857 sshd[4370]: Accepted publickey for core from 139.178.68.195 port 50922 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:58:32.522134 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:32.536047 systemd-logind[1455]: New session 27 of user core. Jan 30 12:58:32.545097 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 30 12:58:32.583750 kubelet[2570]: I0130 12:58:32.583189 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-bpf-maps\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.583750 kubelet[2570]: I0130 12:58:32.583263 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsbqd\" (UniqueName: \"kubernetes.io/projected/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-kube-api-access-hsbqd\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.583750 kubelet[2570]: I0130 12:58:32.583297 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-cilium-run\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.583750 kubelet[2570]: I0130 12:58:32.583323 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-lib-modules\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.583750 kubelet[2570]: I0130 12:58:32.583352 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-clustermesh-secrets\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.583750 kubelet[2570]: I0130 12:58:32.583380 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-hubble-tls\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.584186 kubelet[2570]: I0130 12:58:32.583405 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-etc-cni-netd\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.584186 kubelet[2570]: I0130 12:58:32.583429 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-xtables-lock\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.584186 kubelet[2570]: I0130 12:58:32.583453 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-cilium-config-path\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.584186 kubelet[2570]: I0130 12:58:32.583477 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-host-proc-sys-net\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.584186 kubelet[2570]: I0130 12:58:32.583504 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-cilium-cgroup\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.584186 kubelet[2570]: I0130 12:58:32.583534 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-cni-path\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.584507 kubelet[2570]: I0130 12:58:32.583565 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-hostproc\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.584507 kubelet[2570]: I0130 12:58:32.583598 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-cilium-ipsec-secrets\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.584507 kubelet[2570]: I0130 12:58:32.583625 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05aa0f6a-5288-4f08-9a1e-9db72559d4e7-host-proc-sys-kernel\") pod \"cilium-wfnns\" (UID: \"05aa0f6a-5288-4f08-9a1e-9db72559d4e7\") " pod="kube-system/cilium-wfnns" Jan 30 12:58:32.614113 sshd[4372]: Connection closed by 139.178.68.195 port 50922 Jan 30 12:58:32.615180 sshd-session[4370]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:32.628846 systemd[1]: sshd@26-209.38.73.11:22-139.178.68.195:50922.service: Deactivated successfully. Jan 30 12:58:32.633165 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 12:58:32.636297 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit. Jan 30 12:58:32.652855 systemd[1]: Started sshd@27-209.38.73.11:22-139.178.68.195:50936.service - OpenSSH per-connection server daemon (139.178.68.195:50936). Jan 30 12:58:32.655074 systemd-logind[1455]: Removed session 27. Jan 30 12:58:32.757374 sshd[4378]: Accepted publickey for core from 139.178.68.195 port 50936 ssh2: RSA SHA256:SFnHtt+NvFpqnNn2/BMXZVgPxdeWFU7F3mq/iAucyCA Jan 30 12:58:32.759905 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:58:32.766498 systemd-logind[1455]: New session 28 of user core. Jan 30 12:58:32.775145 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 30 12:58:32.833237 kubelet[2570]: E0130 12:58:32.830997 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:32.833413 containerd[1481]: time="2025-01-30T12:58:32.831675310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wfnns,Uid:05aa0f6a-5288-4f08-9a1e-9db72559d4e7,Namespace:kube-system,Attempt:0,}" Jan 30 12:58:32.878323 containerd[1481]: time="2025-01-30T12:58:32.878143619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:58:32.878323 containerd[1481]: time="2025-01-30T12:58:32.878227264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:58:32.878323 containerd[1481]: time="2025-01-30T12:58:32.878257230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:32.880069 containerd[1481]: time="2025-01-30T12:58:32.878391987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:58:32.914071 systemd[1]: Started cri-containerd-25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943.scope - libcontainer container 25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943. Jan 30 12:58:32.981490 containerd[1481]: time="2025-01-30T12:58:32.981435361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wfnns,Uid:05aa0f6a-5288-4f08-9a1e-9db72559d4e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\"" Jan 30 12:58:32.982582 kubelet[2570]: E0130 12:58:32.982545 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:32.987124 containerd[1481]: time="2025-01-30T12:58:32.986985154Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:58:33.011458 containerd[1481]: time="2025-01-30T12:58:33.011153313Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e20eee284187a0e46d6bb1e6163a0e2abfd08d526835a7fcdf88200ca7474633\"" Jan 30 12:58:33.014340 containerd[1481]: time="2025-01-30T12:58:33.014225613Z" level=info msg="StartContainer for \"e20eee284187a0e46d6bb1e6163a0e2abfd08d526835a7fcdf88200ca7474633\"" Jan 30 12:58:33.060844 systemd[1]: Started cri-containerd-e20eee284187a0e46d6bb1e6163a0e2abfd08d526835a7fcdf88200ca7474633.scope - libcontainer container e20eee284187a0e46d6bb1e6163a0e2abfd08d526835a7fcdf88200ca7474633. Jan 30 12:58:33.109957 containerd[1481]: time="2025-01-30T12:58:33.109647934Z" level=info msg="StartContainer for \"e20eee284187a0e46d6bb1e6163a0e2abfd08d526835a7fcdf88200ca7474633\" returns successfully" Jan 30 12:58:33.126982 systemd[1]: cri-containerd-e20eee284187a0e46d6bb1e6163a0e2abfd08d526835a7fcdf88200ca7474633.scope: Deactivated successfully. 
Jan 30 12:58:33.166992 kubelet[2570]: E0130 12:58:33.166765 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:33.174226 containerd[1481]: time="2025-01-30T12:58:33.174072017Z" level=info msg="shim disconnected" id=e20eee284187a0e46d6bb1e6163a0e2abfd08d526835a7fcdf88200ca7474633 namespace=k8s.io Jan 30 12:58:33.174226 containerd[1481]: time="2025-01-30T12:58:33.174162571Z" level=warning msg="cleaning up after shim disconnected" id=e20eee284187a0e46d6bb1e6163a0e2abfd08d526835a7fcdf88200ca7474633 namespace=k8s.io Jan 30 12:58:33.174226 containerd[1481]: time="2025-01-30T12:58:33.174181401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:58:33.678526 kubelet[2570]: I0130 12:58:33.677256 2570 setters.go:602] "Node became not ready" node="ci-4186.1.0-8-ccc447c07f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T12:58:33Z","lastTransitionTime":"2025-01-30T12:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 12:58:34.170155 kubelet[2570]: E0130 12:58:34.170111 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:34.175454 containerd[1481]: time="2025-01-30T12:58:34.175208363Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:58:34.206205 containerd[1481]: time="2025-01-30T12:58:34.206118771Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f\"" Jan 30 12:58:34.207586 containerd[1481]: time="2025-01-30T12:58:34.207527304Z" level=info msg="StartContainer for \"1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f\"" Jan 30 12:58:34.275206 systemd[1]: Started cri-containerd-1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f.scope - libcontainer container 1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f. Jan 30 12:58:34.322416 containerd[1481]: time="2025-01-30T12:58:34.322351762Z" level=info msg="StartContainer for \"1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f\" returns successfully" Jan 30 12:58:34.334976 systemd[1]: cri-containerd-1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f.scope: Deactivated successfully. 
Jan 30 12:58:34.380088 containerd[1481]: time="2025-01-30T12:58:34.379977266Z" level=info msg="shim disconnected" id=1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f namespace=k8s.io Jan 30 12:58:34.380088 containerd[1481]: time="2025-01-30T12:58:34.380066239Z" level=warning msg="cleaning up after shim disconnected" id=1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f namespace=k8s.io Jan 30 12:58:34.380088 containerd[1481]: time="2025-01-30T12:58:34.380082101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:58:34.693942 systemd[1]: run-containerd-runc-k8s.io-1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f-runc.iavRIn.mount: Deactivated successfully. Jan 30 12:58:34.694113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f0484698f29f0eed08e5ecc8b0111bf5ee5a2d561dbc52a31ae91ca528e987f-rootfs.mount: Deactivated successfully. Jan 30 12:58:35.176630 kubelet[2570]: E0130 12:58:35.176431 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:35.182740 containerd[1481]: time="2025-01-30T12:58:35.182041334Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 12:58:35.230038 containerd[1481]: time="2025-01-30T12:58:35.229958994Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a\"" Jan 30 12:58:35.231240 containerd[1481]: time="2025-01-30T12:58:35.231200931Z" level=info msg="StartContainer for \"14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a\"" Jan 30 12:58:35.290197 systemd[1]: Started cri-containerd-14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a.scope - libcontainer container 14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a. Jan 30 12:58:35.350852 containerd[1481]: time="2025-01-30T12:58:35.350745660Z" level=info msg="StartContainer for \"14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a\" returns successfully" Jan 30 12:58:35.361567 systemd[1]: cri-containerd-14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a.scope: Deactivated successfully. 
Jan 30 12:58:35.403375 containerd[1481]: time="2025-01-30T12:58:35.403254936Z" level=info msg="shim disconnected" id=14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a namespace=k8s.io Jan 30 12:58:35.403713 containerd[1481]: time="2025-01-30T12:58:35.403325097Z" level=warning msg="cleaning up after shim disconnected" id=14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a namespace=k8s.io Jan 30 12:58:35.403713 containerd[1481]: time="2025-01-30T12:58:35.403450378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:58:35.422862 containerd[1481]: time="2025-01-30T12:58:35.422628819Z" level=warning msg="cleanup warnings time=\"2025-01-30T12:58:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 12:58:35.693014 systemd[1]: run-containerd-runc-k8s.io-14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a-runc.RzFGjT.mount: Deactivated successfully. Jan 30 12:58:35.693135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14359b04126a8ed27f445c29edaeb25b7533a3c8ba86c3f26c7762871a24af4a-rootfs.mount: Deactivated successfully. Jan 30 12:58:35.812587 kubelet[2570]: E0130 12:58:35.812447 2570 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 12:58:36.183103 kubelet[2570]: E0130 12:58:36.183051 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:36.189245 containerd[1481]: time="2025-01-30T12:58:36.188618448Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:58:36.222406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953002800.mount: Deactivated successfully. Jan 30 12:58:36.236597 containerd[1481]: time="2025-01-30T12:58:36.236420506Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b97835d935a6a3931b5b4dc1780bf6929b6e19d43291da46ca42be34d9f4b8d4\"" Jan 30 12:58:36.237860 containerd[1481]: time="2025-01-30T12:58:36.237061593Z" level=info msg="StartContainer for \"b97835d935a6a3931b5b4dc1780bf6929b6e19d43291da46ca42be34d9f4b8d4\"" Jan 30 12:58:36.295165 systemd[1]: Started cri-containerd-b97835d935a6a3931b5b4dc1780bf6929b6e19d43291da46ca42be34d9f4b8d4.scope - libcontainer container b97835d935a6a3931b5b4dc1780bf6929b6e19d43291da46ca42be34d9f4b8d4. Jan 30 12:58:36.342494 systemd[1]: cri-containerd-b97835d935a6a3931b5b4dc1780bf6929b6e19d43291da46ca42be34d9f4b8d4.scope: Deactivated successfully. 
Jan 30 12:58:36.348214 containerd[1481]: time="2025-01-30T12:58:36.347708731Z" level=info msg="StartContainer for \"b97835d935a6a3931b5b4dc1780bf6929b6e19d43291da46ca42be34d9f4b8d4\" returns successfully" Jan 30 12:58:36.394630 containerd[1481]: time="2025-01-30T12:58:36.394544370Z" level=info msg="shim disconnected" id=b97835d935a6a3931b5b4dc1780bf6929b6e19d43291da46ca42be34d9f4b8d4 namespace=k8s.io Jan 30 12:58:36.395322 containerd[1481]: time="2025-01-30T12:58:36.395012581Z" level=warning msg="cleaning up after shim disconnected" id=b97835d935a6a3931b5b4dc1780bf6929b6e19d43291da46ca42be34d9f4b8d4 namespace=k8s.io Jan 30 12:58:36.395322 containerd[1481]: time="2025-01-30T12:58:36.395080628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:58:36.693931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b97835d935a6a3931b5b4dc1780bf6929b6e19d43291da46ca42be34d9f4b8d4-rootfs.mount: Deactivated successfully. Jan 30 12:58:37.188921 kubelet[2570]: E0130 12:58:37.188840 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:37.197299 containerd[1481]: time="2025-01-30T12:58:37.197245446Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:58:37.237613 containerd[1481]: time="2025-01-30T12:58:37.237424720Z" level=info msg="CreateContainer within sandbox \"25b3eae16136cfba4fae4e281339df1b7b7c65a3ed5ae41b7612683e1ae09943\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e24876d55e7524058a3099d344f6fe846c2ab34ffbc5cb64aef59b00b5c56113\"" Jan 30 12:58:37.240265 containerd[1481]: time="2025-01-30T12:58:37.238458717Z" level=info msg="StartContainer for \"e24876d55e7524058a3099d344f6fe846c2ab34ffbc5cb64aef59b00b5c56113\"" Jan 30 12:58:37.297153 systemd[1]: Started cri-containerd-e24876d55e7524058a3099d344f6fe846c2ab34ffbc5cb64aef59b00b5c56113.scope - libcontainer container e24876d55e7524058a3099d344f6fe846c2ab34ffbc5cb64aef59b00b5c56113. Jan 30 12:58:37.350197 containerd[1481]: time="2025-01-30T12:58:37.350118695Z" level=info msg="StartContainer for \"e24876d55e7524058a3099d344f6fe846c2ab34ffbc5cb64aef59b00b5c56113\" returns successfully" Jan 30 12:58:38.004935 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 12:58:38.201060 kubelet[2570]: E0130 12:58:38.201009 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:39.203686 kubelet[2570]: E0130 12:58:39.203199 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:39.367479 systemd[1]: run-containerd-runc-k8s.io-e24876d55e7524058a3099d344f6fe846c2ab34ffbc5cb64aef59b00b5c56113-runc.MptG48.mount: Deactivated successfully. 
Jan 30 12:58:39.624390 kubelet[2570]: E0130 12:58:39.624225 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:41.967962 systemd-networkd[1370]: lxc_health: Link UP Jan 30 12:58:41.980746 systemd-networkd[1370]: lxc_health: Gained carrier Jan 30 12:58:42.834041 kubelet[2570]: E0130 12:58:42.833628 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:42.877813 kubelet[2570]: I0130 12:58:42.877589 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wfnns" podStartSLOduration=10.877519302 podStartE2EDuration="10.877519302s" podCreationTimestamp="2025-01-30 12:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:58:38.239419234 +0000 UTC m=+147.800348684" watchObservedRunningTime="2025-01-30 12:58:42.877519302 +0000 UTC m=+152.438448754" Jan 30 12:58:43.212873 kubelet[2570]: E0130 12:58:43.212833 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:43.958072 systemd-networkd[1370]: lxc_health: Gained IPv6LL Jan 30 12:58:44.217457 kubelet[2570]: E0130 12:58:44.215948 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:46.256080 systemd[1]: run-containerd-runc-k8s.io-e24876d55e7524058a3099d344f6fe846c2ab34ffbc5cb64aef59b00b5c56113-runc.Xpt22m.mount: Deactivated successfully. Jan 30 12:58:47.625703 kubelet[2570]: E0130 12:58:47.624358 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 12:58:48.610627 kubelet[2570]: E0130 12:58:48.610491 2570 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39844->127.0.0.1:36561: write tcp 127.0.0.1:39844->127.0.0.1:36561: write: connection reset by peer Jan 30 12:58:48.617998 sshd[4384]: Connection closed by 139.178.68.195 port 50936 Jan 30 12:58:48.622181 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Jan 30 12:58:48.627702 systemd[1]: sshd@27-209.38.73.11:22-139.178.68.195:50936.service: Deactivated successfully. Jan 30 12:58:48.631977 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 12:58:48.633835 systemd-logind[1455]: Session 28 logged out. Waiting for processes to exit. Jan 30 12:58:48.635720 systemd-logind[1455]: Removed session 28.