Jan 29 11:09:23.036419 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025 Jan 29 11:09:23.036452 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 29 11:09:23.036469 kernel: BIOS-provided physical RAM map: Jan 29 11:09:23.036479 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 11:09:23.036488 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 11:09:23.036497 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 11:09:23.036508 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 29 11:09:23.036518 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 29 11:09:23.036527 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 11:09:23.036537 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 11:09:23.036550 kernel: NX (Execute Disable) protection: active Jan 29 11:09:23.036559 kernel: APIC: Static calls initialized Jan 29 11:09:23.036573 kernel: SMBIOS 2.8 present. Jan 29 11:09:23.036583 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 29 11:09:23.036595 kernel: Hypervisor detected: KVM Jan 29 11:09:23.036606 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:09:23.036623 kernel: kvm-clock: using sched offset of 3670497933 cycles Jan 29 11:09:23.036635 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:09:23.036647 kernel: tsc: Detected 2494.138 MHz processor Jan 29 11:09:23.036658 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:09:23.036669 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:09:23.036680 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 29 11:09:23.036709 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 11:09:23.036720 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:09:23.036735 kernel: ACPI: Early table checksum verification disabled Jan 29 11:09:23.036746 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 29 11:09:23.036757 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:09:23.036768 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:09:23.036779 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:09:23.036790 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 29 11:09:23.036801 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:09:23.036811 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:09:23.036822 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:09:23.036837 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:09:23.036847 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 29 11:09:23.036858 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 29 11:09:23.036869 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 29 11:09:23.036880 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 29 11:09:23.036891 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 29 11:09:23.036902 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 29 11:09:23.036920 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 29 11:09:23.036932 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:09:23.036944 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 11:09:23.036956 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 11:09:23.036968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 29 11:09:23.036982 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 29 11:09:23.036994 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 29 11:09:23.037009 kernel: Zone ranges: Jan 29 11:09:23.037021 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:09:23.037033 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 29 11:09:23.037044 kernel: Normal empty Jan 29 11:09:23.037056 kernel: Movable zone start for each node Jan 29 11:09:23.037068 kernel: Early memory node ranges Jan 29 11:09:23.037079 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 11:09:23.037091 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 29 11:09:23.037103 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 29 11:09:23.037114 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:09:23.037131 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 11:09:23.037144 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 29 11:09:23.037156 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 11:09:23.037167 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:09:23.039238 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:09:23.039277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:09:23.039290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:09:23.039302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:09:23.039314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:09:23.039333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:09:23.039346 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:09:23.039358 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:09:23.039369 kernel: TSC deadline timer available Jan 29 11:09:23.039381 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 11:09:23.039396 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:09:23.039412 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 29 11:09:23.039434 kernel: Booting paravirtualized kernel on KVM Jan 29 11:09:23.039448 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:09:23.039464 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 11:09:23.039475 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 29 11:09:23.039487 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 11:09:23.039499 kernel: pcpu-alloc: [0] 0 1 Jan 29 11:09:23.039511 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 29 11:09:23.039526 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 29 11:09:23.039539 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:09:23.039551 kernel: random: crng init done Jan 29 11:09:23.039567 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:09:23.039580 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:09:23.039592 kernel: Fallback order for Node 0: 0 Jan 29 11:09:23.039605 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 29 11:09:23.039617 kernel: Policy zone: DMA32 Jan 29 11:09:23.039631 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:09:23.039644 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127196K reserved, 0K cma-reserved) Jan 29 11:09:23.039656 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 11:09:23.039668 kernel: Kernel/User page tables isolation: enabled Jan 29 11:09:23.039684 kernel: ftrace: allocating 37890 entries in 149 pages Jan 29 11:09:23.039697 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:09:23.039709 kernel: Dynamic Preempt: voluntary Jan 29 11:09:23.039722 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:09:23.039735 kernel: rcu: RCU event tracing is enabled. Jan 29 11:09:23.039748 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 11:09:23.039761 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:09:23.039773 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:09:23.039786 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:09:23.039823 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:09:23.039836 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 11:09:23.039849 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 11:09:23.039861 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
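Note: the Dentry-cache and Inode-cache lines above report each allocation as a buddy-allocator order plus a byte count. A small illustrative sketch of that relationship, assuming the x86-64 4 KiB base page size:

PAGE_SIZE = 4096                      # x86-64 base page size

def order_to_bytes(order: int) -> int:
    # an order-N buddy allocation is 2**N contiguous pages
    return (1 << order) * PAGE_SIZE

print(order_to_bytes(9))   # 2097152 -> "Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)"
print(order_to_bytes(8))   # 1048576 -> "Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)"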
Jan 29 11:09:23.039883 kernel: Console: colour VGA+ 80x25 Jan 29 11:09:23.039896 kernel: printk: console [tty0] enabled Jan 29 11:09:23.039909 kernel: printk: console [ttyS0] enabled Jan 29 11:09:23.039923 kernel: ACPI: Core revision 20230628 Jan 29 11:09:23.039935 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 11:09:23.039953 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:09:23.039966 kernel: x2apic enabled Jan 29 11:09:23.039978 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:09:23.039991 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:09:23.040004 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 29 11:09:23.040017 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jan 29 11:09:23.040030 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 29 11:09:23.040042 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 29 11:09:23.040070 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:09:23.040083 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:09:23.040096 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:09:23.040109 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:09:23.040126 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 29 11:09:23.040140 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:09:23.040153 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:09:23.040166 kernel: MDS: Mitigation: Clear CPU buffers Jan 29 11:09:23.040692 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:09:23.040734 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:09:23.040748 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:09:23.040761 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:09:23.040775 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:09:23.040788 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 29 11:09:23.040802 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:09:23.040815 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:09:23.040829 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:09:23.040845 kernel: landlock: Up and running. Jan 29 11:09:23.040858 kernel: SELinux: Initializing. Jan 29 11:09:23.040872 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:09:23.040885 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:09:23.040899 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 29 11:09:23.040911 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:09:23.040925 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:09:23.040938 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:09:23.040952 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 29 11:09:23.040969 kernel: signal: max sigframe size: 1776 Jan 29 11:09:23.040982 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:09:23.040996 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:09:23.041009 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:09:23.041022 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:09:23.041035 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:09:23.041047 kernel: .... node #0, CPUs: #1 Jan 29 11:09:23.041061 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:09:23.041076 kernel: smpboot: Max logical packages: 1 Jan 29 11:09:23.041093 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jan 29 11:09:23.041106 kernel: devtmpfs: initialized Jan 29 11:09:23.041120 kernel: x86/mm: Memory block size: 128MB Jan 29 11:09:23.041134 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:09:23.041147 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 11:09:23.041161 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:09:23.041175 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:09:23.043234 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:09:23.043250 kernel: audit: type=2000 audit(1738148961.143:1): state=initialized audit_enabled=0 res=1 Jan 29 11:09:23.043271 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:09:23.043283 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:09:23.043295 kernel: cpuidle: using governor menu Jan 29 11:09:23.043308 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:09:23.043320 kernel: dca service started, version 1.12.1 Jan 29 11:09:23.043332 kernel: PCI: Using configuration type 1 for base access Jan 29 11:09:23.043344 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
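Note: the delay-loop figures above follow from the TSC frequency reported earlier in the log. A quick cross-check of the printed numbers (illustrative arithmetic only, not the kernel's exact bookkeeping; the kernel truncates rather than rounds the last digit):

tsc_mhz = 2494.138            # "tsc: Detected 2494.138 MHz processor"
cpus = 2                      # "smp: Brought up 1 node, 2 CPUs"

per_cpu = 2 * tsc_mhz         # ~4988.27, the "Calibrating delay loop (skipped) preset value" figure
total = cpus * per_cpu        # ~9976.55, the "Total of 2 processors activated" figure
print(per_cpu, total)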
Jan 29 11:09:23.043357 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:09:23.043370 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:09:23.043386 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:09:23.043397 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:09:23.043410 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:09:23.043422 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:09:23.043434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:09:23.043447 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:09:23.043460 kernel: ACPI: Interpreter enabled Jan 29 11:09:23.043472 kernel: ACPI: PM: (supports S0 S5) Jan 29 11:09:23.043484 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:09:23.043500 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:09:23.043512 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:09:23.043525 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 29 11:09:23.043537 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:09:23.043821 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:09:23.043966 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 11:09:23.044092 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 11:09:23.044131 kernel: acpiphp: Slot [3] registered Jan 29 11:09:23.044143 kernel: acpiphp: Slot [4] registered Jan 29 11:09:23.044156 kernel: acpiphp: Slot [5] registered Jan 29 11:09:23.044169 kernel: acpiphp: Slot [6] registered Jan 29 11:09:23.044206 kernel: acpiphp: Slot [7] registered Jan 29 11:09:23.044219 kernel: acpiphp: Slot [8] registered Jan 29 11:09:23.044231 kernel: acpiphp: Slot [9] registered Jan 29 11:09:23.044243 kernel: acpiphp: Slot [10] registered Jan 29 11:09:23.044256 kernel: acpiphp: Slot [11] registered Jan 29 11:09:23.044267 kernel: acpiphp: Slot [12] registered Jan 29 11:09:23.044284 kernel: acpiphp: Slot [13] registered Jan 29 11:09:23.044297 kernel: acpiphp: Slot [14] registered Jan 29 11:09:23.044309 kernel: acpiphp: Slot [15] registered Jan 29 11:09:23.044321 kernel: acpiphp: Slot [16] registered Jan 29 11:09:23.044333 kernel: acpiphp: Slot [17] registered Jan 29 11:09:23.044345 kernel: acpiphp: Slot [18] registered Jan 29 11:09:23.044370 kernel: acpiphp: Slot [19] registered Jan 29 11:09:23.044382 kernel: acpiphp: Slot [20] registered Jan 29 11:09:23.044395 kernel: acpiphp: Slot [21] registered Jan 29 11:09:23.044412 kernel: acpiphp: Slot [22] registered Jan 29 11:09:23.044424 kernel: acpiphp: Slot [23] registered Jan 29 11:09:23.044436 kernel: acpiphp: Slot [24] registered Jan 29 11:09:23.044448 kernel: acpiphp: Slot [25] registered Jan 29 11:09:23.044460 kernel: acpiphp: Slot [26] registered Jan 29 11:09:23.044472 kernel: acpiphp: Slot [27] registered Jan 29 11:09:23.044485 kernel: acpiphp: Slot [28] registered Jan 29 11:09:23.044498 kernel: acpiphp: Slot [29] registered Jan 29 11:09:23.044509 kernel: acpiphp: Slot [30] registered Jan 29 11:09:23.044522 kernel: acpiphp: Slot [31] registered Jan 29 11:09:23.044537 kernel: PCI host bridge to bus 0000:00 Jan 29 11:09:23.044722 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:09:23.044844 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 29 11:09:23.044958 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:09:23.045071 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 29 11:09:23.045198 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 29 11:09:23.045340 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:09:23.045513 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 11:09:23.045661 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 11:09:23.045799 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 29 11:09:23.045925 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 29 11:09:23.046050 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 11:09:23.046174 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 11:09:23.048477 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 11:09:23.048589 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 11:09:23.048709 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 29 11:09:23.048807 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 29 11:09:23.048916 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 11:09:23.049042 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 29 11:09:23.049144 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 29 11:09:23.049472 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 29 11:09:23.049578 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 29 11:09:23.049673 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 29 11:09:23.049766 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 29 11:09:23.049860 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 29 11:09:23.049955 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:09:23.050068 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:09:23.050266 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 29 11:09:23.050381 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 29 11:09:23.050528 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 29 11:09:23.050663 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:09:23.050762 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 29 11:09:23.050860 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 29 11:09:23.050964 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 29 11:09:23.051074 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 29 11:09:23.051177 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 29 11:09:23.051347 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 29 11:09:23.051443 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 29 11:09:23.051567 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 29 11:09:23.051711 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 11:09:23.051826 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 29 11:09:23.051922 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 29 11:09:23.052041 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 29 11:09:23.052146 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 29 11:09:23.052257 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 29 11:09:23.052369 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 29 11:09:23.052500 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 29 11:09:23.052685 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 29 11:09:23.052871 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 29 11:09:23.052896 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:09:23.052911 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:09:23.052927 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:09:23.052941 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:09:23.052954 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 11:09:23.052976 kernel: iommu: Default domain type: Translated Jan 29 11:09:23.052989 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:09:23.052999 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:09:23.053008 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:09:23.053017 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 11:09:23.053026 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 29 11:09:23.053167 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 29 11:09:23.055435 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 29 11:09:23.055589 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:09:23.055604 kernel: vgaarb: loaded Jan 29 11:09:23.055614 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 11:09:23.055624 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 11:09:23.055633 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:09:23.055642 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:09:23.055651 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:09:23.055661 kernel: pnp: PnP ACPI init Jan 29 11:09:23.055670 kernel: pnp: PnP ACPI: found 4 devices Jan 29 11:09:23.055684 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:09:23.055693 kernel: NET: Registered PF_INET protocol family Jan 29 11:09:23.055706 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:09:23.055722 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 11:09:23.055735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:09:23.055748 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:09:23.055761 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:09:23.055773 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 11:09:23.055788 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:09:23.055807 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:09:23.055821 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:09:23.055835 kernel: NET: Registered PF_XDP protocol family Jan 29 11:09:23.055947 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:09:23.056036 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 
11:09:23.056127 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:09:23.056427 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 29 11:09:23.056531 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 29 11:09:23.056643 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 29 11:09:23.056744 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:09:23.056758 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 11:09:23.056865 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 32449 usecs Jan 29 11:09:23.056882 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:09:23.056896 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:09:23.056910 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 29 11:09:23.056924 kernel: Initialise system trusted keyrings Jan 29 11:09:23.056938 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:09:23.056959 kernel: Key type asymmetric registered Jan 29 11:09:23.056970 kernel: Asymmetric key parser 'x509' registered Jan 29 11:09:23.056979 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:09:23.056988 kernel: io scheduler mq-deadline registered Jan 29 11:09:23.056997 kernel: io scheduler kyber registered Jan 29 11:09:23.057005 kernel: io scheduler bfq registered Jan 29 11:09:23.057015 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:09:23.057024 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 29 11:09:23.057033 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 11:09:23.057045 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 11:09:23.057054 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:09:23.057063 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:09:23.057073 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:09:23.057089 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:09:23.057102 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:09:23.057279 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 11:09:23.057304 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:09:23.057421 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 11:09:23.057547 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T11:09:22 UTC (1738148962) Jan 29 11:09:23.057677 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 29 11:09:23.057697 kernel: intel_pstate: CPU model not supported Jan 29 11:09:23.057713 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:09:23.057728 kernel: Segment Routing with IPv6 Jan 29 11:09:23.057744 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:09:23.057757 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:09:23.057771 kernel: Key type dns_resolver registered Jan 29 11:09:23.057780 kernel: IPI shorthand broadcast: enabled Jan 29 11:09:23.057789 kernel: sched_clock: Marking stable (1780004911, 91315008)->(1911010560, -39690641) Jan 29 11:09:23.057798 kernel: registered taskstats version 1 Jan 29 11:09:23.057807 kernel: Loading compiled-in X.509 certificates Jan 29 11:09:23.057816 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 29 11:09:23.057825 kernel: Key type .fscrypt registered 
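Note: the rtc_cmos line above prints both a calendar time and the corresponding Unix epoch; converting one to the other confirms they agree:

from datetime import datetime, timezone

epoch = 1738148962            # "(1738148962)" from the rtc_cmos line
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2025-01-29T11:09:22+00:00, matching "setting system clock to 2025-01-29T11:09:22 UTC"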
Jan 29 11:09:23.057834 kernel: Key type fscrypt-provisioning registered Jan 29 11:09:23.057843 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:09:23.057854 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:09:23.057863 kernel: ima: No architecture policies found Jan 29 11:09:23.057872 kernel: clk: Disabling unused clocks Jan 29 11:09:23.057881 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 29 11:09:23.057911 kernel: Write protecting the kernel read-only data: 38912k Jan 29 11:09:23.057939 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 29 11:09:23.057952 kernel: Run /init as init process Jan 29 11:09:23.057962 kernel: with arguments: Jan 29 11:09:23.057971 kernel: /init Jan 29 11:09:23.057983 kernel: with environment: Jan 29 11:09:23.057992 kernel: HOME=/ Jan 29 11:09:23.058002 kernel: TERM=linux Jan 29 11:09:23.058011 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:09:23.058023 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:09:23.058035 systemd[1]: Detected virtualization kvm. Jan 29 11:09:23.058048 systemd[1]: Detected architecture x86-64. Jan 29 11:09:23.058058 systemd[1]: Running in initrd. Jan 29 11:09:23.058070 systemd[1]: No hostname configured, using default hostname. Jan 29 11:09:23.058081 systemd[1]: Hostname set to . Jan 29 11:09:23.058091 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:09:23.058100 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:09:23.058110 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:09:23.058120 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:09:23.058131 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:09:23.058140 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:09:23.058153 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:09:23.058163 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:09:23.058175 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:09:23.060260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:09:23.060281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:09:23.060294 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:09:23.060309 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:09:23.060327 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:09:23.060337 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:09:23.060364 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:09:23.060379 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:09:23.060394 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
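Note: the systemd banner above lists compile-time options with a +/- prefix. Splitting that string (copied verbatim from the banner) makes the compiled-in and compiled-out sets explicit:

features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

enabled  = [f[1:] for f in features.split() if f.startswith("+")]   # e.g. SELINUX, TPM2
disabled = [f[1:] for f in features.split() if f.startswith("-")]   # e.g. APPARMOR, SYSVINIT
print(len(enabled), "enabled,", len(disabled), "disabled")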
Jan 29 11:09:23.060416 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:09:23.060431 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:09:23.060441 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:09:23.060451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:09:23.060461 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:09:23.060471 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:09:23.060481 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:09:23.060491 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:09:23.060504 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:09:23.060514 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:09:23.060524 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:09:23.060534 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:09:23.060544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:09:23.060570 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:09:23.060580 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:09:23.060590 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:09:23.060650 systemd-journald[183]: Collecting audit messages is disabled. Jan 29 11:09:23.060680 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:09:23.060691 systemd-journald[183]: Journal started Jan 29 11:09:23.060713 systemd-journald[183]: Runtime Journal (/run/log/journal/8546a1077dba41c08d5a6132c1334cc8) is 4.9M, max 39.3M, 34.4M free. Jan 29 11:09:23.038672 systemd-modules-load[184]: Inserted module 'overlay' Jan 29 11:09:23.101842 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:09:23.101896 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:09:23.101934 kernel: Bridge firewalling registered Jan 29 11:09:23.088190 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 29 11:09:23.102634 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:09:23.103269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:09:23.107965 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:09:23.115413 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:09:23.119431 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:09:23.121443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:09:23.125002 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:09:23.139716 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:09:23.147996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:09:23.157007 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 29 11:09:23.158771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:09:23.164583 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:09:23.168438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:09:23.186911 dracut-cmdline[218]: dracut-dracut-053 Jan 29 11:09:23.197212 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 29 11:09:23.211704 systemd-resolved[219]: Positive Trust Anchors: Jan 29 11:09:23.212587 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:09:23.212641 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:09:23.219872 systemd-resolved[219]: Defaulting to hostname 'linux'. Jan 29 11:09:23.222487 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:09:23.222931 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:09:23.304244 kernel: SCSI subsystem initialized Jan 29 11:09:23.314233 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:09:23.326223 kernel: iscsi: registered transport (tcp) Jan 29 11:09:23.350226 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:09:23.350324 kernel: QLogic iSCSI HBA Driver Jan 29 11:09:23.405921 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:09:23.417510 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:09:23.445562 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:09:23.447085 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:09:23.447119 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:09:23.495270 kernel: raid6: avx2x4 gen() 16877 MB/s Jan 29 11:09:23.512281 kernel: raid6: avx2x2 gen() 17172 MB/s Jan 29 11:09:23.529546 kernel: raid6: avx2x1 gen() 12830 MB/s Jan 29 11:09:23.529639 kernel: raid6: using algorithm avx2x2 gen() 17172 MB/s Jan 29 11:09:23.547710 kernel: raid6: .... xor() 18918 MB/s, rmw enabled Jan 29 11:09:23.547814 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:09:23.570226 kernel: xor: automatically using best checksumming function avx Jan 29 11:09:23.740227 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:09:23.754268 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:09:23.761483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
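Note: the raid6 lines above are the kernel benchmarking its SIMD gen() variants and keeping the fastest one; reproducing the selection from the throughputs printed in this log:

results_mb_s = {"avx2x4": 16877, "avx2x2": 17172, "avx2x1": 12830}   # gen() results from the log

best = max(results_mb_s, key=results_mb_s.get)
print(best, results_mb_s[best])   # -> avx2x2 17172, matching "raid6: using algorithm avx2x2 gen() 17172 MB/s"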
Jan 29 11:09:23.786590 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 29 11:09:23.793449 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:09:23.803801 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:09:23.828941 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 29 11:09:23.877693 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:09:23.885595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:09:23.951580 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:09:23.961469 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:09:23.978948 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:09:23.979484 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:09:23.982517 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:09:23.983438 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:09:23.990423 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:09:24.020885 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:09:24.056288 kernel: scsi host0: Virtio SCSI HBA Jan 29 11:09:24.061217 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 29 11:09:24.122444 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 29 11:09:24.122636 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:09:24.122651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:09:24.122665 kernel: GPT:9289727 != 125829119 Jan 29 11:09:24.122676 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:09:24.122687 kernel: GPT:9289727 != 125829119 Jan 29 11:09:24.122698 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:09:24.122721 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:09:24.122733 kernel: libata version 3.00 loaded. Jan 29 11:09:24.122744 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 29 11:09:24.140374 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:09:24.140443 kernel: AES CTR mode by8 optimization enabled Jan 29 11:09:24.140481 kernel: scsi host1: ata_piix Jan 29 11:09:24.140819 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 29 11:09:24.141351 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Jan 29 11:09:24.141512 kernel: scsi host2: ata_piix Jan 29 11:09:24.141671 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 29 11:09:24.141686 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 29 11:09:24.126780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:09:24.126966 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:09:24.153643 kernel: ACPI: bus type USB registered Jan 29 11:09:24.153675 kernel: usbcore: registered new interface driver usbfs Jan 29 11:09:24.153692 kernel: usbcore: registered new interface driver hub Jan 29 11:09:24.153711 kernel: usbcore: registered new device driver usb Jan 29 11:09:24.127710 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
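Note: the virtio_blk capacity line above expresses the same size three ways; the conversion from 512-byte logical blocks:

blocks = 125_829_120                 # "[vda] 125829120 512-byte logical blocks"
size = blocks * 512                  # 64424509440 bytes

print(round(size / 10**9, 1))        # 64.4  -> "64.4 GB" (decimal gigabytes)
print(round(size / 2**30, 1))        # 60.0  -> "60.0 GiB" (binary gibibytes)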
Jan 29 11:09:24.128173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:09:24.128400 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:09:24.132008 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:09:24.141463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:09:24.202069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:09:24.206501 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:09:24.229136 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:09:24.338215 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (453) Jan 29 11:09:24.341214 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (464) Jan 29 11:09:24.343215 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 29 11:09:24.353676 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 29 11:09:24.353912 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 29 11:09:24.354115 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 29 11:09:24.355077 kernel: hub 1-0:1.0: USB hub found Jan 29 11:09:24.356537 kernel: hub 1-0:1.0: 2 ports detected Jan 29 11:09:24.353929 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 11:09:24.364449 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:09:24.370504 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:09:24.371449 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:09:24.378464 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:09:24.385448 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:09:24.389663 disk-uuid[549]: Primary Header is updated. Jan 29 11:09:24.389663 disk-uuid[549]: Secondary Entries is updated. Jan 29 11:09:24.389663 disk-uuid[549]: Secondary Header is updated. Jan 29 11:09:24.399216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:09:24.418258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:09:25.407221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:09:25.408845 disk-uuid[550]: The operation has completed successfully. Jan 29 11:09:25.453394 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:09:25.453504 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:09:25.464524 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:09:25.469777 sh[561]: Success Jan 29 11:09:25.486014 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:09:25.565323 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:09:25.567760 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:09:25.568322 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 11:09:25.602349 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 29 11:09:25.602413 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:09:25.602427 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:09:25.602440 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:09:25.602451 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:09:25.611135 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:09:25.612486 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:09:25.617404 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:09:25.620522 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:09:25.632840 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:09:25.632913 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:09:25.632944 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:09:25.636245 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:09:25.647383 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:09:25.648494 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:09:25.654369 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:09:25.660514 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:09:25.770438 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:09:25.779483 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:09:25.805655 systemd-networkd[745]: lo: Link UP Jan 29 11:09:25.805669 systemd-networkd[745]: lo: Gained carrier Jan 29 11:09:25.808363 systemd-networkd[745]: Enumeration completed Jan 29 11:09:25.808520 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:09:25.809415 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 29 11:09:25.809419 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 29 11:09:25.810521 systemd[1]: Reached target network.target - Network. Jan 29 11:09:25.811984 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:09:25.811991 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:09:25.813731 systemd-networkd[745]: eth0: Link UP Jan 29 11:09:25.813736 systemd-networkd[745]: eth0: Gained carrier Jan 29 11:09:25.813750 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 29 11:09:25.817586 systemd-networkd[745]: eth1: Link UP Jan 29 11:09:25.817591 systemd-networkd[745]: eth1: Gained carrier Jan 29 11:09:25.817604 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 11:09:25.827346 ignition[651]: Ignition 2.20.0 Jan 29 11:09:25.827359 ignition[651]: Stage: fetch-offline Jan 29 11:09:25.827415 ignition[651]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:09:25.827436 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:09:25.827570 ignition[651]: parsed url from cmdline: "" Jan 29 11:09:25.829685 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:09:25.827576 ignition[651]: no config URL provided Jan 29 11:09:25.827584 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:09:25.831278 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.4/20 acquired from 169.254.169.253 Jan 29 11:09:25.827595 ignition[651]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:09:25.827604 ignition[651]: failed to fetch config: resource requires networking Jan 29 11:09:25.827948 ignition[651]: Ignition finished successfully Jan 29 11:09:25.835318 systemd-networkd[745]: eth0: DHCPv4 address 143.198.77.23/20, gateway 143.198.64.1 acquired from 169.254.169.253 Jan 29 11:09:25.838466 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 11:09:25.858729 ignition[752]: Ignition 2.20.0 Jan 29 11:09:25.858749 ignition[752]: Stage: fetch Jan 29 11:09:25.858975 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:09:25.858986 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:09:25.859102 ignition[752]: parsed url from cmdline: "" Jan 29 11:09:25.859106 ignition[752]: no config URL provided Jan 29 11:09:25.859111 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:09:25.859122 ignition[752]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:09:25.859146 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 29 11:09:25.887168 ignition[752]: GET result: OK Jan 29 11:09:25.888056 ignition[752]: parsing config with SHA512: 65454e5a0777f7f83bf363c9dddc7b483190ed3db95e910dc9373b5f226ab70b539b1d6cb784f1589a428332bd6bcfd997482d137bec1ad72e6a1150a7fe606c Jan 29 11:09:25.895356 unknown[752]: fetched base config from "system" Jan 29 11:09:25.895374 unknown[752]: fetched base config from "system" Jan 29 11:09:25.895937 ignition[752]: fetch: fetch complete Jan 29 11:09:25.895383 unknown[752]: fetched user config from "digitalocean" Jan 29 11:09:25.895944 ignition[752]: fetch: fetch passed Jan 29 11:09:25.896009 ignition[752]: Ignition finished successfully Jan 29 11:09:25.897762 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:09:25.908512 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:09:25.927364 ignition[760]: Ignition 2.20.0 Jan 29 11:09:25.927384 ignition[760]: Stage: kargs Jan 29 11:09:25.927640 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:09:25.927652 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:09:25.930775 ignition[760]: kargs: kargs passed Jan 29 11:09:25.930849 ignition[760]: Ignition finished successfully Jan 29 11:09:25.933250 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:09:25.937479 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
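Note: the Ignition fetch stage above retrieves user-data from the DigitalOcean metadata service and logs a SHA512 of the resulting config. A minimal sketch of that request-and-hash step (only reachable from inside a droplet; an illustration of the logged behavior, not Ignition's own code):

import hashlib
import urllib.request

URL = "http://169.254.169.254/metadata/v1/user-data"      # URL exactly as logged by ignition[752]

with urllib.request.urlopen(URL, timeout=5) as resp:      # link-local metadata endpoint
    user_data = resp.read()

print(hashlib.sha512(user_data).hexdigest())               # compare with "parsing config with SHA512: ..."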
Jan 29 11:09:25.967639 ignition[767]: Ignition 2.20.0 Jan 29 11:09:25.967658 ignition[767]: Stage: disks Jan 29 11:09:25.967949 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:09:25.967967 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:09:25.969541 ignition[767]: disks: disks passed Jan 29 11:09:25.970576 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:09:25.969629 ignition[767]: Ignition finished successfully Jan 29 11:09:25.975863 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:09:25.977082 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:09:25.977958 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:09:25.978798 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:09:25.979461 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:09:25.985507 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:09:26.009242 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:09:26.012157 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:09:26.021202 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:09:26.126218 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 29 11:09:26.127475 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:09:26.128790 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:09:26.134345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:09:26.137328 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:09:26.141557 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jan 29 11:09:26.150221 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (783) Jan 29 11:09:26.152242 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:09:26.152324 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:09:26.157531 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:09:26.157571 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:09:26.157924 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:09:26.157978 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:09:26.164893 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:09:26.170887 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:09:26.172143 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:09:26.179631 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 11:09:26.245208 coreos-metadata[786]: Jan 29 11:09:26.243 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:09:26.248887 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:09:26.254967 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:09:26.258215 coreos-metadata[786]: Jan 29 11:09:26.256 INFO Fetch successful Jan 29 11:09:26.260358 coreos-metadata[785]: Jan 29 11:09:26.260 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:09:26.262207 coreos-metadata[786]: Jan 29 11:09:26.261 INFO wrote hostname ci-4186.1.0-5-b8e0b24f92 to /sysroot/etc/hostname Jan 29 11:09:26.262640 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:09:26.269431 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:09:26.273948 coreos-metadata[785]: Jan 29 11:09:26.273 INFO Fetch successful Jan 29 11:09:26.275383 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:09:26.280583 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 29 11:09:26.280699 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 29 11:09:26.387573 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:09:26.395364 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:09:26.399469 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:09:26.412256 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:09:26.443632 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:09:26.450659 ignition[903]: INFO : Ignition 2.20.0 Jan 29 11:09:26.450659 ignition[903]: INFO : Stage: mount Jan 29 11:09:26.451877 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:09:26.451877 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:09:26.453123 ignition[903]: INFO : mount: mount passed Jan 29 11:09:26.453123 ignition[903]: INFO : Ignition finished successfully Jan 29 11:09:26.454001 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:09:26.460521 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:09:26.599354 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:09:26.606659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:09:26.617224 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (916) Jan 29 11:09:26.619436 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:09:26.619529 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:09:26.620220 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:09:26.623210 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:09:26.625855 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
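The flatcar-metadata-hostname step above fetches the droplet's metadata JSON and writes the reported hostname into /sysroot/etc/hostname before the pivot. A rough Python equivalent (assuming the DigitalOcean metadata JSON carries the name under a "hostname" key, which is not shown verbatim in the log):

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # fetched twice in the log above

    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        metadata = json.load(resp)

    hostname = metadata["hostname"]  # assumed key; the log shows ci-4186.1.0-5-b8e0b24f92
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print("wrote hostname", hostname, "to /sysroot/etc/hostname")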
Jan 29 11:09:26.666077 ignition[933]: INFO : Ignition 2.20.0 Jan 29 11:09:26.666077 ignition[933]: INFO : Stage: files Jan 29 11:09:26.666077 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:09:26.666077 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:09:26.668945 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:09:26.669701 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:09:26.669701 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:09:26.673231 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:09:26.674019 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:09:26.674893 unknown[933]: wrote ssh authorized keys file for user: core Jan 29 11:09:26.675697 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:09:26.679352 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:09:26.680506 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:09:26.717308 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:09:26.821398 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:09:26.821398 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:09:26.823066 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:09:27.262201 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:09:27.434525 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:09:27.434525 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:09:27.436416 
ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:09:27.436416 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:09:27.652562 systemd-networkd[745]: eth0: Gained IPv6LL Jan 29 11:09:27.835232 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:09:27.844569 systemd-networkd[745]: eth1: Gained IPv6LL Jan 29 11:09:28.103258 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:09:28.103258 ignition[933]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:09:28.105009 ignition[933]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:09:28.105009 ignition[933]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:09:28.105009 ignition[933]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:09:28.105009 ignition[933]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:09:28.105009 ignition[933]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:09:28.105009 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:09:28.109168 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:09:28.109168 ignition[933]: INFO : files: files passed Jan 29 11:09:28.109168 ignition[933]: INFO : Ignition finished successfully Jan 29 11:09:28.106269 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:09:28.124477 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:09:28.127262 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:09:28.128098 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:09:28.128288 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
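The files stage is driven entirely by the user-data fetched earlier; the log shows its effects (an SSH key for core, tarballs fetched from get.helm.sh and the cilium-cli releases, a kubernetes sysext image plus the /etc/extensions/kubernetes.raw link, and prepare-helm.service enabled) but never the config itself. A hypothetical Ignition config fragment that would produce roughly these operations might look like the following (the spec version, key material, and unit contents are assumptions, not taken from the log):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... admin"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
          },
          {
            "path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." }
        ]
      }
    }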
Jan 29 11:09:28.156570 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:09:28.156570 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:09:28.159381 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:09:28.161237 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:09:28.163291 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:09:28.171438 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:09:28.203619 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:09:28.203780 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:09:28.205112 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:09:28.205565 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:09:28.206367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:09:28.212554 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:09:28.227403 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:09:28.240652 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:09:28.253039 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:09:28.253796 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:09:28.254598 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:09:28.255399 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:09:28.255575 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:09:28.257004 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:09:28.257836 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:09:28.258877 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:09:28.259520 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:09:28.260430 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:09:28.261346 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:09:28.262040 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:09:28.262774 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:09:28.263540 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:09:28.264169 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:09:28.264858 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:09:28.265060 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:09:28.266253 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:09:28.266651 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:09:28.267394 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:09:28.268154 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 29 11:09:28.269048 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:09:28.269237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:09:28.270246 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:09:28.270443 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:09:28.271210 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:09:28.271332 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:09:28.271943 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 11:09:28.272040 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:09:28.282659 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:09:28.284562 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:09:28.284829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:09:28.293494 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:09:28.295403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:09:28.295728 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:09:28.298823 ignition[985]: INFO : Ignition 2.20.0 Jan 29 11:09:28.298823 ignition[985]: INFO : Stage: umount Jan 29 11:09:28.298823 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:09:28.298823 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:09:28.296559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:09:28.305301 ignition[985]: INFO : umount: umount passed Jan 29 11:09:28.305301 ignition[985]: INFO : Ignition finished successfully Jan 29 11:09:28.296699 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:09:28.302893 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:09:28.303136 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:09:28.305130 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:09:28.305274 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:09:28.315062 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:09:28.315142 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:09:28.315678 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:09:28.315757 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:09:28.316730 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:09:28.316783 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:09:28.317663 systemd[1]: Stopped target network.target - Network. Jan 29 11:09:28.318299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:09:28.318362 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:09:28.318881 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:09:28.319148 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:09:28.320026 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:09:28.320781 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 29 11:09:28.323370 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:09:28.323787 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:09:28.323840 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:09:28.324256 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:09:28.324293 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:09:28.324808 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:09:28.324863 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:09:28.325819 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:09:28.325884 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:09:28.326423 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:09:28.328050 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:09:28.332266 systemd-networkd[745]: eth1: DHCPv6 lease lost Jan 29 11:09:28.334072 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:09:28.334755 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:09:28.334846 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:09:28.335892 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:09:28.336051 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:09:28.336312 systemd-networkd[745]: eth0: DHCPv6 lease lost Jan 29 11:09:28.338552 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:09:28.338729 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:09:28.340014 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:09:28.340130 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:09:28.344695 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:09:28.344794 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:09:28.352432 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:09:28.352913 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:09:28.353016 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:09:28.354641 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:09:28.354714 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:09:28.355198 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:09:28.355280 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:09:28.355751 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:09:28.355808 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:09:28.357424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:09:28.377430 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:09:28.378280 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:09:28.379727 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:09:28.379864 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:09:28.381580 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 29 11:09:28.381672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:09:28.382095 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:09:28.382135 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:09:28.382932 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:09:28.382990 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:09:28.384451 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:09:28.384517 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:09:28.385250 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:09:28.385318 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:09:28.392622 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:09:28.393177 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:09:28.394963 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:09:28.395471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:09:28.395524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:09:28.401385 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:09:28.401584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:09:28.403242 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:09:28.408613 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:09:28.420722 systemd[1]: Switching root. Jan 29 11:09:28.448297 systemd-journald[183]: Journal stopped Jan 29 11:09:29.722246 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 29 11:09:29.722381 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:09:29.722408 kernel: SELinux: policy capability open_perms=1 Jan 29 11:09:29.722427 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:09:29.722446 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:09:29.722458 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:09:29.722480 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:09:29.722498 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:09:29.722516 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:09:29.722533 kernel: audit: type=1403 audit(1738148968.629:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:09:29.722555 systemd[1]: Successfully loaded SELinux policy in 38.594ms. Jan 29 11:09:29.722592 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.663ms. Jan 29 11:09:29.722612 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:09:29.722630 systemd[1]: Detected virtualization kvm. Jan 29 11:09:29.722648 systemd[1]: Detected architecture x86-64. Jan 29 11:09:29.722671 systemd[1]: Detected first boot. Jan 29 11:09:29.722692 systemd[1]: Hostname set to . Jan 29 11:09:29.722710 systemd[1]: Initializing machine ID from VM UUID. 
Jan 29 11:09:29.722724 zram_generator::config[1028]: No configuration found. Jan 29 11:09:29.722743 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:09:29.722759 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:09:29.722772 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:09:29.722784 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:09:29.722806 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:09:29.722825 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:09:29.722843 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:09:29.722862 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:09:29.722875 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:09:29.722895 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:09:29.722913 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:09:29.722932 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:09:29.722956 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:09:29.722975 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:09:29.722996 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:09:29.723014 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:09:29.723032 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:09:29.723050 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:09:29.723068 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:09:29.723086 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:09:29.723104 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:09:29.723131 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:09:29.723153 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:09:29.723171 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:09:29.726273 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:09:29.726309 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:09:29.726324 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:09:29.726338 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:09:29.726356 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:09:29.726369 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:09:29.726382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:09:29.726394 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:09:29.726406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:09:29.726419 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 29 11:09:29.726432 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:09:29.726445 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:09:29.726457 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:09:29.726473 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:29.726487 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:09:29.726500 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:09:29.726513 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:09:29.726526 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:09:29.726539 systemd[1]: Reached target machines.target - Containers. Jan 29 11:09:29.726555 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:09:29.726568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:09:29.726585 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:09:29.726598 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:09:29.726610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:09:29.726623 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:09:29.726636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:09:29.726649 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:09:29.726662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:09:29.726675 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:09:29.726691 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:09:29.726704 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:09:29.726716 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:09:29.726729 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:09:29.726742 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:09:29.726755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:09:29.726768 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:09:29.726787 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:09:29.726800 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:09:29.726816 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:09:29.726829 systemd[1]: Stopped verity-setup.service. Jan 29 11:09:29.726843 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:29.726856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 29 11:09:29.726868 kernel: loop: module loaded Jan 29 11:09:29.726882 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:09:29.726894 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:09:29.726907 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:09:29.726924 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:09:29.726937 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:09:29.726950 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:09:29.726962 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:09:29.726975 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:09:29.726987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:09:29.727003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:09:29.727016 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:09:29.727028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:09:29.727041 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:09:29.727054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:09:29.727069 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:09:29.727082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:09:29.727095 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:09:29.727108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:09:29.727123 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:09:29.727138 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:09:29.727151 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:09:29.727168 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:09:29.729800 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:09:29.729835 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:09:29.729849 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:09:29.729862 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:09:29.729877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:09:29.729925 systemd-journald[1101]: Collecting audit messages is disabled. Jan 29 11:09:29.729954 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:09:29.729968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:09:29.729987 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:09:29.730000 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:09:29.730015 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 29 11:09:29.730031 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:09:29.730051 systemd-journald[1101]: Journal started Jan 29 11:09:29.730084 systemd-journald[1101]: Runtime Journal (/run/log/journal/8546a1077dba41c08d5a6132c1334cc8) is 4.9M, max 39.3M, 34.4M free. Jan 29 11:09:29.739046 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:09:29.360522 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:09:29.382663 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:09:29.383201 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:09:29.743469 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:09:29.755425 kernel: fuse: init (API version 7.39) Jan 29 11:09:29.779669 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:09:29.779846 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:09:29.783262 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:09:29.793314 kernel: loop0: detected capacity change from 0 to 205544 Jan 29 11:09:29.790724 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:09:29.798529 systemd-journald[1101]: Time spent on flushing to /var/log/journal/8546a1077dba41c08d5a6132c1334cc8 is 102.700ms for 986 entries. Jan 29 11:09:29.798529 systemd-journald[1101]: System Journal (/var/log/journal/8546a1077dba41c08d5a6132c1334cc8) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:09:29.913937 systemd-journald[1101]: Received client request to flush runtime journal. Jan 29 11:09:29.913983 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:09:29.914000 kernel: loop1: detected capacity change from 0 to 141000 Jan 29 11:09:29.914014 kernel: ACPI: bus type drm_connector registered Jan 29 11:09:29.803592 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:09:29.818323 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:09:29.837876 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:09:29.841488 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:09:29.844164 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:09:29.885240 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:09:29.888959 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:09:29.905148 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:09:29.907357 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:09:29.907560 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:09:29.918157 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:09:29.965623 kernel: loop2: detected capacity change from 0 to 138184 Jan 29 11:09:29.965175 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:09:29.979522 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:09:30.007745 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
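The journald lines above show the runtime journal in /run/log/journal being flushed into the persistent one in /var/log/journal (102.7 ms for 986 entries) once the root filesystem is writable. On a running system the same machinery can be inspected and triggered with journalctl; a small sketch using standard journalctl options (run as root):

    import subprocess

    # show how much space the runtime and persistent journals currently use
    subprocess.run(["journalctl", "--disk-usage"], check=True)

    # ask journald to flush /run/log/journal into /var/log/journal,
    # which is what systemd-journal-flush.service does during boot above
    subprocess.run(["journalctl", "--flush"], check=True)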
Jan 29 11:09:30.014467 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:09:30.025487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:09:30.027309 kernel: loop3: detected capacity change from 0 to 8 Jan 29 11:09:30.055235 kernel: loop4: detected capacity change from 0 to 205544 Jan 29 11:09:30.095383 kernel: loop5: detected capacity change from 0 to 141000 Jan 29 11:09:30.101699 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 29 11:09:30.101731 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 29 11:09:30.112475 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:09:30.125582 kernel: loop6: detected capacity change from 0 to 138184 Jan 29 11:09:30.160212 kernel: loop7: detected capacity change from 0 to 8 Jan 29 11:09:30.162410 (sd-merge)[1172]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 29 11:09:30.164494 (sd-merge)[1172]: Merged extensions into '/usr'. Jan 29 11:09:30.175998 systemd[1]: Reloading requested from client PID 1126 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:09:30.176024 systemd[1]: Reloading... Jan 29 11:09:30.343229 zram_generator::config[1200]: No configuration found. Jan 29 11:09:30.549263 ldconfig[1118]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:09:30.583052 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:09:30.639073 systemd[1]: Reloading finished in 461 ms. Jan 29 11:09:30.683510 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:09:30.685016 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:09:30.698955 systemd[1]: Starting ensure-sysext.service... Jan 29 11:09:30.712554 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:09:30.744081 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:09:30.744107 systemd[1]: Reloading... Jan 29 11:09:30.788990 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:09:30.790013 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:09:30.792919 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:09:30.793472 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 29 11:09:30.793555 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 29 11:09:30.805082 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:09:30.805101 systemd-tmpfiles[1244]: Skipping /boot Jan 29 11:09:30.854721 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:09:30.854741 systemd-tmpfiles[1244]: Skipping /boot Jan 29 11:09:30.895212 zram_generator::config[1277]: No configuration found. 
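The (sd-merge) lines are systemd-sysext at work: it collects the extension images it finds (here the Flatcar containerd and docker extensions, the kubernetes image linked into /etc/extensions by Ignition earlier, and the DigitalOcean OEM extension) and overlays them onto /usr, after which systemd reloads its units. A small sketch of inspecting that state (the directories listed are the standard sysext search paths; the status call needs root):

    import pathlib
    import subprocess

    # directories systemd-sysext scans for *.raw images or extension trees
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        p = pathlib.Path(d)
        if p.is_dir():
            for image in sorted(p.iterdir()):
                print("extension image:", image)

    # summarise what is currently merged onto /usr and /opt
    subprocess.run(["systemd-sysext", "status"], check=True)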
Jan 29 11:09:31.036816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:09:31.096955 systemd[1]: Reloading finished in 352 ms. Jan 29 11:09:31.112758 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:09:31.117854 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:09:31.129480 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:09:31.134508 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:09:31.140559 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:09:31.154449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:09:31.159496 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:09:31.166431 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:09:31.178546 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:09:31.182126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:31.182341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:09:31.190894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:09:31.204721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:09:31.210160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:09:31.210799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:09:31.210938 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:31.217130 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:31.218448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:09:31.218689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:09:31.218832 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:31.219666 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:09:31.226068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:31.226568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:09:31.236059 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:09:31.236920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 11:09:31.237146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:31.246788 systemd[1]: Finished ensure-sysext.service. Jan 29 11:09:31.248027 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:09:31.263483 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:09:31.273133 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:09:31.275753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:09:31.275919 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Jan 29 11:09:31.276043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:09:31.277083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:09:31.277657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:09:31.284536 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:09:31.305517 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:09:31.305774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:09:31.307345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:09:31.315695 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:09:31.315990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:09:31.328122 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:09:31.329592 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:09:31.339411 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:09:31.339827 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:09:31.340302 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:09:31.376073 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:09:31.376887 augenrules[1374]: No rules Jan 29 11:09:31.379602 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:09:31.379859 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:09:31.511850 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:09:31.513152 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:09:31.538366 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:09:31.559365 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 29 11:09:31.560349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:31.560560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:09:31.571558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 29 11:09:31.577486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:09:31.586457 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:09:31.588576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:09:31.588633 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:09:31.588649 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:31.590209 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1367) Jan 29 11:09:31.609545 systemd-networkd[1359]: lo: Link UP Jan 29 11:09:31.609558 systemd-networkd[1359]: lo: Gained carrier Jan 29 11:09:31.643371 kernel: ISO 9660 Extensions: RRIP_1991A Jan 29 11:09:31.644484 systemd-networkd[1359]: Enumeration completed Jan 29 11:09:31.647707 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:09:31.649517 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 29 11:09:31.656453 systemd-resolved[1319]: Positive Trust Anchors: Jan 29 11:09:31.656472 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:09:31.656529 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:09:31.665850 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:09:31.666120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:09:31.667815 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:09:31.669486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:09:31.673385 systemd-networkd[1359]: eth0: Configuring with /run/systemd/network/10-42:99:73:12:ca:04.network. Jan 29 11:09:31.674307 systemd-networkd[1359]: eth1: Configuring with /run/systemd/network/10-96:de:c5:6e:95:5d.network. Jan 29 11:09:31.675356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:09:31.676300 systemd-networkd[1359]: eth0: Link UP Jan 29 11:09:31.676309 systemd-networkd[1359]: eth0: Gained carrier Jan 29 11:09:31.676364 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:09:31.677941 systemd-resolved[1319]: Using system hostname 'ci-4186.1.0-5-b8e0b24f92'. Jan 29 11:09:31.680676 systemd-networkd[1359]: eth1: Link UP Jan 29 11:09:31.680832 systemd-networkd[1359]: eth1: Gained carrier Jan 29 11:09:31.681847 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:09:31.685985 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. 
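systemd-networkd picks up per-interface files that an earlier generator dropped into /run/systemd/network, keyed by MAC address, and then runs DHCP on both NICs (the leases on 143.198.77.23 and 10.124.0.4 were already visible in the initrd). The generated files themselves are not shown in the log; a plausible minimal version of 10-42:99:73:12:ca:04.network would be something like:

    [Match]
    MACAddress=42:99:73:12:ca:04

    [Network]
    DHCP=ipv4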
Jan 29 11:09:31.686328 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 29 11:09:31.690620 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:09:31.701623 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:09:31.703137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:09:31.703223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:09:31.703488 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:09:31.704170 systemd[1]: Reached target network.target - Network. Jan 29 11:09:31.705346 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:09:31.736460 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:09:31.746246 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:09:31.756462 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:09:31.780224 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 11:09:31.795254 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:09:31.846452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:09:31.853209 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:09:31.886220 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 11:09:31.909999 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 11:09:31.925618 kernel: Console: switching to colour dummy device 80x25 Jan 29 11:09:31.925716 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 11:09:31.925746 kernel: [drm] features: -context_init Jan 29 11:09:31.929715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:09:31.930266 kernel: [drm] number of scanouts: 1 Jan 29 11:09:31.930306 kernel: [drm] number of cap sets: 0 Jan 29 11:09:31.930452 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:09:31.952596 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 11:09:31.967337 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 11:09:31.967471 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 11:09:31.971226 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 11:09:31.980498 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:09:31.993958 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:09:31.994276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:09:32.013677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:09:32.045142 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:09:32.097268 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:09:32.110504 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:09:32.111825 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 11:09:32.128244 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:09:32.161780 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:09:32.163428 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:09:32.163594 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:09:32.163798 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:09:32.163900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:09:32.164234 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:09:32.164726 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:09:32.164871 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:09:32.164952 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:09:32.164995 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:09:32.165080 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:09:32.167968 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:09:32.171835 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:09:32.179981 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:09:32.183924 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:09:32.200685 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:09:32.201655 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:09:32.203876 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:09:32.205722 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:09:32.205768 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:09:32.208042 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:09:32.214523 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:09:32.227410 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:09:32.232511 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:09:32.243421 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:09:32.246841 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:09:32.247410 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:09:32.254804 jq[1442]: false Jan 29 11:09:32.257913 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:09:32.261015 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:09:32.268483 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:09:32.278118 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:09:32.290396 systemd[1]: Starting systemd-logind.service - User Login Management... 
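Among the units starting here is prepare-helm.service, the unit Ignition wrote and enabled earlier; per its description its job is to unpack the helm tarball fetched into /opt into /opt/bin. The unit's actual contents never appear in the log; a hypothetical oneshot unit with that effect could look like:

    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=/opt/helm-v3.13.2-linux-amd64.tar.gz

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz --strip-components=1 linux-amd64/helm

    [Install]
    WantedBy=multi-user.target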
Jan 29 11:09:32.292812 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:09:32.293989 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:09:32.303450 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:09:32.313071 dbus-daemon[1439]: [system] SELinux support is enabled Jan 29 11:09:32.330010 coreos-metadata[1438]: Jan 29 11:09:32.325 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:09:32.321045 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:09:32.322431 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:09:32.330324 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:09:32.338043 coreos-metadata[1438]: Jan 29 11:09:32.336 INFO Fetch successful Jan 29 11:09:32.341746 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:09:32.342334 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:09:32.349749 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:09:32.349936 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:09:32.354380 jq[1451]: true Jan 29 11:09:32.356463 update_engine[1450]: I20250129 11:09:32.356309 1450 main.cc:92] Flatcar Update Engine starting Jan 29 11:09:32.359742 update_engine[1450]: I20250129 11:09:32.359676 1450 update_check_scheduler.cc:74] Next update check in 4m19s Jan 29 11:09:32.377908 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:09:32.378017 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:09:32.384673 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:09:32.384770 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 29 11:09:32.384802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:09:32.385751 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:09:32.404539 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:09:32.420996 systemd[1]: motdgen.service: Deactivated successfully. 
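coreos-metadata resolves the droplet's identity by fetching the DigitalOcean link-local metadata endpoint logged above. The same document can be inspected by hand from inside the droplet; the jq filter and the field names shown are illustrative.

    # Hedged sketch: manual fetch of the endpoint coreos-metadata reports fetching above.
    curl -s http://169.254.169.254/metadata/v1.json | jq '{droplet_id, hostname, region}'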
Jan 29 11:09:32.421362 extend-filesystems[1443]: Found loop4 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found loop5 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found loop6 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found loop7 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found vda Jan 29 11:09:32.421362 extend-filesystems[1443]: Found vda1 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found vda2 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found vda3 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found usr Jan 29 11:09:32.421362 extend-filesystems[1443]: Found vda4 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found vda6 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found vda7 Jan 29 11:09:32.421362 extend-filesystems[1443]: Found vda9 Jan 29 11:09:32.421362 extend-filesystems[1443]: Checking size of /dev/vda9 Jan 29 11:09:32.503491 extend-filesystems[1443]: Resized partition /dev/vda9 Jan 29 11:09:32.421960 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:09:32.522169 jq[1462]: true Jan 29 11:09:32.522345 tar[1459]: linux-amd64/helm Jan 29 11:09:32.440799 systemd-logind[1448]: New seat seat0. Jan 29 11:09:32.485236 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:09:32.485263 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:09:32.488589 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:09:32.503574 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:09:32.521074 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:09:32.523597 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:09:32.530284 extend-filesystems[1490]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:09:32.540223 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 29 11:09:32.591279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1367) Jan 29 11:09:32.630556 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:09:32.654250 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:09:32.664690 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:09:32.676647 systemd[1]: Starting sshkeys.service... Jan 29 11:09:32.761839 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 11:09:32.774300 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:09:32.789736 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:09:32.799746 extend-filesystems[1490]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:09:32.799746 extend-filesystems[1490]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 11:09:32.799746 extend-filesystems[1490]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 29 11:09:32.810691 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Jan 29 11:09:32.810691 extend-filesystems[1443]: Found vdb Jan 29 11:09:32.803628 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:09:32.803850 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
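extend-filesystems grows the root partition to the full disk on first boot and then online-resizes the mounted ext4 filesystem; the logged 15121403 4k blocks works out to roughly 58 GiB. The commands below are a hedged sketch of the equivalent manual operation, not what Flatcar runs internally (growpart comes from cloud-utils and may not be present on this image).

    # Equivalent manual steps for the resize logged above; device names taken from this log.
    sudo growpart /dev/vda 9     # extend the partition table entry for vda9
    sudo resize2fs /dev/vda9     # online resize of the mounted ext4 root filesystem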
Jan 29 11:09:32.921051 coreos-metadata[1509]: Jan 29 11:09:32.920 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:09:32.939145 coreos-metadata[1509]: Jan 29 11:09:32.939 INFO Fetch successful Jan 29 11:09:32.957327 unknown[1509]: wrote ssh authorized keys file for user: core Jan 29 11:09:32.958257 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:09:33.004092 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:09:33.007235 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:09:33.013546 systemd[1]: Finished sshkeys.service. Jan 29 11:09:33.084862 containerd[1472]: time="2025-01-29T11:09:33.084676180Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:09:33.092928 systemd-networkd[1359]: eth1: Gained IPv6LL Jan 29 11:09:33.095362 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 29 11:09:33.101848 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:09:33.102772 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:09:33.118581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:33.126836 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:09:33.180987 containerd[1472]: time="2025-01-29T11:09:33.180881573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:33.191990 containerd[1472]: time="2025-01-29T11:09:33.191935764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:09:33.191990 containerd[1472]: time="2025-01-29T11:09:33.191979232Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:09:33.191990 containerd[1472]: time="2025-01-29T11:09:33.192006637Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192217873Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192244450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192305456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192335476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192542773Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192561251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192573790Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192582505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192675644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.192911135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193064 containerd[1472]: time="2025-01-29T11:09:33.193053867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:09:33.193486 containerd[1472]: time="2025-01-29T11:09:33.193069425Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:09:33.193486 containerd[1472]: time="2025-01-29T11:09:33.193151596Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:09:33.202217 containerd[1472]: time="2025-01-29T11:09:33.201291435Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:09:33.211581 containerd[1472]: time="2025-01-29T11:09:33.211524330Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:09:33.211581 containerd[1472]: time="2025-01-29T11:09:33.211602766Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:09:33.211772 containerd[1472]: time="2025-01-29T11:09:33.211620622Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:09:33.211772 containerd[1472]: time="2025-01-29T11:09:33.211642159Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:09:33.211772 containerd[1472]: time="2025-01-29T11:09:33.211659862Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:09:33.211894 containerd[1472]: time="2025-01-29T11:09:33.211854983Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212135315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212338654Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212358242Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212373504Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212387872Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212400953Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212413604Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212429212Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212445701Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212459882Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212473098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212483775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212507787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213060 containerd[1472]: time="2025-01-29T11:09:33.212530368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212544616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212557985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212569634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212582904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212596404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212617630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212638420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212656345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212667941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212680244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212692361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212707419Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212728873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212741335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.213966 containerd[1472]: time="2025-01-29T11:09:33.212954522Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:09:33.214485 containerd[1472]: time="2025-01-29T11:09:33.213017478Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:09:33.214485 containerd[1472]: time="2025-01-29T11:09:33.213046439Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:09:33.214485 containerd[1472]: time="2025-01-29T11:09:33.213063802Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:09:33.214485 containerd[1472]: time="2025-01-29T11:09:33.213091624Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:09:33.214485 containerd[1472]: time="2025-01-29T11:09:33.213112165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:09:33.214485 containerd[1472]: time="2025-01-29T11:09:33.213140769Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:09:33.214485 containerd[1472]: time="2025-01-29T11:09:33.213165103Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:09:33.228349 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:09:33.234347 containerd[1472]: time="2025-01-29T11:09:33.224856914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.225390933Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.225473858Z" level=info msg="Connect containerd service" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.225533797Z" level=info msg="using legacy CRI server" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.225544388Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.225729684Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.226615945Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:09:33.234413 
containerd[1472]: time="2025-01-29T11:09:33.227104633Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.227179060Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.233447681Z" level=info msg="Start subscribing containerd event" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.233604846Z" level=info msg="Start recovering state" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.233768078Z" level=info msg="Start event monitor" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.233798712Z" level=info msg="Start snapshots syncer" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.233814588Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:09:33.234413 containerd[1472]: time="2025-01-29T11:09:33.233826453Z" level=info msg="Start streaming server" Jan 29 11:09:33.241473 containerd[1472]: time="2025-01-29T11:09:33.235728754Z" level=info msg="containerd successfully booted in 0.152929s" Jan 29 11:09:33.236792 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:09:33.309702 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:09:33.362503 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:09:33.373965 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:09:33.387626 systemd[1]: Started sshd@0-143.198.77.23:22-139.178.89.65:39064.service - OpenSSH per-connection server daemon (139.178.89.65:39064). Jan 29 11:09:33.413819 systemd-networkd[1359]: eth0: Gained IPv6LL Jan 29 11:09:33.416810 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 29 11:09:33.424202 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:09:33.424757 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:09:33.438965 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:09:33.473506 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:09:33.486944 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:09:33.503056 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:09:33.505584 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:09:33.543876 sshd[1547]: Accepted publickey for core from 139.178.89.65 port 39064 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:09:33.549631 sshd-session[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:33.580884 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:09:33.592884 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:09:33.603578 systemd-logind[1448]: New session 1 of user core. Jan 29 11:09:33.631451 tar[1459]: linux-amd64/LICENSE Jan 29 11:09:33.631451 tar[1459]: linux-amd64/README.md Jan 29 11:09:33.661176 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:09:33.673684 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:09:33.678475 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 29 11:09:33.692021 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:09:33.835083 systemd[1561]: Queued start job for default target default.target. Jan 29 11:09:33.853095 systemd[1561]: Created slice app.slice - User Application Slice. Jan 29 11:09:33.853332 systemd[1561]: Reached target paths.target - Paths. Jan 29 11:09:33.853406 systemd[1561]: Reached target timers.target - Timers. Jan 29 11:09:33.857438 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:09:33.872623 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:09:33.873494 systemd[1561]: Reached target sockets.target - Sockets. Jan 29 11:09:33.873517 systemd[1561]: Reached target basic.target - Basic System. Jan 29 11:09:33.873571 systemd[1561]: Reached target default.target - Main User Target. Jan 29 11:09:33.873605 systemd[1561]: Startup finished in 170ms. Jan 29 11:09:33.873638 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:09:33.885518 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:09:33.970534 systemd[1]: Started sshd@1-143.198.77.23:22-139.178.89.65:39076.service - OpenSSH per-connection server daemon (139.178.89.65:39076). Jan 29 11:09:34.037019 sshd[1573]: Accepted publickey for core from 139.178.89.65 port 39076 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:09:34.038040 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:34.046857 systemd-logind[1448]: New session 2 of user core. Jan 29 11:09:34.052515 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:09:34.122456 sshd[1575]: Connection closed by 139.178.89.65 port 39076 Jan 29 11:09:34.123737 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:34.133246 systemd[1]: sshd@1-143.198.77.23:22-139.178.89.65:39076.service: Deactivated successfully. Jan 29 11:09:34.135463 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:09:34.138438 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:09:34.144616 systemd[1]: Started sshd@2-143.198.77.23:22-139.178.89.65:39082.service - OpenSSH per-connection server daemon (139.178.89.65:39082). Jan 29 11:09:34.150236 systemd-logind[1448]: Removed session 2. Jan 29 11:09:34.203764 sshd[1580]: Accepted publickey for core from 139.178.89.65 port 39082 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:09:34.205278 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:34.211305 systemd-logind[1448]: New session 3 of user core. Jan 29 11:09:34.217476 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:09:34.284160 sshd[1582]: Connection closed by 139.178.89.65 port 39082 Jan 29 11:09:34.285902 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:34.290903 systemd[1]: sshd@2-143.198.77.23:22-139.178.89.65:39082.service: Deactivated successfully. Jan 29 11:09:34.292990 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:09:34.295879 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:09:34.297607 systemd-logind[1448]: Removed session 3. Jan 29 11:09:34.404867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:34.406102 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 29 11:09:34.408071 systemd[1]: Startup finished in 1.956s (kernel) + 5.892s (initrd) + 5.816s (userspace) = 13.664s. Jan 29 11:09:34.419802 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:34.432333 agetty[1555]: failed to open credentials directory Jan 29 11:09:34.437956 agetty[1556]: failed to open credentials directory Jan 29 11:09:35.075762 kubelet[1591]: E0129 11:09:35.075674 1591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:35.078893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:35.079042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:35.079443 systemd[1]: kubelet.service: Consumed 1.100s CPU time. Jan 29 11:09:44.314702 systemd[1]: Started sshd@3-143.198.77.23:22-139.178.89.65:60108.service - OpenSSH per-connection server daemon (139.178.89.65:60108). Jan 29 11:09:44.369220 sshd[1603]: Accepted publickey for core from 139.178.89.65 port 60108 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:09:44.371392 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:44.379889 systemd-logind[1448]: New session 4 of user core. Jan 29 11:09:44.382487 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:09:44.447635 sshd[1605]: Connection closed by 139.178.89.65 port 60108 Jan 29 11:09:44.448547 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:44.463150 systemd[1]: sshd@3-143.198.77.23:22-139.178.89.65:60108.service: Deactivated successfully. Jan 29 11:09:44.465272 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:09:44.465973 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:09:44.479694 systemd[1]: Started sshd@4-143.198.77.23:22-139.178.89.65:60112.service - OpenSSH per-connection server daemon (139.178.89.65:60112). Jan 29 11:09:44.482397 systemd-logind[1448]: Removed session 4. Jan 29 11:09:44.536668 sshd[1610]: Accepted publickey for core from 139.178.89.65 port 60112 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:09:44.538544 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:44.546111 systemd-logind[1448]: New session 5 of user core. Jan 29 11:09:44.551578 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:09:44.611266 sshd[1612]: Connection closed by 139.178.89.65 port 60112 Jan 29 11:09:44.613698 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:44.623299 systemd[1]: sshd@4-143.198.77.23:22-139.178.89.65:60112.service: Deactivated successfully. Jan 29 11:09:44.625738 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:09:44.628666 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:09:44.635764 systemd[1]: Started sshd@5-143.198.77.23:22-139.178.89.65:60118.service - OpenSSH per-connection server daemon (139.178.89.65:60118). Jan 29 11:09:44.638337 systemd-logind[1448]: Removed session 5. 
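The kubelet exits here because it is started with a config file at /var/lib/kubelet/config.yaml (per the error) and that file does not exist yet; kubeadm normally writes it during init/join, so the repeated restart-and-fail cycle that follows is the expected holding pattern until then. Below is a hedged, minimal KubeletConfiguration of that shape, with illustrative default values rather than values recovered from this node.

    # Hedged sketch of the file the error above is looking for.
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local
    staticPodPath: /etc/kubernetes/manifests
    EOF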
Jan 29 11:09:44.694809 sshd[1617]: Accepted publickey for core from 139.178.89.65 port 60118 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:09:44.697539 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:44.703457 systemd-logind[1448]: New session 6 of user core. Jan 29 11:09:44.715520 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:09:44.779991 sshd[1619]: Connection closed by 139.178.89.65 port 60118 Jan 29 11:09:44.779871 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:44.792820 systemd[1]: sshd@5-143.198.77.23:22-139.178.89.65:60118.service: Deactivated successfully. Jan 29 11:09:44.794835 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:09:44.795821 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:09:44.806692 systemd[1]: Started sshd@6-143.198.77.23:22-139.178.89.65:60126.service - OpenSSH per-connection server daemon (139.178.89.65:60126). Jan 29 11:09:44.808841 systemd-logind[1448]: Removed session 6. Jan 29 11:09:44.858089 sshd[1624]: Accepted publickey for core from 139.178.89.65 port 60126 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:09:44.859957 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:44.865843 systemd-logind[1448]: New session 7 of user core. Jan 29 11:09:44.876648 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:09:44.947919 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:09:44.948312 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:09:44.965424 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 29 11:09:44.968737 sshd[1626]: Connection closed by 139.178.89.65 port 60126 Jan 29 11:09:44.969575 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:44.979865 systemd[1]: sshd@6-143.198.77.23:22-139.178.89.65:60126.service: Deactivated successfully. Jan 29 11:09:44.984048 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:09:44.987042 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:09:44.997723 systemd[1]: Started sshd@7-143.198.77.23:22-139.178.89.65:60138.service - OpenSSH per-connection server daemon (139.178.89.65:60138). Jan 29 11:09:45.000110 systemd-logind[1448]: Removed session 7. Jan 29 11:09:45.061850 sshd[1633]: Accepted publickey for core from 139.178.89.65 port 60138 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:09:45.064107 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:45.073838 systemd-logind[1448]: New session 8 of user core. Jan 29 11:09:45.078530 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:09:45.080084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:09:45.088759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:09:45.151468 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:09:45.151809 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:09:45.159732 sudo[1640]: pam_unix(sudo:session): session closed for user root Jan 29 11:09:45.169825 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:09:45.170762 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:09:45.194891 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:09:45.269516 augenrules[1666]: No rules Jan 29 11:09:45.271684 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:09:45.272223 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:09:45.276961 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 29 11:09:45.279585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:45.281421 sshd[1636]: Connection closed by 139.178.89.65 port 60138 Jan 29 11:09:45.283223 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Jan 29 11:09:45.290994 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:45.295502 systemd[1]: sshd@7-143.198.77.23:22-139.178.89.65:60138.service: Deactivated successfully. Jan 29 11:09:45.299565 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:09:45.304431 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:09:45.318981 systemd[1]: Started sshd@8-143.198.77.23:22-139.178.89.65:60140.service - OpenSSH per-connection server daemon (139.178.89.65:60140). Jan 29 11:09:45.321620 systemd-logind[1448]: Removed session 8. Jan 29 11:09:45.356028 kubelet[1672]: E0129 11:09:45.355932 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:45.362094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:45.362484 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:45.382086 sshd[1680]: Accepted publickey for core from 139.178.89.65 port 60140 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:09:45.384415 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:09:45.391360 systemd-logind[1448]: New session 9 of user core. Jan 29 11:09:45.399589 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:09:45.464998 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:09:45.466144 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:09:45.949712 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 29 11:09:45.952027 (dockerd)[1703]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:09:46.448047 dockerd[1703]: time="2025-01-29T11:09:46.447833572Z" level=info msg="Starting up" Jan 29 11:09:46.576003 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport609020807-merged.mount: Deactivated successfully. Jan 29 11:09:46.616968 dockerd[1703]: time="2025-01-29T11:09:46.616614373Z" level=info msg="Loading containers: start." Jan 29 11:09:46.848244 kernel: Initializing XFRM netlink socket Jan 29 11:09:46.884565 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Jan 29 11:09:46.961291 systemd-networkd[1359]: docker0: Link UP Jan 29 11:09:47.402708 systemd-resolved[1319]: Clock change detected. Flushing caches. Jan 29 11:09:47.403244 systemd-timesyncd[1341]: Contacted time server 50.218.103.254:123 (2.flatcar.pool.ntp.org). Jan 29 11:09:47.403339 systemd-timesyncd[1341]: Initial clock synchronization to Wed 2025-01-29 11:09:47.402535 UTC. Jan 29 11:09:47.442469 dockerd[1703]: time="2025-01-29T11:09:47.442403393Z" level=info msg="Loading containers: done." Jan 29 11:09:47.462658 dockerd[1703]: time="2025-01-29T11:09:47.462596460Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:09:47.462894 dockerd[1703]: time="2025-01-29T11:09:47.462760894Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 11:09:47.462969 dockerd[1703]: time="2025-01-29T11:09:47.462926909Z" level=info msg="Daemon has completed initialization" Jan 29 11:09:47.501142 dockerd[1703]: time="2025-01-29T11:09:47.500706687Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:09:47.500917 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:09:48.378008 containerd[1472]: time="2025-01-29T11:09:48.377534885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:09:48.938830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063249223.mount: Deactivated successfully. 
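The PullImage entries that follow are containerd's CRI plugin fetching the Kubernetes control-plane images, most likely triggered by a kubeadm preflight image pull. The same pull can be reproduced by hand against the CRI socket; the crictl invocation below is illustrative.

    # Hedged sketch: pulling the first image in the sequence via the CRI endpoint.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.31.5
    sudo crictl images | grep kube-apiserver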
Jan 29 11:09:50.190336 containerd[1472]: time="2025-01-29T11:09:50.190272360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:50.192140 containerd[1472]: time="2025-01-29T11:09:50.191482605Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 11:09:50.192140 containerd[1472]: time="2025-01-29T11:09:50.191544934Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:50.196131 containerd[1472]: time="2025-01-29T11:09:50.196032910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:50.198050 containerd[1472]: time="2025-01-29T11:09:50.197755242Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.820177515s" Jan 29 11:09:50.198050 containerd[1472]: time="2025-01-29T11:09:50.197816501Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:09:50.199835 containerd[1472]: time="2025-01-29T11:09:50.199793630Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:09:51.617960 containerd[1472]: time="2025-01-29T11:09:51.617873804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:51.619222 containerd[1472]: time="2025-01-29T11:09:51.619154187Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 11:09:51.620466 containerd[1472]: time="2025-01-29T11:09:51.620377845Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:51.623629 containerd[1472]: time="2025-01-29T11:09:51.623566242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:51.625539 containerd[1472]: time="2025-01-29T11:09:51.625349651Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.425511318s" Jan 29 11:09:51.625539 containerd[1472]: time="2025-01-29T11:09:51.625406598Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:09:51.626780 
containerd[1472]: time="2025-01-29T11:09:51.626717733Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:09:52.743463 containerd[1472]: time="2025-01-29T11:09:52.743402277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:52.744632 containerd[1472]: time="2025-01-29T11:09:52.744499659Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 11:09:52.745318 containerd[1472]: time="2025-01-29T11:09:52.745279247Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:52.750419 containerd[1472]: time="2025-01-29T11:09:52.749279735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:52.750419 containerd[1472]: time="2025-01-29T11:09:52.750274000Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.123514332s" Jan 29 11:09:52.750419 containerd[1472]: time="2025-01-29T11:09:52.750312465Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:09:52.751707 containerd[1472]: time="2025-01-29T11:09:52.751477543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:09:53.897585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138279504.mount: Deactivated successfully. 
Jan 29 11:09:54.415180 containerd[1472]: time="2025-01-29T11:09:54.415089267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:54.416256 containerd[1472]: time="2025-01-29T11:09:54.415937088Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:09:54.416962 containerd[1472]: time="2025-01-29T11:09:54.416883132Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:54.420609 containerd[1472]: time="2025-01-29T11:09:54.419479089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:54.420609 containerd[1472]: time="2025-01-29T11:09:54.420440621Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.668925747s" Jan 29 11:09:54.420609 containerd[1472]: time="2025-01-29T11:09:54.420475562Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:09:54.421182 containerd[1472]: time="2025-01-29T11:09:54.421142595Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:09:54.423347 systemd-resolved[1319]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 29 11:09:54.896685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638833783.mount: Deactivated successfully. 
Jan 29 11:09:55.785147 containerd[1472]: time="2025-01-29T11:09:55.784561807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:55.786819 containerd[1472]: time="2025-01-29T11:09:55.786770165Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:09:55.788390 containerd[1472]: time="2025-01-29T11:09:55.787164560Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:55.790893 containerd[1472]: time="2025-01-29T11:09:55.790853058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:55.793611 containerd[1472]: time="2025-01-29T11:09:55.793416569Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.372231633s" Jan 29 11:09:55.793611 containerd[1472]: time="2025-01-29T11:09:55.793472601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:09:55.794308 containerd[1472]: time="2025-01-29T11:09:55.794278878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:09:55.856921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:09:55.871889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:55.997942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:56.010688 (kubelet)[2021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:56.071998 kubelet[2021]: E0129 11:09:56.071792 2021 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:56.074577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:56.074785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:56.289553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486154343.mount: Deactivated successfully. 
Jan 29 11:09:56.292581 containerd[1472]: time="2025-01-29T11:09:56.292523528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:56.293608 containerd[1472]: time="2025-01-29T11:09:56.293549814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:09:56.294362 containerd[1472]: time="2025-01-29T11:09:56.294300233Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:56.297532 containerd[1472]: time="2025-01-29T11:09:56.297465070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:56.298133 containerd[1472]: time="2025-01-29T11:09:56.298081875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 503.769837ms" Jan 29 11:09:56.298211 containerd[1472]: time="2025-01-29T11:09:56.298136913Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:09:56.298869 containerd[1472]: time="2025-01-29T11:09:56.298640138Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:09:56.784794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3784413460.mount: Deactivated successfully. Jan 29 11:09:57.532339 systemd-resolved[1319]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
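The "degraded feature set" messages mean systemd-resolved probed the upstream DNS servers shown (67.207.67.2 and 67.207.67.3) and stepped down from UDP+EDNS0 to plain UDP after EDNS0 responses failed or timed out. The negotiated state per link can be inspected with resolvectl; the commands below are illustrative.

    # Hedged sketch: inspecting resolved's view of the upstream servers mentioned above.
    resolvectl status
    resolvectl statistics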
Jan 29 11:09:58.477778 containerd[1472]: time="2025-01-29T11:09:58.475961407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:58.477778 containerd[1472]: time="2025-01-29T11:09:58.477209353Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 11:09:58.477778 containerd[1472]: time="2025-01-29T11:09:58.477706007Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:58.481987 containerd[1472]: time="2025-01-29T11:09:58.481922982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:58.483811 containerd[1472]: time="2025-01-29T11:09:58.483751828Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.185078674s" Jan 29 11:09:58.483811 containerd[1472]: time="2025-01-29T11:09:58.483808192Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:10:01.164157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:01.187619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:01.260186 systemd[1]: Reloading requested from client PID 2112 ('systemctl') (unit session-9.scope)... Jan 29 11:10:01.260206 systemd[1]: Reloading... Jan 29 11:10:01.464176 zram_generator::config[2155]: No configuration found. Jan 29 11:10:01.730185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:10:01.922693 systemd[1]: Reloading finished in 661 ms. Jan 29 11:10:02.031585 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:10:02.031964 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:10:02.032511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:02.044829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:02.388644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:02.400919 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:10:02.607151 kubelet[2202]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:10:02.607151 kubelet[2202]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
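The deprecation warnings at this kubelet restart point at moving flag values into the kubelet's config file: containerRuntimeEndpoint is a KubeletConfiguration field in this release, while the pause/sandbox image is handled on the containerd side (sandbox_image in its CRI config). The appended key below is a hedged sketch of that migration, not a change made on this host.

    # Hedged sketch: config-file equivalent of the deprecated --container-runtime-endpoint flag.
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF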
Jan 29 11:10:02.607151 kubelet[2202]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:10:02.607151 kubelet[2202]: I0129 11:10:02.606764 2202 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:10:03.196588 kubelet[2202]: I0129 11:10:03.190808 2202 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:10:03.196588 kubelet[2202]: I0129 11:10:03.190860 2202 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:10:03.196588 kubelet[2202]: I0129 11:10:03.194916 2202 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:10:03.255703 kubelet[2202]: I0129 11:10:03.255625 2202 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:10:03.259555 kubelet[2202]: E0129 11:10:03.259061 2202 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.198.77.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.77.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:10:03.275487 kubelet[2202]: E0129 11:10:03.274983 2202 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:10:03.275487 kubelet[2202]: I0129 11:10:03.275028 2202 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:10:03.283255 kubelet[2202]: I0129 11:10:03.283200 2202 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:10:03.291302 kubelet[2202]: I0129 11:10:03.290373 2202 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:10:03.291302 kubelet[2202]: I0129 11:10:03.291044 2202 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:10:03.291551 kubelet[2202]: I0129 11:10:03.291137 2202 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-5-b8e0b24f92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:10:03.291551 kubelet[2202]: I0129 11:10:03.291451 2202 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:10:03.291551 kubelet[2202]: I0129 11:10:03.291465 2202 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:10:03.291830 kubelet[2202]: I0129 11:10:03.291789 2202 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:10:03.298250 kubelet[2202]: I0129 11:10:03.297613 2202 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:10:03.298250 kubelet[2202]: I0129 11:10:03.297738 2202 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:10:03.298250 kubelet[2202]: I0129 11:10:03.297806 2202 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:10:03.298250 kubelet[2202]: I0129 11:10:03.297831 2202 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:10:03.305597 kubelet[2202]: W0129 11:10:03.305499 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.77.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-5-b8e0b24f92&limit=500&resourceVersion=0": dial tcp 143.198.77.23:6443: connect: connection refused Jan 29 11:10:03.306354 kubelet[2202]: E0129 11:10:03.305745 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://143.198.77.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-5-b8e0b24f92&limit=500&resourceVersion=0\": dial tcp 143.198.77.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:10:03.309541 kubelet[2202]: W0129 11:10:03.309458 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.77.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.77.23:6443: connect: connection refused Jan 29 11:10:03.309814 kubelet[2202]: E0129 11:10:03.309783 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.77.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.77.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:10:03.311324 kubelet[2202]: I0129 11:10:03.310631 2202 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:10:03.315224 kubelet[2202]: I0129 11:10:03.315113 2202 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:10:03.315441 kubelet[2202]: W0129 11:10:03.315323 2202 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:10:03.316906 kubelet[2202]: I0129 11:10:03.316833 2202 server.go:1269] "Started kubelet" Jan 29 11:10:03.330644 kubelet[2202]: I0129 11:10:03.329600 2202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:10:03.341530 kubelet[2202]: I0129 11:10:03.339669 2202 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:10:03.344456 kubelet[2202]: E0129 11:10:03.331053 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.77.23:6443/api/v1/namespaces/default/events\": dial tcp 143.198.77.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-5-b8e0b24f92.181f254f356590dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-5-b8e0b24f92,UID:ci-4186.1.0-5-b8e0b24f92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-5-b8e0b24f92,},FirstTimestamp:2025-01-29 11:10:03.316793564 +0000 UTC m=+0.883513185,LastTimestamp:2025-01-29 11:10:03.316793564 +0000 UTC m=+0.883513185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-5-b8e0b24f92,}" Jan 29 11:10:03.355151 kubelet[2202]: I0129 11:10:03.344879 2202 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:10:03.355151 kubelet[2202]: I0129 11:10:03.345054 2202 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:10:03.355594 kubelet[2202]: I0129 11:10:03.355553 2202 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:10:03.356265 kubelet[2202]: I0129 11:10:03.349905 2202 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:10:03.356650 kubelet[2202]: I0129 11:10:03.346639 2202 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:10:03.358738 kubelet[2202]: I0129 11:10:03.349925 2202 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:10:03.358738 kubelet[2202]: E0129 11:10:03.351692 2202 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-5-b8e0b24f92\" not found" Jan 29 11:10:03.358738 kubelet[2202]: W0129 11:10:03.358261 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.77.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.77.23:6443: connect: connection refused Jan 29 11:10:03.358738 kubelet[2202]: E0129 11:10:03.358642 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.77.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.77.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:10:03.360260 kubelet[2202]: E0129 11:10:03.359192 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.77.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-5-b8e0b24f92?timeout=10s\": dial tcp 143.198.77.23:6443: connect: connection refused" interval="200ms" Jan 29 11:10:03.360260 kubelet[2202]: I0129 11:10:03.359688 2202 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:10:03.362274 kubelet[2202]: I0129 11:10:03.362232 2202 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:10:03.362930 kubelet[2202]: I0129 11:10:03.362675 2202 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:10:03.366731 kubelet[2202]: E0129 11:10:03.366204 2202 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:10:03.366731 kubelet[2202]: I0129 11:10:03.366484 2202 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:10:03.395727 kubelet[2202]: I0129 11:10:03.395688 2202 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:10:03.396046 kubelet[2202]: I0129 11:10:03.396023 2202 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:10:03.396355 kubelet[2202]: I0129 11:10:03.396324 2202 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:10:03.411995 kubelet[2202]: I0129 11:10:03.411605 2202 policy_none.go:49] "None policy: Start" Jan 29 11:10:03.415717 kubelet[2202]: I0129 11:10:03.415647 2202 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:10:03.416484 kubelet[2202]: I0129 11:10:03.416098 2202 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:10:03.418597 kubelet[2202]: I0129 11:10:03.418461 2202 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:10:03.423708 kubelet[2202]: I0129 11:10:03.423588 2202 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:10:03.423708 kubelet[2202]: I0129 11:10:03.423645 2202 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:10:03.423708 kubelet[2202]: I0129 11:10:03.423669 2202 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:10:03.424203 kubelet[2202]: E0129 11:10:03.423957 2202 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:10:03.433598 kubelet[2202]: W0129 11:10:03.433321 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.77.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.77.23:6443: connect: connection refused Jan 29 11:10:03.435405 kubelet[2202]: E0129 11:10:03.435278 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.77.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.77.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:10:03.441826 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:10:03.459081 kubelet[2202]: E0129 11:10:03.458509 2202 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-5-b8e0b24f92\" not found" Jan 29 11:10:03.463028 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:10:03.477068 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:10:03.487983 kubelet[2202]: I0129 11:10:03.487936 2202 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:10:03.489799 kubelet[2202]: I0129 11:10:03.489572 2202 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:10:03.492869 kubelet[2202]: I0129 11:10:03.490966 2202 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:10:03.496035 kubelet[2202]: I0129 11:10:03.495771 2202 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:10:03.501346 kubelet[2202]: E0129 11:10:03.501026 2202 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-5-b8e0b24f92\" not found" Jan 29 11:10:03.539796 systemd[1]: Created slice kubepods-burstable-pod9f8444b614ea97ec6c6906869feb6016.slice - libcontainer container kubepods-burstable-pod9f8444b614ea97ec6c6906869feb6016.slice. Jan 29 11:10:03.562249 kubelet[2202]: E0129 11:10:03.561919 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.77.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-5-b8e0b24f92?timeout=10s\": dial tcp 143.198.77.23:6443: connect: connection refused" interval="400ms" Jan 29 11:10:03.582530 systemd[1]: Created slice kubepods-burstable-poda789b695984095d4d7ca4df52323bed9.slice - libcontainer container kubepods-burstable-poda789b695984095d4d7ca4df52323bed9.slice. 
Jan 29 11:10:03.595481 kubelet[2202]: I0129 11:10:03.594827 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.597887 kubelet[2202]: E0129 11:10:03.595992 2202 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.77.23:6443/api/v1/nodes\": dial tcp 143.198.77.23:6443: connect: connection refused" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.599528 systemd[1]: Created slice kubepods-burstable-pod7e5cd8a82ed2e0c43ae3d091536029f3.slice - libcontainer container kubepods-burstable-pod7e5cd8a82ed2e0c43ae3d091536029f3.slice. Jan 29 11:10:03.662780 kubelet[2202]: I0129 11:10:03.662350 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f8444b614ea97ec6c6906869feb6016-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-5-b8e0b24f92\" (UID: \"9f8444b614ea97ec6c6906869feb6016\") " pod="kube-system/kube-apiserver-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.662780 kubelet[2202]: I0129 11:10:03.662421 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f8444b614ea97ec6c6906869feb6016-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-5-b8e0b24f92\" (UID: \"9f8444b614ea97ec6c6906869feb6016\") " pod="kube-system/kube-apiserver-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.662780 kubelet[2202]: I0129 11:10:03.662468 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: \"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.662780 kubelet[2202]: I0129 11:10:03.662497 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: \"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.662780 kubelet[2202]: I0129 11:10:03.662524 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: \"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.663907 kubelet[2202]: I0129 11:10:03.662548 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f8444b614ea97ec6c6906869feb6016-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-5-b8e0b24f92\" (UID: \"9f8444b614ea97ec6c6906869feb6016\") " pod="kube-system/kube-apiserver-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.663907 kubelet[2202]: I0129 11:10:03.662572 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: 
\"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.663907 kubelet[2202]: I0129 11:10:03.662594 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a789b695984095d4d7ca4df52323bed9-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-5-b8e0b24f92\" (UID: \"a789b695984095d4d7ca4df52323bed9\") " pod="kube-system/kube-scheduler-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.663907 kubelet[2202]: I0129 11:10:03.662617 2202 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: \"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.803421 kubelet[2202]: I0129 11:10:03.803178 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.804437 kubelet[2202]: E0129 11:10:03.803677 2202 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.77.23:6443/api/v1/nodes\": dial tcp 143.198.77.23:6443: connect: connection refused" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:03.862891 kubelet[2202]: E0129 11:10:03.862750 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:03.865152 containerd[1472]: time="2025-01-29T11:10:03.864986457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-5-b8e0b24f92,Uid:9f8444b614ea97ec6c6906869feb6016,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:03.868370 systemd-resolved[1319]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 29 11:10:03.891778 kubelet[2202]: E0129 11:10:03.889047 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:03.893099 containerd[1472]: time="2025-01-29T11:10:03.892518188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-5-b8e0b24f92,Uid:a789b695984095d4d7ca4df52323bed9,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:03.906885 kubelet[2202]: E0129 11:10:03.906478 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:03.907939 containerd[1472]: time="2025-01-29T11:10:03.907390021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-5-b8e0b24f92,Uid:7e5cd8a82ed2e0c43ae3d091536029f3,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:03.962965 kubelet[2202]: E0129 11:10:03.962895 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.77.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-5-b8e0b24f92?timeout=10s\": dial tcp 143.198.77.23:6443: connect: connection refused" interval="800ms" Jan 29 11:10:04.206648 kubelet[2202]: I0129 11:10:04.206291 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:04.207997 kubelet[2202]: E0129 11:10:04.207449 2202 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.77.23:6443/api/v1/nodes\": dial tcp 143.198.77.23:6443: connect: connection refused" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:04.207997 kubelet[2202]: W0129 11:10:04.207864 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.77.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.77.23:6443: connect: connection refused Jan 29 11:10:04.207997 kubelet[2202]: E0129 11:10:04.207951 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.77.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.77.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:10:04.417755 kubelet[2202]: W0129 11:10:04.415626 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.77.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.77.23:6443: connect: connection refused Jan 29 11:10:04.417755 kubelet[2202]: E0129 11:10:04.415845 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.77.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.77.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:10:04.488716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4018659128.mount: Deactivated successfully. 
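The reflector failures above are all the same shape: a GET against https://143.198.77.23:6443 with a field selector, refused because the kube-apiserver static pod is not running yet. For reference, a minimal client-go sketch that issues the equivalent node list; the kubeconfig path is an assumption (the kubelet itself uses its bootstrap/rotated credentials, not this file).

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same shape of request as the failing reflector: list nodes filtered to this node's name.
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=ci-4186.1.0-5-b8e0b24f92",
	})
	if err != nil {
		// While the local kube-apiserver is still starting, this fails with
		// "connect: connection refused", exactly as logged.
		log.Fatal(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```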
Jan 29 11:10:04.511864 containerd[1472]: time="2025-01-29T11:10:04.509534111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:10:04.517850 containerd[1472]: time="2025-01-29T11:10:04.517619983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:10:04.523159 containerd[1472]: time="2025-01-29T11:10:04.520348521Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:10:04.527909 containerd[1472]: time="2025-01-29T11:10:04.527648736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:10:04.527909 containerd[1472]: time="2025-01-29T11:10:04.527848664Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:10:04.530616 containerd[1472]: time="2025-01-29T11:10:04.530519313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:10:04.531701 kubelet[2202]: W0129 11:10:04.531599 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.77.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-5-b8e0b24f92&limit=500&resourceVersion=0": dial tcp 143.198.77.23:6443: connect: connection refused Jan 29 11:10:04.531867 kubelet[2202]: E0129 11:10:04.531715 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.77.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-5-b8e0b24f92&limit=500&resourceVersion=0\": dial tcp 143.198.77.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:10:04.538771 containerd[1472]: time="2025-01-29T11:10:04.537500666Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:10:04.544640 containerd[1472]: time="2025-01-29T11:10:04.543202509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.350112ms" Jan 29 11:10:04.548957 containerd[1472]: time="2025-01-29T11:10:04.546746348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:10:04.550647 containerd[1472]: time="2025-01-29T11:10:04.550411830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.740262ms" 
Jan 29 11:10:04.560798 containerd[1472]: time="2025-01-29T11:10:04.560558679Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.020506ms" Jan 29 11:10:04.739051 containerd[1472]: time="2025-01-29T11:10:04.737619154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:04.739051 containerd[1472]: time="2025-01-29T11:10:04.737775665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:04.739051 containerd[1472]: time="2025-01-29T11:10:04.737798088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:04.740145 containerd[1472]: time="2025-01-29T11:10:04.738906460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:04.764176 kubelet[2202]: E0129 11:10:04.763706 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.77.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-5-b8e0b24f92?timeout=10s\": dial tcp 143.198.77.23:6443: connect: connection refused" interval="1.6s" Jan 29 11:10:04.771006 containerd[1472]: time="2025-01-29T11:10:04.770728938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:04.771006 containerd[1472]: time="2025-01-29T11:10:04.770836063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:04.771006 containerd[1472]: time="2025-01-29T11:10:04.770861797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:04.772147 containerd[1472]: time="2025-01-29T11:10:04.771016528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:04.774089 containerd[1472]: time="2025-01-29T11:10:04.773019910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:04.774089 containerd[1472]: time="2025-01-29T11:10:04.773125709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:04.775147 containerd[1472]: time="2025-01-29T11:10:04.774427706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:04.775147 containerd[1472]: time="2025-01-29T11:10:04.774672250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:04.791799 systemd[1]: Started cri-containerd-95f116696691ab65572ec8c9dc01386b35948cb5f4a69e144fa55d3d0d3c720e.scope - libcontainer container 95f116696691ab65572ec8c9dc01386b35948cb5f4a69e144fa55d3d0d3c720e. 
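Interleaved with the image and sandbox activity, the lease controller keeps retrying against the unreachable API server, and its logged intervals double each time: 200ms, 400ms, 800ms, 1.6s (and 3.2s further down). A tiny sketch of that doubling; the 7s ceiling is an assumption for illustration, as the log only shows the sequence up to 3.2s.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling retry interval matching the sequence logged for
	// "Failed to ensure lease exists, will retry".
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed ceiling, not shown in this log
	for i := 0; i < 7; i++ {
		fmt.Println(interval) // 200ms 400ms 800ms 1.6s 3.2s 6.4s 7s
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```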
Jan 29 11:10:04.799383 kubelet[2202]: E0129 11:10:04.798858 2202 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.77.23:6443/api/v1/namespaces/default/events\": dial tcp 143.198.77.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-5-b8e0b24f92.181f254f356590dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-5-b8e0b24f92,UID:ci-4186.1.0-5-b8e0b24f92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-5-b8e0b24f92,},FirstTimestamp:2025-01-29 11:10:03.316793564 +0000 UTC m=+0.883513185,LastTimestamp:2025-01-29 11:10:03.316793564 +0000 UTC m=+0.883513185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-5-b8e0b24f92,}" Jan 29 11:10:04.831420 systemd[1]: Started cri-containerd-5b145c4bc0bc8074d481fd438889c58007d40ef218b544dd949a6296bd6f2b35.scope - libcontainer container 5b145c4bc0bc8074d481fd438889c58007d40ef218b544dd949a6296bd6f2b35. Jan 29 11:10:04.857939 systemd[1]: Started cri-containerd-c249a731ad1f2258597e0f9a7210a247982ab682bb8b21ff390a16e40d9ca9b7.scope - libcontainer container c249a731ad1f2258597e0f9a7210a247982ab682bb8b21ff390a16e40d9ca9b7. Jan 29 11:10:04.865784 kubelet[2202]: W0129 11:10:04.865691 2202 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.77.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.77.23:6443: connect: connection refused Jan 29 11:10:04.865937 kubelet[2202]: E0129 11:10:04.865794 2202 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.77.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.77.23:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:10:04.947600 containerd[1472]: time="2025-01-29T11:10:04.947524590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-5-b8e0b24f92,Uid:9f8444b614ea97ec6c6906869feb6016,Namespace:kube-system,Attempt:0,} returns sandbox id \"95f116696691ab65572ec8c9dc01386b35948cb5f4a69e144fa55d3d0d3c720e\"" Jan 29 11:10:04.953583 kubelet[2202]: E0129 11:10:04.953488 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:04.962661 containerd[1472]: time="2025-01-29T11:10:04.962525022Z" level=info msg="CreateContainer within sandbox \"95f116696691ab65572ec8c9dc01386b35948cb5f4a69e144fa55d3d0d3c720e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:10:04.966150 containerd[1472]: time="2025-01-29T11:10:04.964553818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-5-b8e0b24f92,Uid:7e5cd8a82ed2e0c43ae3d091536029f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b145c4bc0bc8074d481fd438889c58007d40ef218b544dd949a6296bd6f2b35\"" Jan 29 11:10:04.967134 kubelet[2202]: E0129 11:10:04.967082 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 
67.207.67.2 67.207.67.3" Jan 29 11:10:04.969440 containerd[1472]: time="2025-01-29T11:10:04.969388928Z" level=info msg="CreateContainer within sandbox \"5b145c4bc0bc8074d481fd438889c58007d40ef218b544dd949a6296bd6f2b35\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:10:04.974045 containerd[1472]: time="2025-01-29T11:10:04.973986397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-5-b8e0b24f92,Uid:a789b695984095d4d7ca4df52323bed9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c249a731ad1f2258597e0f9a7210a247982ab682bb8b21ff390a16e40d9ca9b7\"" Jan 29 11:10:04.975607 kubelet[2202]: E0129 11:10:04.975570 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:04.979643 containerd[1472]: time="2025-01-29T11:10:04.979554344Z" level=info msg="CreateContainer within sandbox \"c249a731ad1f2258597e0f9a7210a247982ab682bb8b21ff390a16e40d9ca9b7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:10:05.005053 containerd[1472]: time="2025-01-29T11:10:05.003254815Z" level=info msg="CreateContainer within sandbox \"5b145c4bc0bc8074d481fd438889c58007d40ef218b544dd949a6296bd6f2b35\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e48736c09854a4ebed0fc68cd993c16d970f4e5b24f0c58e3c7ea263d5a98238\"" Jan 29 11:10:05.005441 containerd[1472]: time="2025-01-29T11:10:05.005384981Z" level=info msg="StartContainer for \"e48736c09854a4ebed0fc68cd993c16d970f4e5b24f0c58e3c7ea263d5a98238\"" Jan 29 11:10:05.005677 containerd[1472]: time="2025-01-29T11:10:05.005648400Z" level=info msg="CreateContainer within sandbox \"95f116696691ab65572ec8c9dc01386b35948cb5f4a69e144fa55d3d0d3c720e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"242b5b3de64c8d2d5846e4f46d378dd5acd3abc0b8aaac09920c540f0297ac9f\"" Jan 29 11:10:05.006525 containerd[1472]: time="2025-01-29T11:10:05.006485180Z" level=info msg="StartContainer for \"242b5b3de64c8d2d5846e4f46d378dd5acd3abc0b8aaac09920c540f0297ac9f\"" Jan 29 11:10:05.010583 containerd[1472]: time="2025-01-29T11:10:05.006964694Z" level=info msg="CreateContainer within sandbox \"c249a731ad1f2258597e0f9a7210a247982ab682bb8b21ff390a16e40d9ca9b7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4b817497e2d77ec46adef4fa2fa0c735d1be4899781f4e37af04c4b44e62b53\"" Jan 29 11:10:05.017001 kubelet[2202]: I0129 11:10:05.016943 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:05.017566 kubelet[2202]: E0129 11:10:05.017514 2202 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.77.23:6443/api/v1/nodes\": dial tcp 143.198.77.23:6443: connect: connection refused" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:05.017889 containerd[1472]: time="2025-01-29T11:10:05.017656726Z" level=info msg="StartContainer for \"c4b817497e2d77ec46adef4fa2fa0c735d1be4899781f4e37af04c4b44e62b53\"" Jan 29 11:10:05.072843 systemd[1]: Started cri-containerd-242b5b3de64c8d2d5846e4f46d378dd5acd3abc0b8aaac09920c540f0297ac9f.scope - libcontainer container 242b5b3de64c8d2d5846e4f46d378dd5acd3abc0b8aaac09920c540f0297ac9f. 
Jan 29 11:10:05.093476 systemd[1]: Started cri-containerd-c4b817497e2d77ec46adef4fa2fa0c735d1be4899781f4e37af04c4b44e62b53.scope - libcontainer container c4b817497e2d77ec46adef4fa2fa0c735d1be4899781f4e37af04c4b44e62b53. Jan 29 11:10:05.102386 systemd[1]: Started cri-containerd-e48736c09854a4ebed0fc68cd993c16d970f4e5b24f0c58e3c7ea263d5a98238.scope - libcontainer container e48736c09854a4ebed0fc68cd993c16d970f4e5b24f0c58e3c7ea263d5a98238. Jan 29 11:10:05.192070 containerd[1472]: time="2025-01-29T11:10:05.191427756Z" level=info msg="StartContainer for \"242b5b3de64c8d2d5846e4f46d378dd5acd3abc0b8aaac09920c540f0297ac9f\" returns successfully" Jan 29 11:10:05.238758 containerd[1472]: time="2025-01-29T11:10:05.238466506Z" level=info msg="StartContainer for \"c4b817497e2d77ec46adef4fa2fa0c735d1be4899781f4e37af04c4b44e62b53\" returns successfully" Jan 29 11:10:05.246132 containerd[1472]: time="2025-01-29T11:10:05.245558490Z" level=info msg="StartContainer for \"e48736c09854a4ebed0fc68cd993c16d970f4e5b24f0c58e3c7ea263d5a98238\" returns successfully" Jan 29 11:10:05.465616 kubelet[2202]: E0129 11:10:05.465567 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:05.466558 kubelet[2202]: E0129 11:10:05.466517 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:05.473270 kubelet[2202]: E0129 11:10:05.473221 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:06.477691 kubelet[2202]: E0129 11:10:06.476034 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:06.619340 kubelet[2202]: I0129 11:10:06.619298 2202 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:07.431931 kubelet[2202]: E0129 11:10:07.431869 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:07.481195 kubelet[2202]: E0129 11:10:07.479794 2202 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:07.585404 kubelet[2202]: I0129 11:10:07.585337 2202 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:07.585404 kubelet[2202]: E0129 11:10:07.585404 2202 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4186.1.0-5-b8e0b24f92\": node \"ci-4186.1.0-5-b8e0b24f92\" not found" Jan 29 11:10:07.686869 kubelet[2202]: E0129 11:10:07.686694 2202 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 29 11:10:08.312395 kubelet[2202]: I0129 11:10:08.312343 2202 apiserver.go:52] "Watching apiserver" Jan 29 11:10:08.357120 kubelet[2202]: I0129 11:10:08.357035 2202 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 
11:10:09.895810 systemd[1]: Reloading requested from client PID 2477 ('systemctl') (unit session-9.scope)... Jan 29 11:10:09.896238 systemd[1]: Reloading... Jan 29 11:10:10.025177 zram_generator::config[2513]: No configuration found. Jan 29 11:10:10.213140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:10:10.329057 systemd[1]: Reloading finished in 432 ms. Jan 29 11:10:10.387994 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:10.410920 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:10:10.411752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:10.412234 systemd[1]: kubelet.service: Consumed 1.234s CPU time, 114.1M memory peak, 0B memory swap peak. Jan 29 11:10:10.420639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:10.580430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:10.589674 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:10:10.691071 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:10:10.692474 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:10:10.692474 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:10:10.716438 kubelet[2567]: I0129 11:10:10.716318 2567 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:10:10.732280 kubelet[2567]: I0129 11:10:10.732073 2567 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:10:10.732280 kubelet[2567]: I0129 11:10:10.732278 2567 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:10:10.732774 kubelet[2567]: I0129 11:10:10.732753 2567 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:10:10.735562 kubelet[2567]: I0129 11:10:10.735516 2567 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:10:10.744849 kubelet[2567]: I0129 11:10:10.744381 2567 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:10:10.750046 kubelet[2567]: E0129 11:10:10.749965 2567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:10:10.750266 kubelet[2567]: I0129 11:10:10.750249 2567 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
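On this second start the kubelet reports "Client rotation is on" and loads its credentials from /var/lib/kubelet/pki/kubelet-client-current.pem, a single file holding both the client certificate and its key. A minimal standard-library sketch for inspecting that pair; the path is copied from the log line above, and passing the same file twice works because both PEM blocks live in it.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
)

func main() {
	// Combined cert+key file named in the kubelet log above.
	const pemPath = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pemPath, pemPath)
	if err != nil {
		log.Fatal(err)
	}
	cert, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
}
```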
Jan 29 11:10:10.758868 kubelet[2567]: I0129 11:10:10.758809 2567 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:10:10.759381 kubelet[2567]: I0129 11:10:10.759318 2567 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:10:10.759862 kubelet[2567]: I0129 11:10:10.759788 2567 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:10:10.761127 kubelet[2567]: I0129 11:10:10.760055 2567 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-5-b8e0b24f92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:10:10.761127 kubelet[2567]: I0129 11:10:10.760605 2567 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:10:10.761127 kubelet[2567]: I0129 11:10:10.760622 2567 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:10:10.761127 kubelet[2567]: I0129 11:10:10.760665 2567 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:10:10.761127 kubelet[2567]: I0129 11:10:10.760851 2567 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:10:10.761433 kubelet[2567]: I0129 11:10:10.760872 2567 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:10:10.761433 kubelet[2567]: I0129 11:10:10.760903 2567 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:10:10.761433 kubelet[2567]: I0129 11:10:10.760918 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:10:10.764558 kubelet[2567]: I0129 11:10:10.764529 2567 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:10:10.765204 kubelet[2567]: I0129 11:10:10.765179 2567 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:10:10.767584 kubelet[2567]: I0129 11:10:10.767544 2567 server.go:1269] "Started kubelet" Jan 29 
11:10:10.769766 kubelet[2567]: I0129 11:10:10.769720 2567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:10:10.770554 kubelet[2567]: I0129 11:10:10.770412 2567 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:10:10.773956 kubelet[2567]: I0129 11:10:10.771986 2567 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:10:10.778347 kubelet[2567]: I0129 11:10:10.777908 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:10:10.779026 kubelet[2567]: I0129 11:10:10.778998 2567 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:10:10.779194 kubelet[2567]: I0129 11:10:10.779173 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:10:10.781595 kubelet[2567]: I0129 11:10:10.781456 2567 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:10:10.782371 kubelet[2567]: E0129 11:10:10.781812 2567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-5-b8e0b24f92\" not found" Jan 29 11:10:10.788278 kubelet[2567]: I0129 11:10:10.784309 2567 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:10:10.788278 kubelet[2567]: I0129 11:10:10.784514 2567 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:10:10.788278 kubelet[2567]: I0129 11:10:10.787452 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:10:10.790141 kubelet[2567]: I0129 11:10:10.789366 2567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:10:10.790141 kubelet[2567]: I0129 11:10:10.789425 2567 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:10:10.790141 kubelet[2567]: I0129 11:10:10.789453 2567 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:10:10.790141 kubelet[2567]: E0129 11:10:10.789528 2567 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:10:10.800828 kubelet[2567]: I0129 11:10:10.800787 2567 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:10:10.801019 kubelet[2567]: I0129 11:10:10.800936 2567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:10:10.815592 kubelet[2567]: I0129 11:10:10.812455 2567 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:10:10.815592 kubelet[2567]: E0129 11:10:10.814406 2567 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:10:10.890120 kubelet[2567]: E0129 11:10:10.890050 2567 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:10:10.898609 kubelet[2567]: I0129 11:10:10.898266 2567 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:10:10.899902 kubelet[2567]: I0129 11:10:10.898897 2567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:10:10.899902 kubelet[2567]: I0129 11:10:10.898930 2567 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:10:10.899902 kubelet[2567]: I0129 11:10:10.899175 2567 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:10:10.899902 kubelet[2567]: I0129 11:10:10.899192 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:10:10.899902 kubelet[2567]: I0129 11:10:10.899216 2567 policy_none.go:49] "None policy: Start" Jan 29 11:10:10.902077 kubelet[2567]: I0129 11:10:10.901707 2567 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:10:10.902077 kubelet[2567]: I0129 11:10:10.901741 2567 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:10:10.902077 kubelet[2567]: I0129 11:10:10.901939 2567 state_mem.go:75] "Updated machine memory state" Jan 29 11:10:10.911555 kubelet[2567]: I0129 11:10:10.911508 2567 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:10:10.915413 kubelet[2567]: I0129 11:10:10.915373 2567 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:10:10.915541 kubelet[2567]: I0129 11:10:10.915396 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:10:10.916870 kubelet[2567]: I0129 11:10:10.916026 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:10:10.925219 sudo[2597]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:10:10.926307 sudo[2597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:10:11.033793 kubelet[2567]: I0129 11:10:11.032476 2567 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.046010 kubelet[2567]: I0129 11:10:11.045934 2567 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.046899 kubelet[2567]: I0129 11:10:11.046560 2567 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.115142 kubelet[2567]: W0129 11:10:11.114255 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:10:11.116149 kubelet[2567]: W0129 11:10:11.115827 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:10:11.117372 kubelet[2567]: W0129 11:10:11.117153 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:10:11.186997 kubelet[2567]: I0129 11:10:11.186546 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9f8444b614ea97ec6c6906869feb6016-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-5-b8e0b24f92\" (UID: \"9f8444b614ea97ec6c6906869feb6016\") " pod="kube-system/kube-apiserver-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.186997 kubelet[2567]: I0129 11:10:11.186589 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: \"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.186997 kubelet[2567]: I0129 11:10:11.186622 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: \"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.186997 kubelet[2567]: I0129 11:10:11.186645 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a789b695984095d4d7ca4df52323bed9-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-5-b8e0b24f92\" (UID: \"a789b695984095d4d7ca4df52323bed9\") " pod="kube-system/kube-scheduler-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.186997 kubelet[2567]: I0129 11:10:11.186677 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f8444b614ea97ec6c6906869feb6016-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-5-b8e0b24f92\" (UID: \"9f8444b614ea97ec6c6906869feb6016\") " pod="kube-system/kube-apiserver-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.187370 kubelet[2567]: I0129 11:10:11.186694 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f8444b614ea97ec6c6906869feb6016-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-5-b8e0b24f92\" (UID: \"9f8444b614ea97ec6c6906869feb6016\") " pod="kube-system/kube-apiserver-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.187370 kubelet[2567]: I0129 11:10:11.186721 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: \"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.187370 kubelet[2567]: I0129 11:10:11.186740 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: \"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.187370 kubelet[2567]: I0129 11:10:11.186794 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e5cd8a82ed2e0c43ae3d091536029f3-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-5-b8e0b24f92\" (UID: 
\"7e5cd8a82ed2e0c43ae3d091536029f3\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.415853 kubelet[2567]: E0129 11:10:11.415782 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:11.416418 kubelet[2567]: E0129 11:10:11.416383 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:11.419061 kubelet[2567]: E0129 11:10:11.418906 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:11.685646 sudo[2597]: pam_unix(sudo:session): session closed for user root Jan 29 11:10:11.762469 kubelet[2567]: I0129 11:10:11.762412 2567 apiserver.go:52] "Watching apiserver" Jan 29 11:10:11.786310 kubelet[2567]: I0129 11:10:11.786252 2567 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:10:11.854051 kubelet[2567]: E0129 11:10:11.853994 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:11.855846 kubelet[2567]: E0129 11:10:11.855079 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:11.866325 kubelet[2567]: W0129 11:10:11.866282 2567 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 11:10:11.866532 kubelet[2567]: E0129 11:10:11.866378 2567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-5-b8e0b24f92\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-5-b8e0b24f92" Jan 29 11:10:11.867136 kubelet[2567]: E0129 11:10:11.866620 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:11.918821 kubelet[2567]: I0129 11:10:11.918666 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-5-b8e0b24f92" podStartSLOduration=0.918644275 podStartE2EDuration="918.644275ms" podCreationTimestamp="2025-01-29 11:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:11.904803103 +0000 UTC m=+1.295357524" watchObservedRunningTime="2025-01-29 11:10:11.918644275 +0000 UTC m=+1.309198689" Jan 29 11:10:11.937246 kubelet[2567]: I0129 11:10:11.935182 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-5-b8e0b24f92" podStartSLOduration=0.935157989 podStartE2EDuration="935.157989ms" podCreationTimestamp="2025-01-29 11:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:11.920036375 +0000 UTC m=+1.310590801" watchObservedRunningTime="2025-01-29 11:10:11.935157989 +0000 UTC 
m=+1.325712441" Jan 29 11:10:11.937246 kubelet[2567]: I0129 11:10:11.935338 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-5-b8e0b24f92" podStartSLOduration=0.935328967 podStartE2EDuration="935.328967ms" podCreationTimestamp="2025-01-29 11:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:11.934919422 +0000 UTC m=+1.325473841" watchObservedRunningTime="2025-01-29 11:10:11.935328967 +0000 UTC m=+1.325883396" Jan 29 11:10:12.857080 kubelet[2567]: E0129 11:10:12.857036 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:13.312778 sudo[1685]: pam_unix(sudo:session): session closed for user root Jan 29 11:10:13.317192 sshd[1684]: Connection closed by 139.178.89.65 port 60140 Jan 29 11:10:13.318222 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:13.323026 systemd[1]: sshd@8-143.198.77.23:22-139.178.89.65:60140.service: Deactivated successfully. Jan 29 11:10:13.325231 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:10:13.325407 systemd[1]: session-9.scope: Consumed 5.098s CPU time, 144.7M memory peak, 0B memory swap peak. Jan 29 11:10:13.326193 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:10:13.327882 systemd-logind[1448]: Removed session 9. Jan 29 11:10:13.858646 kubelet[2567]: E0129 11:10:13.858564 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:13.957473 kubelet[2567]: E0129 11:10:13.955763 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:15.107140 kubelet[2567]: I0129 11:10:15.107027 2567 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:10:15.109758 containerd[1472]: time="2025-01-29T11:10:15.108601081Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:10:15.110441 kubelet[2567]: I0129 11:10:15.108928 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:10:15.742974 systemd[1]: Created slice kubepods-besteffort-pod6f9808c7_2536_4ce0_b406_77723ebf10ba.slice - libcontainer container kubepods-besteffort-pod6f9808c7_2536_4ce0_b406_77723ebf10ba.slice. Jan 29 11:10:15.771474 systemd[1]: Created slice kubepods-burstable-pod697d4142_1a88_4854_8dc9_8f615d5853f4.slice - libcontainer container kubepods-burstable-pod697d4142_1a88_4854_8dc9_8f615d5853f4.slice. 
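The two systemd "Created slice" records just above show how the kubelet's systemd cgroup driver names a pod slice: the pod UID (compare the volume records that follow, e.g. 6f9808c7-2536-4ce0-b406-77723ebf10ba for kube-proxy-hvmhq) has its dashes escaped to underscores and the unit is nested under the pod's QoS class. A minimal sketch of that mapping, covering only the two QoS classes seen here; the helper is illustrative, not kubelet code:

    # Sketch: reproduce the slice names from the "Created slice" records above.
    def pod_slice(uid: str, qos: str) -> str:
        escaped = uid.replace("-", "_")          # systemd unit names escape dashes
        return f"kubepods-{qos}-pod{escaped}.slice"

    assert pod_slice("6f9808c7-2536-4ce0-b406-77723ebf10ba", "besteffort") == \
        "kubepods-besteffort-pod6f9808c7_2536_4ce0_b406_77723ebf10ba.slice"
    assert pod_slice("697d4142-1a88-4854-8dc9-8f615d5853f4", "burstable") == \
        "kubepods-burstable-pod697d4142_1a88_4854_8dc9_8f615d5853f4.slice"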
Jan 29 11:10:15.923161 kubelet[2567]: I0129 11:10:15.922849 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-bpf-maps\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.923161 kubelet[2567]: I0129 11:10:15.922909 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-host-proc-sys-net\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.923161 kubelet[2567]: I0129 11:10:15.922990 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-lib-modules\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.923161 kubelet[2567]: I0129 11:10:15.923049 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-xtables-lock\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.923161 kubelet[2567]: I0129 11:10:15.923075 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/697d4142-1a88-4854-8dc9-8f615d5853f4-hubble-tls\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.923161 kubelet[2567]: I0129 11:10:15.923121 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsv8m\" (UniqueName: \"kubernetes.io/projected/697d4142-1a88-4854-8dc9-8f615d5853f4-kube-api-access-qsv8m\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.923692 kubelet[2567]: I0129 11:10:15.923203 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-run\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.923692 kubelet[2567]: I0129 11:10:15.923247 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-cgroup\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.923692 kubelet[2567]: I0129 11:10:15.923279 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-host-proc-sys-kernel\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.923692 kubelet[2567]: I0129 11:10:15.923312 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/6f9808c7-2536-4ce0-b406-77723ebf10ba-kube-proxy\") pod \"kube-proxy-hvmhq\" (UID: \"6f9808c7-2536-4ce0-b406-77723ebf10ba\") " pod="kube-system/kube-proxy-hvmhq" Jan 29 11:10:15.923692 kubelet[2567]: I0129 11:10:15.923336 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f9808c7-2536-4ce0-b406-77723ebf10ba-lib-modules\") pod \"kube-proxy-hvmhq\" (UID: \"6f9808c7-2536-4ce0-b406-77723ebf10ba\") " pod="kube-system/kube-proxy-hvmhq" Jan 29 11:10:15.923692 kubelet[2567]: I0129 11:10:15.923385 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/697d4142-1a88-4854-8dc9-8f615d5853f4-clustermesh-secrets\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.924226 kubelet[2567]: I0129 11:10:15.923424 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f9808c7-2536-4ce0-b406-77723ebf10ba-xtables-lock\") pod \"kube-proxy-hvmhq\" (UID: \"6f9808c7-2536-4ce0-b406-77723ebf10ba\") " pod="kube-system/kube-proxy-hvmhq" Jan 29 11:10:15.924226 kubelet[2567]: I0129 11:10:15.923467 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-config-path\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.924226 kubelet[2567]: I0129 11:10:15.923504 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-etc-cni-netd\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.924226 kubelet[2567]: I0129 11:10:15.923532 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-hostproc\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.924226 kubelet[2567]: I0129 11:10:15.923559 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cni-path\") pod \"cilium-mqll7\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " pod="kube-system/cilium-mqll7" Jan 29 11:10:15.924226 kubelet[2567]: I0129 11:10:15.923593 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wxqr\" (UniqueName: \"kubernetes.io/projected/6f9808c7-2536-4ce0-b406-77723ebf10ba-kube-api-access-6wxqr\") pod \"kube-proxy-hvmhq\" (UID: \"6f9808c7-2536-4ce0-b406-77723ebf10ba\") " pod="kube-system/kube-proxy-hvmhq" Jan 29 11:10:16.077145 kubelet[2567]: E0129 11:10:16.076957 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:16.080526 containerd[1472]: time="2025-01-29T11:10:16.080446345Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqll7,Uid:697d4142-1a88-4854-8dc9-8f615d5853f4,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:16.148809 containerd[1472]: time="2025-01-29T11:10:16.146795576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:16.155733 containerd[1472]: time="2025-01-29T11:10:16.151074300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:16.155505 systemd[1]: Created slice kubepods-besteffort-podb680a636_9854_4c00_b161_f27da27d89a0.slice - libcontainer container kubepods-besteffort-podb680a636_9854_4c00_b161_f27da27d89a0.slice. Jan 29 11:10:16.156349 containerd[1472]: time="2025-01-29T11:10:16.152576479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:16.156349 containerd[1472]: time="2025-01-29T11:10:16.152764248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:16.204584 systemd[1]: Started cri-containerd-2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1.scope - libcontainer container 2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1. Jan 29 11:10:16.226645 kubelet[2567]: I0129 11:10:16.226513 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9bd\" (UniqueName: \"kubernetes.io/projected/b680a636-9854-4c00-b161-f27da27d89a0-kube-api-access-sb9bd\") pod \"cilium-operator-5d85765b45-ckk9d\" (UID: \"b680a636-9854-4c00-b161-f27da27d89a0\") " pod="kube-system/cilium-operator-5d85765b45-ckk9d" Jan 29 11:10:16.226645 kubelet[2567]: I0129 11:10:16.226581 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b680a636-9854-4c00-b161-f27da27d89a0-cilium-config-path\") pod \"cilium-operator-5d85765b45-ckk9d\" (UID: \"b680a636-9854-4c00-b161-f27da27d89a0\") " pod="kube-system/cilium-operator-5d85765b45-ckk9d" Jan 29 11:10:16.267849 containerd[1472]: time="2025-01-29T11:10:16.267747836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqll7,Uid:697d4142-1a88-4854-8dc9-8f615d5853f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\"" Jan 29 11:10:16.271733 kubelet[2567]: E0129 11:10:16.270606 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:16.273012 containerd[1472]: time="2025-01-29T11:10:16.272967883Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:10:16.350481 kubelet[2567]: E0129 11:10:16.350293 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:16.353219 containerd[1472]: time="2025-01-29T11:10:16.351995589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvmhq,Uid:6f9808c7-2536-4ce0-b406-77723ebf10ba,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:16.390271 containerd[1472]: 
time="2025-01-29T11:10:16.390080082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:16.391052 containerd[1472]: time="2025-01-29T11:10:16.390809005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:16.391052 containerd[1472]: time="2025-01-29T11:10:16.390838077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:16.391052 containerd[1472]: time="2025-01-29T11:10:16.390951025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:16.416500 systemd[1]: Started cri-containerd-0d72252b78dee29d9ca843ab996c948143307a9c11ed654d209477be9a6080eb.scope - libcontainer container 0d72252b78dee29d9ca843ab996c948143307a9c11ed654d209477be9a6080eb. Jan 29 11:10:16.455013 containerd[1472]: time="2025-01-29T11:10:16.454879287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hvmhq,Uid:6f9808c7-2536-4ce0-b406-77723ebf10ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d72252b78dee29d9ca843ab996c948143307a9c11ed654d209477be9a6080eb\"" Jan 29 11:10:16.457241 kubelet[2567]: E0129 11:10:16.456596 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:16.463301 kubelet[2567]: E0129 11:10:16.463247 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:16.464830 containerd[1472]: time="2025-01-29T11:10:16.464769413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ckk9d,Uid:b680a636-9854-4c00-b161-f27da27d89a0,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:16.470288 containerd[1472]: time="2025-01-29T11:10:16.470133217Z" level=info msg="CreateContainer within sandbox \"0d72252b78dee29d9ca843ab996c948143307a9c11ed654d209477be9a6080eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:10:16.499046 containerd[1472]: time="2025-01-29T11:10:16.498802787Z" level=info msg="CreateContainer within sandbox \"0d72252b78dee29d9ca843ab996c948143307a9c11ed654d209477be9a6080eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b73ab493504bc937a4baa3ac9cd60ee7af20faf5254e8bc07bbb0202e1a1151\"" Jan 29 11:10:16.502924 containerd[1472]: time="2025-01-29T11:10:16.502438628Z" level=info msg="StartContainer for \"4b73ab493504bc937a4baa3ac9cd60ee7af20faf5254e8bc07bbb0202e1a1151\"" Jan 29 11:10:16.532455 containerd[1472]: time="2025-01-29T11:10:16.531883948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:16.532455 containerd[1472]: time="2025-01-29T11:10:16.532002892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:16.532455 containerd[1472]: time="2025-01-29T11:10:16.532028167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:16.532455 containerd[1472]: time="2025-01-29T11:10:16.532273955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:16.564884 systemd[1]: Started cri-containerd-4b73ab493504bc937a4baa3ac9cd60ee7af20faf5254e8bc07bbb0202e1a1151.scope - libcontainer container 4b73ab493504bc937a4baa3ac9cd60ee7af20faf5254e8bc07bbb0202e1a1151. Jan 29 11:10:16.575397 systemd[1]: Started cri-containerd-c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01.scope - libcontainer container c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01. Jan 29 11:10:16.638151 containerd[1472]: time="2025-01-29T11:10:16.637054677Z" level=info msg="StartContainer for \"4b73ab493504bc937a4baa3ac9cd60ee7af20faf5254e8bc07bbb0202e1a1151\" returns successfully" Jan 29 11:10:16.670402 containerd[1472]: time="2025-01-29T11:10:16.670333410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ckk9d,Uid:b680a636-9854-4c00-b161-f27da27d89a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\"" Jan 29 11:10:16.674201 kubelet[2567]: E0129 11:10:16.673878 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:16.877562 kubelet[2567]: E0129 11:10:16.876525 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:16.901467 kubelet[2567]: I0129 11:10:16.899074 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hvmhq" podStartSLOduration=1.8990442650000001 podStartE2EDuration="1.899044265s" podCreationTimestamp="2025-01-29 11:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:16.89884635 +0000 UTC m=+6.289400772" watchObservedRunningTime="2025-01-29 11:10:16.899044265 +0000 UTC m=+6.289598684" Jan 29 11:10:18.185201 update_engine[1450]: I20250129 11:10:18.185062 1450 update_attempter.cc:509] Updating boot flags... 
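The recurring dns.go:153 errors in this stretch are the kubelet trimming the node's resolver list: it applies at most three nameservers (the glibc resolver limit) and logs the line it actually uses. A rough model of that trim, not kubelet's own code; the fourth entry is a hypothetical stand-in for whatever extra server pushed the node over the limit:

    MAX_DNS_NAMESERVERS = 3   # kubelet's cap, matching the glibc resolver limit

    def apply_nameservers(nameservers):
        """Return the list kubelet would apply, warning like dns.go does."""
        if len(nameservers) > MAX_DNS_NAMESERVERS:
            applied = nameservers[:MAX_DNS_NAMESERVERS]
            print("Nameserver limits exceeded, applied nameserver line:", " ".join(applied))
            return applied
        return nameservers

    # Hypothetical host list; only the applied (trimmed) line appears in the log.
    apply_nameservers(["67.207.67.3", "67.207.67.2", "67.207.67.3", "2001:4860:4860::8888"])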
Jan 29 11:10:18.188324 kubelet[2567]: E0129 11:10:18.186798 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:18.254382 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2935) Jan 29 11:10:18.355360 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2937) Jan 29 11:10:18.881984 kubelet[2567]: E0129 11:10:18.881170 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:23.673391 kubelet[2567]: E0129 11:10:23.673349 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:23.966475 kubelet[2567]: E0129 11:10:23.965811 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:24.601214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059945890.mount: Deactivated successfully. Jan 29 11:10:27.198785 containerd[1472]: time="2025-01-29T11:10:27.165276692Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 11:10:27.240644 containerd[1472]: time="2025-01-29T11:10:27.238590774Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.965354106s" Jan 29 11:10:27.240644 containerd[1472]: time="2025-01-29T11:10:27.238659328Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 11:10:27.243371 containerd[1472]: time="2025-01-29T11:10:27.243319559Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:27.245310 containerd[1472]: time="2025-01-29T11:10:27.245254803Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:27.246224 containerd[1472]: time="2025-01-29T11:10:27.246191456Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:10:27.256768 containerd[1472]: time="2025-01-29T11:10:27.256713465Z" level=info msg="CreateContainer within sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:10:27.346952 containerd[1472]: time="2025-01-29T11:10:27.346874423Z" level=info msg="CreateContainer within sandbox 
\"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\"" Jan 29 11:10:27.347809 containerd[1472]: time="2025-01-29T11:10:27.347770293Z" level=info msg="StartContainer for \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\"" Jan 29 11:10:27.446194 systemd[1]: Started cri-containerd-31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f.scope - libcontainer container 31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f. Jan 29 11:10:27.495083 containerd[1472]: time="2025-01-29T11:10:27.494968221Z" level=info msg="StartContainer for \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\" returns successfully" Jan 29 11:10:27.512168 systemd[1]: cri-containerd-31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f.scope: Deactivated successfully. Jan 29 11:10:27.596039 containerd[1472]: time="2025-01-29T11:10:27.556645482Z" level=info msg="shim disconnected" id=31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f namespace=k8s.io Jan 29 11:10:27.596039 containerd[1472]: time="2025-01-29T11:10:27.596027777Z" level=warning msg="cleaning up after shim disconnected" id=31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f namespace=k8s.io Jan 29 11:10:27.596039 containerd[1472]: time="2025-01-29T11:10:27.596044895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:27.938561 kubelet[2567]: E0129 11:10:27.937144 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:27.950280 containerd[1472]: time="2025-01-29T11:10:27.949692340Z" level=info msg="CreateContainer within sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:10:27.966635 containerd[1472]: time="2025-01-29T11:10:27.966443948Z" level=info msg="CreateContainer within sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\"" Jan 29 11:10:27.967703 containerd[1472]: time="2025-01-29T11:10:27.967613006Z" level=info msg="StartContainer for \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\"" Jan 29 11:10:28.008613 systemd[1]: Started cri-containerd-a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2.scope - libcontainer container a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2. Jan 29 11:10:28.046341 containerd[1472]: time="2025-01-29T11:10:28.046252652Z" level=info msg="StartContainer for \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\" returns successfully" Jan 29 11:10:28.066261 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:10:28.066627 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:10:28.066732 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:10:28.073573 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:10:28.073846 systemd[1]: cri-containerd-a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2.scope: Deactivated successfully. 
Jan 29 11:10:28.117878 containerd[1472]: time="2025-01-29T11:10:28.117808164Z" level=info msg="shim disconnected" id=a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2 namespace=k8s.io Jan 29 11:10:28.118406 containerd[1472]: time="2025-01-29T11:10:28.118098321Z" level=warning msg="cleaning up after shim disconnected" id=a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2 namespace=k8s.io Jan 29 11:10:28.118406 containerd[1472]: time="2025-01-29T11:10:28.118121993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:28.123300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:10:28.146643 containerd[1472]: time="2025-01-29T11:10:28.146386618Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:10:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:10:28.324531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f-rootfs.mount: Deactivated successfully. Jan 29 11:10:28.799288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646748016.mount: Deactivated successfully. Jan 29 11:10:28.945958 kubelet[2567]: E0129 11:10:28.945499 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:28.956429 containerd[1472]: time="2025-01-29T11:10:28.956363076Z" level=info msg="CreateContainer within sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:10:29.018917 containerd[1472]: time="2025-01-29T11:10:29.018626000Z" level=info msg="CreateContainer within sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\"" Jan 29 11:10:29.027345 containerd[1472]: time="2025-01-29T11:10:29.027301990Z" level=info msg="StartContainer for \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\"" Jan 29 11:10:29.179379 systemd[1]: Started cri-containerd-d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce.scope - libcontainer container d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce. Jan 29 11:10:29.252223 containerd[1472]: time="2025-01-29T11:10:29.251918421Z" level=info msg="StartContainer for \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\" returns successfully" Jan 29 11:10:29.255976 systemd[1]: cri-containerd-d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce.scope: Deactivated successfully. 
Jan 29 11:10:29.324555 containerd[1472]: time="2025-01-29T11:10:29.324441710Z" level=info msg="shim disconnected" id=d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce namespace=k8s.io Jan 29 11:10:29.324555 containerd[1472]: time="2025-01-29T11:10:29.324523741Z" level=warning msg="cleaning up after shim disconnected" id=d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce namespace=k8s.io Jan 29 11:10:29.324555 containerd[1472]: time="2025-01-29T11:10:29.324534913Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:29.325523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce-rootfs.mount: Deactivated successfully. Jan 29 11:10:29.354074 containerd[1472]: time="2025-01-29T11:10:29.354006700Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:10:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:10:29.773152 containerd[1472]: time="2025-01-29T11:10:29.772738421Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:29.774718 containerd[1472]: time="2025-01-29T11:10:29.774631878Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 11:10:29.775713 containerd[1472]: time="2025-01-29T11:10:29.775643859Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:29.778571 containerd[1472]: time="2025-01-29T11:10:29.778491118Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.532258332s" Jan 29 11:10:29.778571 containerd[1472]: time="2025-01-29T11:10:29.778564433Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 11:10:29.781729 containerd[1472]: time="2025-01-29T11:10:29.781673836Z" level=info msg="CreateContainer within sandbox \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:10:29.808666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340634952.mount: Deactivated successfully. 
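The two image pulls recorded above come with byte counts and durations, which give a back-of-envelope transfer rate; "bytes read" counts compressed layer data fetched during that pull, so treat the result as a rough effective rate rather than a network measurement:

    # Figures copied from the "stop pulling image" / "Pulled image ... in ..." records above.
    pulls = {
        "cilium:v1.12.5":           (166_730_503, 10.965354106),  # bytes read, seconds
        "operator-generic:v1.12.5": (18_904_197,  2.532258332),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")
    # -> roughly 15.2 MB/s for the agent image and 7.5 MB/s for the operator image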
Jan 29 11:10:29.813522 containerd[1472]: time="2025-01-29T11:10:29.813463463Z" level=info msg="CreateContainer within sandbox \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\"" Jan 29 11:10:29.817154 containerd[1472]: time="2025-01-29T11:10:29.814520609Z" level=info msg="StartContainer for \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\"" Jan 29 11:10:29.884603 systemd[1]: Started cri-containerd-8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb.scope - libcontainer container 8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb. Jan 29 11:10:29.920570 containerd[1472]: time="2025-01-29T11:10:29.920495259Z" level=info msg="StartContainer for \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\" returns successfully" Jan 29 11:10:29.948759 kubelet[2567]: E0129 11:10:29.948714 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:29.955923 kubelet[2567]: E0129 11:10:29.954989 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:29.961259 containerd[1472]: time="2025-01-29T11:10:29.961208472Z" level=info msg="CreateContainer within sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:10:29.989295 containerd[1472]: time="2025-01-29T11:10:29.989210896Z" level=info msg="CreateContainer within sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\"" Jan 29 11:10:29.992511 containerd[1472]: time="2025-01-29T11:10:29.992448786Z" level=info msg="StartContainer for \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\"" Jan 29 11:10:30.029212 kubelet[2567]: I0129 11:10:30.028844 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ckk9d" podStartSLOduration=0.923996415 podStartE2EDuration="14.028805129s" podCreationTimestamp="2025-01-29 11:10:16 +0000 UTC" firstStartedPulling="2025-01-29 11:10:16.674783464 +0000 UTC m=+6.065337875" lastFinishedPulling="2025-01-29 11:10:29.779592177 +0000 UTC m=+19.170146589" observedRunningTime="2025-01-29 11:10:29.974135377 +0000 UTC m=+19.364689793" watchObservedRunningTime="2025-01-29 11:10:30.028805129 +0000 UTC m=+19.419359549" Jan 29 11:10:30.045372 systemd[1]: Started cri-containerd-373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232.scope - libcontainer container 373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232. Jan 29 11:10:30.093502 systemd[1]: cri-containerd-373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232.scope: Deactivated successfully. 
Jan 29 11:10:30.098154 containerd[1472]: time="2025-01-29T11:10:30.097600542Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod697d4142_1a88_4854_8dc9_8f615d5853f4.slice/cri-containerd-373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232.scope/memory.events\": no such file or directory" Jan 29 11:10:30.099954 containerd[1472]: time="2025-01-29T11:10:30.099396416Z" level=info msg="StartContainer for \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\" returns successfully" Jan 29 11:10:30.144653 containerd[1472]: time="2025-01-29T11:10:30.143593378Z" level=info msg="shim disconnected" id=373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232 namespace=k8s.io Jan 29 11:10:30.144653 containerd[1472]: time="2025-01-29T11:10:30.144426364Z" level=warning msg="cleaning up after shim disconnected" id=373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232 namespace=k8s.io Jan 29 11:10:30.144653 containerd[1472]: time="2025-01-29T11:10:30.144473576Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:30.961204 kubelet[2567]: E0129 11:10:30.961156 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:30.963188 kubelet[2567]: E0129 11:10:30.961893 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:30.965602 containerd[1472]: time="2025-01-29T11:10:30.965555208Z" level=info msg="CreateContainer within sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:10:30.991135 containerd[1472]: time="2025-01-29T11:10:30.990144501Z" level=info msg="CreateContainer within sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\"" Jan 29 11:10:30.991135 containerd[1472]: time="2025-01-29T11:10:30.990723703Z" level=info msg="StartContainer for \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\"" Jan 29 11:10:31.074512 systemd[1]: Started cri-containerd-ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644.scope - libcontainer container ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644. Jan 29 11:10:31.143294 containerd[1472]: time="2025-01-29T11:10:31.143238125Z" level=info msg="StartContainer for \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\" returns successfully" Jan 29 11:10:31.322879 kubelet[2567]: I0129 11:10:31.320697 2567 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:10:31.326909 systemd[1]: run-containerd-runc-k8s.io-ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644-runc.5uSUuZ.mount: Deactivated successfully. Jan 29 11:10:31.387264 systemd[1]: Created slice kubepods-burstable-podf4417476_c5c2_44de_a6b2_8b9be7e4fe87.slice - libcontainer container kubepods-burstable-podf4417476_c5c2_44de_a6b2_8b9be7e4fe87.slice. 
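From the cilium-mqll7 sandbox creation up to the record above where the node reports ready, the CreateContainer messages walk through Cilium's init steps in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then cilium-agent. A small sketch of how that ordering can be recovered from a journal dump like this one; the regex targets the &ContainerMetadata{Name:...} fields shown above, and the journal argument is assumed to hold the raw log text:

    import re

    NAME_RE = re.compile(r"CreateContainer within sandbox .*?&ContainerMetadata\{Name:([\w-]+),")

    def container_order(journal: str) -> list[str]:
        seen, order = set(), []
        for name in NAME_RE.findall(journal):
            if name not in seen:      # each CreateContainer logs a request and a
                seen.add(name)        # response record, so dedupe but keep order
                order.append(name)
        return order

    sample = ('msg="CreateContainer within sandbox \\"2e35...\\" for container '
              '&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" ... '
              'msg="CreateContainer within sandbox \\"2e35...\\" for container '
              '&ContainerMetadata{Name:cilium-agent,Attempt:0,}"')
    print(container_order(sample))    # ['mount-cgroup', 'cilium-agent']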
Jan 29 11:10:31.402057 systemd[1]: Created slice kubepods-burstable-pod668311ac_9aff_4ca2_a488_b6343119cf24.slice - libcontainer container kubepods-burstable-pod668311ac_9aff_4ca2_a488_b6343119cf24.slice. Jan 29 11:10:31.554270 kubelet[2567]: I0129 11:10:31.554212 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmqmx\" (UniqueName: \"kubernetes.io/projected/f4417476-c5c2-44de-a6b2-8b9be7e4fe87-kube-api-access-wmqmx\") pod \"coredns-6f6b679f8f-s9ftv\" (UID: \"f4417476-c5c2-44de-a6b2-8b9be7e4fe87\") " pod="kube-system/coredns-6f6b679f8f-s9ftv" Jan 29 11:10:31.554689 kubelet[2567]: I0129 11:10:31.554659 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4417476-c5c2-44de-a6b2-8b9be7e4fe87-config-volume\") pod \"coredns-6f6b679f8f-s9ftv\" (UID: \"f4417476-c5c2-44de-a6b2-8b9be7e4fe87\") " pod="kube-system/coredns-6f6b679f8f-s9ftv" Jan 29 11:10:31.554880 kubelet[2567]: I0129 11:10:31.554856 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw46s\" (UniqueName: \"kubernetes.io/projected/668311ac-9aff-4ca2-a488-b6343119cf24-kube-api-access-rw46s\") pod \"coredns-6f6b679f8f-f2fst\" (UID: \"668311ac-9aff-4ca2-a488-b6343119cf24\") " pod="kube-system/coredns-6f6b679f8f-f2fst" Jan 29 11:10:31.554983 kubelet[2567]: I0129 11:10:31.554968 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/668311ac-9aff-4ca2-a488-b6343119cf24-config-volume\") pod \"coredns-6f6b679f8f-f2fst\" (UID: \"668311ac-9aff-4ca2-a488-b6343119cf24\") " pod="kube-system/coredns-6f6b679f8f-f2fst" Jan 29 11:10:31.693875 kubelet[2567]: E0129 11:10:31.693448 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:31.695329 containerd[1472]: time="2025-01-29T11:10:31.695270405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s9ftv,Uid:f4417476-c5c2-44de-a6b2-8b9be7e4fe87,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:31.710295 kubelet[2567]: E0129 11:10:31.709274 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:31.715565 containerd[1472]: time="2025-01-29T11:10:31.714609255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f2fst,Uid:668311ac-9aff-4ca2-a488-b6343119cf24,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:31.967253 kubelet[2567]: E0129 11:10:31.967055 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:31.994021 kubelet[2567]: I0129 11:10:31.992532 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mqll7" podStartSLOduration=6.01873433 podStartE2EDuration="16.992512114s" podCreationTimestamp="2025-01-29 11:10:15 +0000 UTC" firstStartedPulling="2025-01-29 11:10:16.271870862 +0000 UTC m=+5.662425267" lastFinishedPulling="2025-01-29 11:10:27.245648653 +0000 UTC m=+16.636203051" observedRunningTime="2025-01-29 11:10:31.991400264 +0000 UTC 
m=+21.381954684" watchObservedRunningTime="2025-01-29 11:10:31.992512114 +0000 UTC m=+21.383066551" Jan 29 11:10:32.969831 kubelet[2567]: E0129 11:10:32.969785 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:33.735514 systemd-networkd[1359]: cilium_host: Link UP Jan 29 11:10:33.735755 systemd-networkd[1359]: cilium_net: Link UP Jan 29 11:10:33.735940 systemd-networkd[1359]: cilium_net: Gained carrier Jan 29 11:10:33.736098 systemd-networkd[1359]: cilium_host: Gained carrier Jan 29 11:10:33.906129 systemd-networkd[1359]: cilium_vxlan: Link UP Jan 29 11:10:33.906137 systemd-networkd[1359]: cilium_vxlan: Gained carrier Jan 29 11:10:33.972486 kubelet[2567]: E0129 11:10:33.972367 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:33.989370 systemd-networkd[1359]: cilium_net: Gained IPv6LL Jan 29 11:10:34.188571 systemd-networkd[1359]: cilium_host: Gained IPv6LL Jan 29 11:10:34.395747 kernel: NET: Registered PF_ALG protocol family Jan 29 11:10:35.100338 systemd-networkd[1359]: cilium_vxlan: Gained IPv6LL Jan 29 11:10:35.333356 systemd-networkd[1359]: lxc_health: Link UP Jan 29 11:10:35.341428 systemd-networkd[1359]: lxc_health: Gained carrier Jan 29 11:10:35.816655 systemd-networkd[1359]: lxc6dcc11b014ad: Link UP Jan 29 11:10:35.822167 kernel: eth0: renamed from tmpcecdd Jan 29 11:10:35.833484 systemd-networkd[1359]: lxc6dcc11b014ad: Gained carrier Jan 29 11:10:35.865779 systemd-networkd[1359]: lxc3cb578121937: Link UP Jan 29 11:10:35.868150 kernel: eth0: renamed from tmp2d623 Jan 29 11:10:35.876407 systemd-networkd[1359]: lxc3cb578121937: Gained carrier Jan 29 11:10:36.081255 kubelet[2567]: E0129 11:10:36.079842 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:37.017137 kubelet[2567]: E0129 11:10:37.017045 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:37.340405 systemd-networkd[1359]: lxc_health: Gained IPv6LL Jan 29 11:10:37.532540 systemd-networkd[1359]: lxc6dcc11b014ad: Gained IPv6LL Jan 29 11:10:37.788887 systemd-networkd[1359]: lxc3cb578121937: Gained IPv6LL Jan 29 11:10:38.020835 kubelet[2567]: E0129 11:10:38.020749 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:41.294670 containerd[1472]: time="2025-01-29T11:10:41.294351023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:41.294670 containerd[1472]: time="2025-01-29T11:10:41.294429879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:41.294670 containerd[1472]: time="2025-01-29T11:10:41.294444798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:41.294670 containerd[1472]: time="2025-01-29T11:10:41.294548938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:41.332731 systemd[1]: Started cri-containerd-cecdd3fd7c0ae62f2e2740c045d9187d51b299e7f1f986f1b434eae224f0db55.scope - libcontainer container cecdd3fd7c0ae62f2e2740c045d9187d51b299e7f1f986f1b434eae224f0db55. Jan 29 11:10:41.362262 containerd[1472]: time="2025-01-29T11:10:41.362125078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:41.362429 containerd[1472]: time="2025-01-29T11:10:41.362329518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:41.363518 containerd[1472]: time="2025-01-29T11:10:41.362966114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:41.363518 containerd[1472]: time="2025-01-29T11:10:41.363210099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:41.405424 systemd[1]: Started cri-containerd-2d62351d917513966dc1b7ddf3776662f08e4f32fd9d5ff085d392ba17015ead.scope - libcontainer container 2d62351d917513966dc1b7ddf3776662f08e4f32fd9d5ff085d392ba17015ead. Jan 29 11:10:41.432771 containerd[1472]: time="2025-01-29T11:10:41.432709518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-f2fst,Uid:668311ac-9aff-4ca2-a488-b6343119cf24,Namespace:kube-system,Attempt:0,} returns sandbox id \"cecdd3fd7c0ae62f2e2740c045d9187d51b299e7f1f986f1b434eae224f0db55\"" Jan 29 11:10:41.434919 kubelet[2567]: E0129 11:10:41.434882 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:41.440617 containerd[1472]: time="2025-01-29T11:10:41.440432768Z" level=info msg="CreateContainer within sandbox \"cecdd3fd7c0ae62f2e2740c045d9187d51b299e7f1f986f1b434eae224f0db55\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:10:41.463426 containerd[1472]: time="2025-01-29T11:10:41.463236447Z" level=info msg="CreateContainer within sandbox \"cecdd3fd7c0ae62f2e2740c045d9187d51b299e7f1f986f1b434eae224f0db55\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"969be931371fb88e8494f873ceed9731acdfdfc4522079813ec4fe80851d6d92\"" Jan 29 11:10:41.466044 containerd[1472]: time="2025-01-29T11:10:41.465374071Z" level=info msg="StartContainer for \"969be931371fb88e8494f873ceed9731acdfdfc4522079813ec4fe80851d6d92\"" Jan 29 11:10:41.533545 systemd[1]: Started cri-containerd-969be931371fb88e8494f873ceed9731acdfdfc4522079813ec4fe80851d6d92.scope - libcontainer container 969be931371fb88e8494f873ceed9731acdfdfc4522079813ec4fe80851d6d92. 
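The pod_startup_latency_tracker record for cilium-mqll7 a little above reports podStartE2EDuration=16.992512114s but podStartSLOduration=6.01873433s; the SLO figure leaves out image-pull time (lastFinishedPulling minus firstStartedPulling). The logged timestamps reproduce the split, truncated here to microseconds for strptime:

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    first_pull = datetime.strptime("2025-01-29 11:10:16.271870", fmt)
    last_pull  = datetime.strptime("2025-01-29 11:10:27.245648", fmt)
    e2e        = 16.992512114                        # podStartE2EDuration, seconds

    pull = (last_pull - first_pull).total_seconds()  # ~10.973778 s spent pulling images
    print(f"SLO duration: {e2e - pull:.6f}s")        # ~6.018734 s, matching the logged value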
Jan 29 11:10:41.568234 containerd[1472]: time="2025-01-29T11:10:41.566770938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s9ftv,Uid:f4417476-c5c2-44de-a6b2-8b9be7e4fe87,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d62351d917513966dc1b7ddf3776662f08e4f32fd9d5ff085d392ba17015ead\"" Jan 29 11:10:41.568432 kubelet[2567]: E0129 11:10:41.567857 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:41.572304 containerd[1472]: time="2025-01-29T11:10:41.572084724Z" level=info msg="CreateContainer within sandbox \"2d62351d917513966dc1b7ddf3776662f08e4f32fd9d5ff085d392ba17015ead\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:10:41.597432 containerd[1472]: time="2025-01-29T11:10:41.597323096Z" level=info msg="CreateContainer within sandbox \"2d62351d917513966dc1b7ddf3776662f08e4f32fd9d5ff085d392ba17015ead\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0031ec8ef8f5be2a8182e30d2e8515b5616f2c3335f3ea94b61785d22075c4e7\"" Jan 29 11:10:41.600932 containerd[1472]: time="2025-01-29T11:10:41.599003574Z" level=info msg="StartContainer for \"0031ec8ef8f5be2a8182e30d2e8515b5616f2c3335f3ea94b61785d22075c4e7\"" Jan 29 11:10:41.636850 containerd[1472]: time="2025-01-29T11:10:41.636794250Z" level=info msg="StartContainer for \"969be931371fb88e8494f873ceed9731acdfdfc4522079813ec4fe80851d6d92\" returns successfully" Jan 29 11:10:41.663452 systemd[1]: Started cri-containerd-0031ec8ef8f5be2a8182e30d2e8515b5616f2c3335f3ea94b61785d22075c4e7.scope - libcontainer container 0031ec8ef8f5be2a8182e30d2e8515b5616f2c3335f3ea94b61785d22075c4e7. Jan 29 11:10:41.709955 containerd[1472]: time="2025-01-29T11:10:41.709899563Z" level=info msg="StartContainer for \"0031ec8ef8f5be2a8182e30d2e8515b5616f2c3335f3ea94b61785d22075c4e7\" returns successfully" Jan 29 11:10:42.035786 kubelet[2567]: E0129 11:10:42.035484 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:42.042183 kubelet[2567]: E0129 11:10:42.041727 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:42.059305 kubelet[2567]: I0129 11:10:42.059249 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-f2fst" podStartSLOduration=26.059228313 podStartE2EDuration="26.059228313s" podCreationTimestamp="2025-01-29 11:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:42.057591356 +0000 UTC m=+31.448145776" watchObservedRunningTime="2025-01-29 11:10:42.059228313 +0000 UTC m=+31.449782732" Jan 29 11:10:42.080383 kubelet[2567]: I0129 11:10:42.080209 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-s9ftv" podStartSLOduration=26.080187879 podStartE2EDuration="26.080187879s" podCreationTimestamp="2025-01-29 11:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:42.078393828 +0000 UTC m=+31.468948245" 
watchObservedRunningTime="2025-01-29 11:10:42.080187879 +0000 UTC m=+31.470742299" Jan 29 11:10:42.308061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628428763.mount: Deactivated successfully. Jan 29 11:10:43.044403 kubelet[2567]: E0129 11:10:43.043964 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:43.044403 kubelet[2567]: E0129 11:10:43.044052 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:44.048009 kubelet[2567]: E0129 11:10:44.046080 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:44.048009 kubelet[2567]: E0129 11:10:44.046226 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:11:00.082541 systemd[1]: Started sshd@9-143.198.77.23:22-139.178.89.65:58148.service - OpenSSH per-connection server daemon (139.178.89.65:58148). Jan 29 11:11:00.179230 sshd[3968]: Accepted publickey for core from 139.178.89.65 port 58148 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:00.181628 sshd-session[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:00.190039 systemd-logind[1448]: New session 10 of user core. Jan 29 11:11:00.207418 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:11:00.925055 sshd[3970]: Connection closed by 139.178.89.65 port 58148 Jan 29 11:11:00.927403 sshd-session[3968]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:00.937063 systemd[1]: sshd@9-143.198.77.23:22-139.178.89.65:58148.service: Deactivated successfully. Jan 29 11:11:00.941979 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:11:00.943394 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:11:00.946612 systemd-logind[1448]: Removed session 10. Jan 29 11:11:05.948770 systemd[1]: Started sshd@10-143.198.77.23:22-139.178.89.65:46772.service - OpenSSH per-connection server daemon (139.178.89.65:46772). Jan 29 11:11:06.007424 sshd[3983]: Accepted publickey for core from 139.178.89.65 port 46772 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:06.009677 sshd-session[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:06.018164 systemd-logind[1448]: New session 11 of user core. Jan 29 11:11:06.023544 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:11:06.186315 sshd[3985]: Connection closed by 139.178.89.65 port 46772 Jan 29 11:11:06.187066 sshd-session[3983]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:06.194325 systemd[1]: sshd@10-143.198.77.23:22-139.178.89.65:46772.service: Deactivated successfully. Jan 29 11:11:06.197771 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:11:06.199905 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:11:06.201453 systemd-logind[1448]: Removed session 11. 
Jan 29 11:11:11.209824 systemd[1]: Started sshd@11-143.198.77.23:22-139.178.89.65:51260.service - OpenSSH per-connection server daemon (139.178.89.65:51260). Jan 29 11:11:11.267988 sshd[3999]: Accepted publickey for core from 139.178.89.65 port 51260 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:11.270688 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:11.277071 systemd-logind[1448]: New session 12 of user core. Jan 29 11:11:11.292438 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:11:11.439744 sshd[4003]: Connection closed by 139.178.89.65 port 51260 Jan 29 11:11:11.440663 sshd-session[3999]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:11.444611 systemd[1]: sshd@11-143.198.77.23:22-139.178.89.65:51260.service: Deactivated successfully. Jan 29 11:11:11.447546 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:11:11.448688 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:11:11.450050 systemd-logind[1448]: Removed session 12. Jan 29 11:11:16.462450 systemd[1]: Started sshd@12-143.198.77.23:22-139.178.89.65:51264.service - OpenSSH per-connection server daemon (139.178.89.65:51264). Jan 29 11:11:16.532265 sshd[4014]: Accepted publickey for core from 139.178.89.65 port 51264 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:16.534295 sshd-session[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:16.542211 systemd-logind[1448]: New session 13 of user core. Jan 29 11:11:16.554428 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:11:16.697194 sshd[4016]: Connection closed by 139.178.89.65 port 51264 Jan 29 11:11:16.697295 sshd-session[4014]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:16.700860 systemd[1]: sshd@12-143.198.77.23:22-139.178.89.65:51264.service: Deactivated successfully. Jan 29 11:11:16.702944 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:11:16.705178 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:11:16.706587 systemd-logind[1448]: Removed session 13. Jan 29 11:11:21.723978 systemd[1]: Started sshd@13-143.198.77.23:22-139.178.89.65:41346.service - OpenSSH per-connection server daemon (139.178.89.65:41346). Jan 29 11:11:21.780265 sshd[4030]: Accepted publickey for core from 139.178.89.65 port 41346 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:21.782627 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:21.789374 systemd-logind[1448]: New session 14 of user core. Jan 29 11:11:21.806468 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:11:21.940170 sshd[4032]: Connection closed by 139.178.89.65 port 41346 Jan 29 11:11:21.940958 sshd-session[4030]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:21.952561 systemd[1]: sshd@13-143.198.77.23:22-139.178.89.65:41346.service: Deactivated successfully. Jan 29 11:11:21.955750 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:11:21.958608 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:11:21.962562 systemd[1]: Started sshd@14-143.198.77.23:22-139.178.89.65:41362.service - OpenSSH per-connection server daemon (139.178.89.65:41362). Jan 29 11:11:21.965343 systemd-logind[1448]: Removed session 14. 
Jan 29 11:11:22.030466 sshd[4044]: Accepted publickey for core from 139.178.89.65 port 41362 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:22.032910 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:22.038651 systemd-logind[1448]: New session 15 of user core. Jan 29 11:11:22.046376 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:11:22.230780 sshd[4046]: Connection closed by 139.178.89.65 port 41362 Jan 29 11:11:22.232086 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:22.245004 systemd[1]: sshd@14-143.198.77.23:22-139.178.89.65:41362.service: Deactivated successfully. Jan 29 11:11:22.249896 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:11:22.252329 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:11:22.264297 systemd[1]: Started sshd@15-143.198.77.23:22-139.178.89.65:41378.service - OpenSSH per-connection server daemon (139.178.89.65:41378). Jan 29 11:11:22.267236 systemd-logind[1448]: Removed session 15. Jan 29 11:11:22.347858 sshd[4056]: Accepted publickey for core from 139.178.89.65 port 41378 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:22.349879 sshd-session[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:22.359180 systemd-logind[1448]: New session 16 of user core. Jan 29 11:11:22.363751 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:11:22.505871 sshd[4058]: Connection closed by 139.178.89.65 port 41378 Jan 29 11:11:22.507032 sshd-session[4056]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:22.512403 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:11:22.512585 systemd[1]: sshd@15-143.198.77.23:22-139.178.89.65:41378.service: Deactivated successfully. Jan 29 11:11:22.515704 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:11:22.516999 systemd-logind[1448]: Removed session 16. Jan 29 11:11:23.791450 kubelet[2567]: E0129 11:11:23.790932 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:11:25.790779 kubelet[2567]: E0129 11:11:25.790717 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:11:27.526567 systemd[1]: Started sshd@16-143.198.77.23:22-139.178.89.65:41384.service - OpenSSH per-connection server daemon (139.178.89.65:41384). Jan 29 11:11:27.578196 sshd[4069]: Accepted publickey for core from 139.178.89.65 port 41384 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:27.580063 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:27.585889 systemd-logind[1448]: New session 17 of user core. Jan 29 11:11:27.593524 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:11:27.748864 sshd[4071]: Connection closed by 139.178.89.65 port 41384 Jan 29 11:11:27.749662 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:27.753686 systemd[1]: sshd@16-143.198.77.23:22-139.178.89.65:41384.service: Deactivated successfully. Jan 29 11:11:27.754042 systemd-logind[1448]: Session 17 logged out. 
Waiting for processes to exit. Jan 29 11:11:27.757280 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:11:27.761552 systemd-logind[1448]: Removed session 17. Jan 29 11:11:32.769635 systemd[1]: Started sshd@17-143.198.77.23:22-139.178.89.65:55466.service - OpenSSH per-connection server daemon (139.178.89.65:55466). Jan 29 11:11:32.833710 sshd[4082]: Accepted publickey for core from 139.178.89.65 port 55466 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:32.835290 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:32.841280 systemd-logind[1448]: New session 18 of user core. Jan 29 11:11:32.845376 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:11:32.977613 sshd[4084]: Connection closed by 139.178.89.65 port 55466 Jan 29 11:11:32.978316 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:32.981759 systemd[1]: sshd@17-143.198.77.23:22-139.178.89.65:55466.service: Deactivated successfully. Jan 29 11:11:32.984168 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:11:32.985648 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:11:32.987634 systemd-logind[1448]: Removed session 18. Jan 29 11:11:36.791731 kubelet[2567]: E0129 11:11:36.790643 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:11:37.790535 kubelet[2567]: E0129 11:11:37.790430 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:11:38.008734 systemd[1]: Started sshd@18-143.198.77.23:22-139.178.89.65:55474.service - OpenSSH per-connection server daemon (139.178.89.65:55474). Jan 29 11:11:38.070170 sshd[4096]: Accepted publickey for core from 139.178.89.65 port 55474 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:38.071983 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:38.079404 systemd-logind[1448]: New session 19 of user core. Jan 29 11:11:38.089427 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:11:38.238387 sshd[4098]: Connection closed by 139.178.89.65 port 55474 Jan 29 11:11:38.239712 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:38.248733 systemd[1]: sshd@18-143.198.77.23:22-139.178.89.65:55474.service: Deactivated successfully. Jan 29 11:11:38.252470 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:11:38.253933 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:11:38.255523 systemd-logind[1448]: Removed session 19. Jan 29 11:11:43.268670 systemd[1]: Started sshd@19-143.198.77.23:22-139.178.89.65:60626.service - OpenSSH per-connection server daemon (139.178.89.65:60626). Jan 29 11:11:43.325396 sshd[4109]: Accepted publickey for core from 139.178.89.65 port 60626 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:43.327455 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:43.333913 systemd-logind[1448]: New session 20 of user core. Jan 29 11:11:43.341386 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 29 11:11:43.480873 sshd[4111]: Connection closed by 139.178.89.65 port 60626 Jan 29 11:11:43.481828 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:43.492513 systemd[1]: sshd@19-143.198.77.23:22-139.178.89.65:60626.service: Deactivated successfully. Jan 29 11:11:43.495024 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:11:43.497042 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:11:43.505662 systemd[1]: Started sshd@20-143.198.77.23:22-139.178.89.65:60630.service - OpenSSH per-connection server daemon (139.178.89.65:60630). Jan 29 11:11:43.507791 systemd-logind[1448]: Removed session 20. Jan 29 11:11:43.559252 sshd[4122]: Accepted publickey for core from 139.178.89.65 port 60630 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:43.560573 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:43.567744 systemd-logind[1448]: New session 21 of user core. Jan 29 11:11:43.569400 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:11:43.908926 sshd[4124]: Connection closed by 139.178.89.65 port 60630 Jan 29 11:11:43.910169 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:43.925226 systemd[1]: sshd@20-143.198.77.23:22-139.178.89.65:60630.service: Deactivated successfully. Jan 29 11:11:43.927610 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:11:43.929471 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:11:43.933725 systemd[1]: Started sshd@21-143.198.77.23:22-139.178.89.65:60638.service - OpenSSH per-connection server daemon (139.178.89.65:60638). Jan 29 11:11:43.935734 systemd-logind[1448]: Removed session 21. Jan 29 11:11:44.017780 sshd[4133]: Accepted publickey for core from 139.178.89.65 port 60638 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:44.019666 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:44.026431 systemd-logind[1448]: New session 22 of user core. Jan 29 11:11:44.034533 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:11:46.151450 sshd[4135]: Connection closed by 139.178.89.65 port 60638 Jan 29 11:11:46.152503 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:46.170251 systemd[1]: sshd@21-143.198.77.23:22-139.178.89.65:60638.service: Deactivated successfully. Jan 29 11:11:46.175232 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:11:46.179710 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:11:46.190620 systemd[1]: Started sshd@22-143.198.77.23:22-139.178.89.65:60650.service - OpenSSH per-connection server daemon (139.178.89.65:60650). Jan 29 11:11:46.194029 systemd-logind[1448]: Removed session 22. Jan 29 11:11:46.258711 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 60650 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:46.260561 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:46.266472 systemd-logind[1448]: New session 23 of user core. Jan 29 11:11:46.273500 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 29 11:11:46.674371 sshd[4153]: Connection closed by 139.178.89.65 port 60650 Jan 29 11:11:46.674870 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:46.686556 systemd[1]: sshd@22-143.198.77.23:22-139.178.89.65:60650.service: Deactivated successfully. Jan 29 11:11:46.689024 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:11:46.691609 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:11:46.698645 systemd[1]: Started sshd@23-143.198.77.23:22-139.178.89.65:60654.service - OpenSSH per-connection server daemon (139.178.89.65:60654). Jan 29 11:11:46.700451 systemd-logind[1448]: Removed session 23. Jan 29 11:11:46.752751 sshd[4162]: Accepted publickey for core from 139.178.89.65 port 60654 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:46.754463 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:46.760740 systemd-logind[1448]: New session 24 of user core. Jan 29 11:11:46.770432 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:11:46.792130 kubelet[2567]: E0129 11:11:46.791760 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:11:46.914170 sshd[4164]: Connection closed by 139.178.89.65 port 60654 Jan 29 11:11:46.914945 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:46.919468 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:11:46.919812 systemd[1]: sshd@23-143.198.77.23:22-139.178.89.65:60654.service: Deactivated successfully. Jan 29 11:11:46.922508 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:11:46.923926 systemd-logind[1448]: Removed session 24. Jan 29 11:11:48.791356 kubelet[2567]: E0129 11:11:48.790842 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:11:51.934580 systemd[1]: Started sshd@24-143.198.77.23:22-139.178.89.65:55942.service - OpenSSH per-connection server daemon (139.178.89.65:55942). Jan 29 11:11:52.000756 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 55942 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:52.001797 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:52.008006 systemd-logind[1448]: New session 25 of user core. Jan 29 11:11:52.016518 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:11:52.158300 sshd[4179]: Connection closed by 139.178.89.65 port 55942 Jan 29 11:11:52.159065 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:52.167185 systemd[1]: sshd@24-143.198.77.23:22-139.178.89.65:55942.service: Deactivated successfully. Jan 29 11:11:52.169861 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:11:52.171183 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:11:52.173439 systemd-logind[1448]: Removed session 25. Jan 29 11:11:57.177581 systemd[1]: Started sshd@25-143.198.77.23:22-139.178.89.65:55956.service - OpenSSH per-connection server daemon (139.178.89.65:55956). 
Jan 29 11:11:57.232785 sshd[4193]: Accepted publickey for core from 139.178.89.65 port 55956 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:11:57.234422 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:57.240325 systemd-logind[1448]: New session 26 of user core. Jan 29 11:11:57.250434 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 11:11:57.386573 sshd[4195]: Connection closed by 139.178.89.65 port 55956 Jan 29 11:11:57.387426 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:57.392513 systemd[1]: sshd@25-143.198.77.23:22-139.178.89.65:55956.service: Deactivated successfully. Jan 29 11:11:57.394915 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 11:11:57.396462 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:11:57.398653 systemd-logind[1448]: Removed session 26. Jan 29 11:12:02.411510 systemd[1]: Started sshd@26-143.198.77.23:22-139.178.89.65:57352.service - OpenSSH per-connection server daemon (139.178.89.65:57352). Jan 29 11:12:02.471791 sshd[4206]: Accepted publickey for core from 139.178.89.65 port 57352 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:12:02.474413 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:02.483507 systemd-logind[1448]: New session 27 of user core. Jan 29 11:12:02.489488 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 11:12:02.645051 sshd[4208]: Connection closed by 139.178.89.65 port 57352 Jan 29 11:12:02.645905 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:02.651080 systemd[1]: sshd@26-143.198.77.23:22-139.178.89.65:57352.service: Deactivated successfully. Jan 29 11:12:02.656389 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 11:12:02.659021 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit. Jan 29 11:12:02.661185 systemd-logind[1448]: Removed session 27. Jan 29 11:12:02.792472 kubelet[2567]: E0129 11:12:02.791195 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:04.791093 kubelet[2567]: E0129 11:12:04.790682 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:07.662031 systemd[1]: Started sshd@27-143.198.77.23:22-139.178.89.65:57364.service - OpenSSH per-connection server daemon (139.178.89.65:57364). Jan 29 11:12:07.715712 sshd[4219]: Accepted publickey for core from 139.178.89.65 port 57364 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:12:07.717680 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:07.725378 systemd-logind[1448]: New session 28 of user core. Jan 29 11:12:07.730394 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 29 11:12:07.864828 sshd[4221]: Connection closed by 139.178.89.65 port 57364 Jan 29 11:12:07.866346 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:07.879663 systemd[1]: sshd@27-143.198.77.23:22-139.178.89.65:57364.service: Deactivated successfully. 
Jan 29 11:12:07.883358 systemd[1]: session-28.scope: Deactivated successfully. Jan 29 11:12:07.886350 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit. Jan 29 11:12:07.895555 systemd[1]: Started sshd@28-143.198.77.23:22-139.178.89.65:57372.service - OpenSSH per-connection server daemon (139.178.89.65:57372). Jan 29 11:12:07.897658 systemd-logind[1448]: Removed session 28. Jan 29 11:12:07.974417 sshd[4232]: Accepted publickey for core from 139.178.89.65 port 57372 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:12:07.976572 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:07.983062 systemd-logind[1448]: New session 29 of user core. Jan 29 11:12:07.989449 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 29 11:12:09.407447 containerd[1472]: time="2025-01-29T11:12:09.407378872Z" level=info msg="StopContainer for \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\" with timeout 30 (s)" Jan 29 11:12:09.416747 containerd[1472]: time="2025-01-29T11:12:09.416521900Z" level=info msg="Stop container \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\" with signal terminated" Jan 29 11:12:09.448864 systemd[1]: cri-containerd-8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb.scope: Deactivated successfully. Jan 29 11:12:09.455650 containerd[1472]: time="2025-01-29T11:12:09.455587183Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:12:09.466378 containerd[1472]: time="2025-01-29T11:12:09.466314568Z" level=info msg="StopContainer for \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\" with timeout 2 (s)" Jan 29 11:12:09.466882 containerd[1472]: time="2025-01-29T11:12:09.466843681Z" level=info msg="Stop container \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\" with signal terminated" Jan 29 11:12:09.482531 systemd-networkd[1359]: lxc_health: Link DOWN Jan 29 11:12:09.482546 systemd-networkd[1359]: lxc_health: Lost carrier Jan 29 11:12:09.511334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb-rootfs.mount: Deactivated successfully. Jan 29 11:12:09.513387 systemd[1]: cri-containerd-ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644.scope: Deactivated successfully. Jan 29 11:12:09.513691 systemd[1]: cri-containerd-ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644.scope: Consumed 9.382s CPU time. 
Jan 29 11:12:09.516295 containerd[1472]: time="2025-01-29T11:12:09.516187579Z" level=info msg="shim disconnected" id=8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb namespace=k8s.io Jan 29 11:12:09.516295 containerd[1472]: time="2025-01-29T11:12:09.516267355Z" level=warning msg="cleaning up after shim disconnected" id=8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb namespace=k8s.io Jan 29 11:12:09.516295 containerd[1472]: time="2025-01-29T11:12:09.516284322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:09.566856 containerd[1472]: time="2025-01-29T11:12:09.566339541Z" level=info msg="StopContainer for \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\" returns successfully" Jan 29 11:12:09.569084 containerd[1472]: time="2025-01-29T11:12:09.567569128Z" level=info msg="StopPodSandbox for \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\"" Jan 29 11:12:09.567888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644-rootfs.mount: Deactivated successfully. Jan 29 11:12:09.577995 containerd[1472]: time="2025-01-29T11:12:09.577924749Z" level=info msg="shim disconnected" id=ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644 namespace=k8s.io Jan 29 11:12:09.577995 containerd[1472]: time="2025-01-29T11:12:09.577993184Z" level=warning msg="cleaning up after shim disconnected" id=ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644 namespace=k8s.io Jan 29 11:12:09.577995 containerd[1472]: time="2025-01-29T11:12:09.578009213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:09.578624 containerd[1472]: time="2025-01-29T11:12:09.571424846Z" level=info msg="Container to stop \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:12:09.583123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01-shm.mount: Deactivated successfully. Jan 29 11:12:09.596904 systemd[1]: cri-containerd-c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01.scope: Deactivated successfully. 
Jan 29 11:12:09.613967 containerd[1472]: time="2025-01-29T11:12:09.613894670Z" level=info msg="StopContainer for \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\" returns successfully" Jan 29 11:12:09.616028 containerd[1472]: time="2025-01-29T11:12:09.615959100Z" level=info msg="StopPodSandbox for \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\"" Jan 29 11:12:09.616359 containerd[1472]: time="2025-01-29T11:12:09.616115539Z" level=info msg="Container to stop \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:12:09.616359 containerd[1472]: time="2025-01-29T11:12:09.616317856Z" level=info msg="Container to stop \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:12:09.616359 containerd[1472]: time="2025-01-29T11:12:09.616328918Z" level=info msg="Container to stop \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:12:09.616359 containerd[1472]: time="2025-01-29T11:12:09.616337381Z" level=info msg="Container to stop \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:12:09.616635 containerd[1472]: time="2025-01-29T11:12:09.616348111Z" level=info msg="Container to stop \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:12:09.620837 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1-shm.mount: Deactivated successfully. Jan 29 11:12:09.630661 systemd[1]: cri-containerd-2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1.scope: Deactivated successfully. 
Jan 29 11:12:09.647454 containerd[1472]: time="2025-01-29T11:12:09.647221271Z" level=info msg="shim disconnected" id=c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01 namespace=k8s.io Jan 29 11:12:09.647715 containerd[1472]: time="2025-01-29T11:12:09.647688611Z" level=warning msg="cleaning up after shim disconnected" id=c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01 namespace=k8s.io Jan 29 11:12:09.647946 containerd[1472]: time="2025-01-29T11:12:09.647929407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:09.666396 containerd[1472]: time="2025-01-29T11:12:09.665923236Z" level=info msg="shim disconnected" id=2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1 namespace=k8s.io Jan 29 11:12:09.666396 containerd[1472]: time="2025-01-29T11:12:09.666023578Z" level=warning msg="cleaning up after shim disconnected" id=2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1 namespace=k8s.io Jan 29 11:12:09.666396 containerd[1472]: time="2025-01-29T11:12:09.666036854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:09.675023 containerd[1472]: time="2025-01-29T11:12:09.674955391Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:12:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:12:09.676690 containerd[1472]: time="2025-01-29T11:12:09.676479878Z" level=info msg="TearDown network for sandbox \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\" successfully" Jan 29 11:12:09.676690 containerd[1472]: time="2025-01-29T11:12:09.676533236Z" level=info msg="StopPodSandbox for \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\" returns successfully" Jan 29 11:12:09.699254 kubelet[2567]: I0129 11:12:09.699057 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b680a636-9854-4c00-b161-f27da27d89a0-cilium-config-path\") pod \"b680a636-9854-4c00-b161-f27da27d89a0\" (UID: \"b680a636-9854-4c00-b161-f27da27d89a0\") " Jan 29 11:12:09.699254 kubelet[2567]: I0129 11:12:09.699171 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb9bd\" (UniqueName: \"kubernetes.io/projected/b680a636-9854-4c00-b161-f27da27d89a0-kube-api-access-sb9bd\") pod \"b680a636-9854-4c00-b161-f27da27d89a0\" (UID: \"b680a636-9854-4c00-b161-f27da27d89a0\") " Jan 29 11:12:09.702140 containerd[1472]: time="2025-01-29T11:12:09.701052432Z" level=info msg="TearDown network for sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" successfully" Jan 29 11:12:09.702140 containerd[1472]: time="2025-01-29T11:12:09.701235887Z" level=info msg="StopPodSandbox for \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" returns successfully" Jan 29 11:12:09.706557 kubelet[2567]: I0129 11:12:09.706501 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b680a636-9854-4c00-b161-f27da27d89a0-kube-api-access-sb9bd" (OuterVolumeSpecName: "kube-api-access-sb9bd") pod "b680a636-9854-4c00-b161-f27da27d89a0" (UID: "b680a636-9854-4c00-b161-f27da27d89a0"). InnerVolumeSpecName "kube-api-access-sb9bd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:12:09.706998 kubelet[2567]: I0129 11:12:09.706955 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b680a636-9854-4c00-b161-f27da27d89a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b680a636-9854-4c00-b161-f27da27d89a0" (UID: "b680a636-9854-4c00-b161-f27da27d89a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:12:09.800458 kubelet[2567]: I0129 11:12:09.800398 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-bpf-maps\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800458 kubelet[2567]: I0129 11:12:09.800464 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/697d4142-1a88-4854-8dc9-8f615d5853f4-hubble-tls\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800672 kubelet[2567]: I0129 11:12:09.800501 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-config-path\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800672 kubelet[2567]: I0129 11:12:09.800524 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-run\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800672 kubelet[2567]: I0129 11:12:09.800549 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsv8m\" (UniqueName: \"kubernetes.io/projected/697d4142-1a88-4854-8dc9-8f615d5853f4-kube-api-access-qsv8m\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800672 kubelet[2567]: I0129 11:12:09.800573 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-xtables-lock\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800672 kubelet[2567]: I0129 11:12:09.800594 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-host-proc-sys-kernel\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800672 kubelet[2567]: I0129 11:12:09.800615 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-host-proc-sys-net\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800884 kubelet[2567]: I0129 11:12:09.800634 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-lib-modules\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800884 kubelet[2567]: I0129 11:12:09.800661 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/697d4142-1a88-4854-8dc9-8f615d5853f4-clustermesh-secrets\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800884 kubelet[2567]: I0129 11:12:09.800687 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-cgroup\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800884 kubelet[2567]: I0129 11:12:09.800710 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cni-path\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800884 kubelet[2567]: I0129 11:12:09.800752 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-etc-cni-netd\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.800884 kubelet[2567]: I0129 11:12:09.800778 2567 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-hostproc\") pod \"697d4142-1a88-4854-8dc9-8f615d5853f4\" (UID: \"697d4142-1a88-4854-8dc9-8f615d5853f4\") " Jan 29 11:12:09.801092 kubelet[2567]: I0129 11:12:09.800833 2567 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b680a636-9854-4c00-b161-f27da27d89a0-cilium-config-path\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.801092 kubelet[2567]: I0129 11:12:09.800848 2567 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sb9bd\" (UniqueName: \"kubernetes.io/projected/b680a636-9854-4c00-b161-f27da27d89a0-kube-api-access-sb9bd\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.801092 kubelet[2567]: I0129 11:12:09.800934 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-hostproc" (OuterVolumeSpecName: "hostproc") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.801092 kubelet[2567]: I0129 11:12:09.800984 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.803138 kubelet[2567]: I0129 11:12:09.801366 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.803469 kubelet[2567]: I0129 11:12:09.803428 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.804636 kubelet[2567]: I0129 11:12:09.803569 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.804774 kubelet[2567]: I0129 11:12:09.804568 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.804816 kubelet[2567]: I0129 11:12:09.804585 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.805195 kubelet[2567]: I0129 11:12:09.804925 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cni-path" (OuterVolumeSpecName: "cni-path") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.805195 kubelet[2567]: I0129 11:12:09.804944 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.805195 kubelet[2567]: I0129 11:12:09.804969 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:12:09.806006 kubelet[2567]: I0129 11:12:09.805972 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:12:09.806987 kubelet[2567]: I0129 11:12:09.806949 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697d4142-1a88-4854-8dc9-8f615d5853f4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:12:09.808881 kubelet[2567]: I0129 11:12:09.808815 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697d4142-1a88-4854-8dc9-8f615d5853f4-kube-api-access-qsv8m" (OuterVolumeSpecName: "kube-api-access-qsv8m") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "kube-api-access-qsv8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:12:09.809009 kubelet[2567]: I0129 11:12:09.808977 2567 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/697d4142-1a88-4854-8dc9-8f615d5853f4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "697d4142-1a88-4854-8dc9-8f615d5853f4" (UID: "697d4142-1a88-4854-8dc9-8f615d5853f4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:12:09.901520 kubelet[2567]: I0129 11:12:09.901435 2567 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/697d4142-1a88-4854-8dc9-8f615d5853f4-clustermesh-secrets\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901713 kubelet[2567]: I0129 11:12:09.901493 2567 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-cgroup\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901713 kubelet[2567]: I0129 11:12:09.901572 2567 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cni-path\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901713 kubelet[2567]: I0129 11:12:09.901586 2567 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-etc-cni-netd\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901713 kubelet[2567]: I0129 11:12:09.901600 2567 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-hostproc\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901713 kubelet[2567]: I0129 11:12:09.901612 2567 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-bpf-maps\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901713 kubelet[2567]: I0129 11:12:09.901625 2567 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/697d4142-1a88-4854-8dc9-8f615d5853f4-hubble-tls\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901713 kubelet[2567]: I0129 11:12:09.901639 2567 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-config-path\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901713 kubelet[2567]: I0129 11:12:09.901656 2567 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-cilium-run\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901932 kubelet[2567]: I0129 11:12:09.901673 2567 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qsv8m\" (UniqueName: \"kubernetes.io/projected/697d4142-1a88-4854-8dc9-8f615d5853f4-kube-api-access-qsv8m\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901932 kubelet[2567]: I0129 11:12:09.901686 2567 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-xtables-lock\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901932 kubelet[2567]: I0129 11:12:09.901699 2567 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-host-proc-sys-kernel\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901932 kubelet[2567]: I0129 11:12:09.901713 2567 reconciler_common.go:288] 
"Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-host-proc-sys-net\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:09.901932 kubelet[2567]: I0129 11:12:09.901727 2567 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/697d4142-1a88-4854-8dc9-8f615d5853f4-lib-modules\") on node \"ci-4186.1.0-5-b8e0b24f92\" DevicePath \"\"" Jan 29 11:12:10.268952 kubelet[2567]: I0129 11:12:10.268886 2567 scope.go:117] "RemoveContainer" containerID="ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644" Jan 29 11:12:10.279837 containerd[1472]: time="2025-01-29T11:12:10.279765563Z" level=info msg="RemoveContainer for \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\"" Jan 29 11:12:10.281795 systemd[1]: Removed slice kubepods-burstable-pod697d4142_1a88_4854_8dc9_8f615d5853f4.slice - libcontainer container kubepods-burstable-pod697d4142_1a88_4854_8dc9_8f615d5853f4.slice. Jan 29 11:12:10.281951 systemd[1]: kubepods-burstable-pod697d4142_1a88_4854_8dc9_8f615d5853f4.slice: Consumed 9.495s CPU time. Jan 29 11:12:10.285691 systemd[1]: Removed slice kubepods-besteffort-podb680a636_9854_4c00_b161_f27da27d89a0.slice - libcontainer container kubepods-besteffort-podb680a636_9854_4c00_b161_f27da27d89a0.slice. Jan 29 11:12:10.294916 containerd[1472]: time="2025-01-29T11:12:10.294860748Z" level=info msg="RemoveContainer for \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\" returns successfully" Jan 29 11:12:10.295526 kubelet[2567]: I0129 11:12:10.295437 2567 scope.go:117] "RemoveContainer" containerID="373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232" Jan 29 11:12:10.298180 containerd[1472]: time="2025-01-29T11:12:10.297710529Z" level=info msg="RemoveContainer for \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\"" Jan 29 11:12:10.301218 containerd[1472]: time="2025-01-29T11:12:10.301094287Z" level=info msg="RemoveContainer for \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\" returns successfully" Jan 29 11:12:10.304789 kubelet[2567]: I0129 11:12:10.303758 2567 scope.go:117] "RemoveContainer" containerID="d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce" Jan 29 11:12:10.306199 containerd[1472]: time="2025-01-29T11:12:10.306126703Z" level=info msg="RemoveContainer for \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\"" Jan 29 11:12:10.310423 containerd[1472]: time="2025-01-29T11:12:10.310364640Z" level=info msg="RemoveContainer for \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\" returns successfully" Jan 29 11:12:10.311157 kubelet[2567]: I0129 11:12:10.310746 2567 scope.go:117] "RemoveContainer" containerID="a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2" Jan 29 11:12:10.312802 containerd[1472]: time="2025-01-29T11:12:10.312689667Z" level=info msg="RemoveContainer for \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\"" Jan 29 11:12:10.317359 containerd[1472]: time="2025-01-29T11:12:10.317133510Z" level=info msg="RemoveContainer for \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\" returns successfully" Jan 29 11:12:10.317493 kubelet[2567]: I0129 11:12:10.317431 2567 scope.go:117] "RemoveContainer" containerID="31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f" Jan 29 11:12:10.319982 containerd[1472]: time="2025-01-29T11:12:10.319945507Z" level=info 
msg="RemoveContainer for \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\"" Jan 29 11:12:10.326519 containerd[1472]: time="2025-01-29T11:12:10.324822814Z" level=info msg="RemoveContainer for \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\" returns successfully" Jan 29 11:12:10.327470 kubelet[2567]: I0129 11:12:10.327444 2567 scope.go:117] "RemoveContainer" containerID="ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644" Jan 29 11:12:10.328395 containerd[1472]: time="2025-01-29T11:12:10.328154628Z" level=error msg="ContainerStatus for \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\": not found" Jan 29 11:12:10.331402 kubelet[2567]: E0129 11:12:10.330296 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\": not found" containerID="ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644" Jan 29 11:12:10.331402 kubelet[2567]: I0129 11:12:10.330338 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644"} err="failed to get container status \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae66f637d1aea7d4300da47b47b4b7aba5eb7e83357eb476c6163aaf3665f644\": not found" Jan 29 11:12:10.331402 kubelet[2567]: I0129 11:12:10.330440 2567 scope.go:117] "RemoveContainer" containerID="373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232" Jan 29 11:12:10.332072 containerd[1472]: time="2025-01-29T11:12:10.332031034Z" level=error msg="ContainerStatus for \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\": not found" Jan 29 11:12:10.334818 kubelet[2567]: E0129 11:12:10.333666 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\": not found" containerID="373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232" Jan 29 11:12:10.334818 kubelet[2567]: I0129 11:12:10.333716 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232"} err="failed to get container status \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\": rpc error: code = NotFound desc = an error occurred when try to find container \"373be0f747c8308979d57b2e2bc38fc8af86a0ec284263672cba730ac732e232\": not found" Jan 29 11:12:10.334818 kubelet[2567]: I0129 11:12:10.333746 2567 scope.go:117] "RemoveContainer" containerID="d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce" Jan 29 11:12:10.335577 containerd[1472]: time="2025-01-29T11:12:10.335534506Z" level=error msg="ContainerStatus for \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\": not found" Jan 29 11:12:10.337063 kubelet[2567]: E0129 11:12:10.336569 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\": not found" containerID="d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce" Jan 29 11:12:10.337063 kubelet[2567]: I0129 11:12:10.336617 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce"} err="failed to get container status \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2924805bed0cc396c0840983b8f0ab1d9f212b9bba957df245986174cd682ce\": not found" Jan 29 11:12:10.337063 kubelet[2567]: I0129 11:12:10.336642 2567 scope.go:117] "RemoveContainer" containerID="a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2" Jan 29 11:12:10.337063 kubelet[2567]: E0129 11:12:10.337054 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\": not found" containerID="a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2" Jan 29 11:12:10.337249 containerd[1472]: time="2025-01-29T11:12:10.336885904Z" level=error msg="ContainerStatus for \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\": not found" Jan 29 11:12:10.337343 kubelet[2567]: I0129 11:12:10.337086 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2"} err="failed to get container status \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9dde73092a0e29cde43297fbb7b3bae5fca508104d24525444451e31bb17cf2\": not found" Jan 29 11:12:10.337343 kubelet[2567]: I0129 11:12:10.337121 2567 scope.go:117] "RemoveContainer" containerID="31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f" Jan 29 11:12:10.337413 containerd[1472]: time="2025-01-29T11:12:10.337268643Z" level=error msg="ContainerStatus for \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\": not found" Jan 29 11:12:10.337447 kubelet[2567]: E0129 11:12:10.337377 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\": not found" containerID="31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f" Jan 29 11:12:10.337447 kubelet[2567]: I0129 11:12:10.337394 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f"} err="failed to get container status 
\"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\": rpc error: code = NotFound desc = an error occurred when try to find container \"31c46e9e13b58f03d6be0176fe2ab351b3b74e0f762f43562d73f4fb6b77368f\": not found" Jan 29 11:12:10.337447 kubelet[2567]: I0129 11:12:10.337407 2567 scope.go:117] "RemoveContainer" containerID="8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb" Jan 29 11:12:10.339034 containerd[1472]: time="2025-01-29T11:12:10.339001969Z" level=info msg="RemoveContainer for \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\"" Jan 29 11:12:10.341665 containerd[1472]: time="2025-01-29T11:12:10.341613537Z" level=info msg="RemoveContainer for \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\" returns successfully" Jan 29 11:12:10.341941 kubelet[2567]: I0129 11:12:10.341902 2567 scope.go:117] "RemoveContainer" containerID="8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb" Jan 29 11:12:10.342341 containerd[1472]: time="2025-01-29T11:12:10.342305131Z" level=error msg="ContainerStatus for \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\": not found" Jan 29 11:12:10.342557 kubelet[2567]: E0129 11:12:10.342519 2567 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\": not found" containerID="8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb" Jan 29 11:12:10.342618 kubelet[2567]: I0129 11:12:10.342556 2567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb"} err="failed to get container status \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c870602046903fd600db2252f7261636ac5d0297675d5376fe70705c2ecd5eb\": not found" Jan 29 11:12:10.411930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01-rootfs.mount: Deactivated successfully. Jan 29 11:12:10.412046 systemd[1]: var-lib-kubelet-pods-b680a636\x2d9854\x2d4c00\x2db161\x2df27da27d89a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsb9bd.mount: Deactivated successfully. Jan 29 11:12:10.412275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1-rootfs.mount: Deactivated successfully. Jan 29 11:12:10.412389 systemd[1]: var-lib-kubelet-pods-697d4142\x2d1a88\x2d4854\x2d8dc9\x2d8f615d5853f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqsv8m.mount: Deactivated successfully. Jan 29 11:12:10.412492 systemd[1]: var-lib-kubelet-pods-697d4142\x2d1a88\x2d4854\x2d8dc9\x2d8f615d5853f4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:12:10.412577 systemd[1]: var-lib-kubelet-pods-697d4142\x2d1a88\x2d4854\x2d8dc9\x2d8f615d5853f4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 29 11:12:10.794230 kubelet[2567]: I0129 11:12:10.794168 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697d4142-1a88-4854-8dc9-8f615d5853f4" path="/var/lib/kubelet/pods/697d4142-1a88-4854-8dc9-8f615d5853f4/volumes" Jan 29 11:12:10.795642 kubelet[2567]: I0129 11:12:10.795187 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b680a636-9854-4c00-b161-f27da27d89a0" path="/var/lib/kubelet/pods/b680a636-9854-4c00-b161-f27da27d89a0/volumes" Jan 29 11:12:10.819480 containerd[1472]: time="2025-01-29T11:12:10.819438896Z" level=info msg="StopPodSandbox for \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\"" Jan 29 11:12:10.819996 containerd[1472]: time="2025-01-29T11:12:10.819539901Z" level=info msg="TearDown network for sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" successfully" Jan 29 11:12:10.819996 containerd[1472]: time="2025-01-29T11:12:10.819550107Z" level=info msg="StopPodSandbox for \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" returns successfully" Jan 29 11:12:10.820676 containerd[1472]: time="2025-01-29T11:12:10.820647312Z" level=info msg="RemovePodSandbox for \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\"" Jan 29 11:12:10.820676 containerd[1472]: time="2025-01-29T11:12:10.820681958Z" level=info msg="Forcibly stopping sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\"" Jan 29 11:12:10.820946 containerd[1472]: time="2025-01-29T11:12:10.820740427Z" level=info msg="TearDown network for sandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" successfully" Jan 29 11:12:10.825262 containerd[1472]: time="2025-01-29T11:12:10.825195314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:12:10.825843 containerd[1472]: time="2025-01-29T11:12:10.825286089Z" level=info msg="RemovePodSandbox \"2e355efc2ef7f47162589fc1cffa239ec9698d18ea32be7425a83bf1490b22f1\" returns successfully" Jan 29 11:12:10.826366 containerd[1472]: time="2025-01-29T11:12:10.826148302Z" level=info msg="StopPodSandbox for \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\"" Jan 29 11:12:10.826366 containerd[1472]: time="2025-01-29T11:12:10.826265956Z" level=info msg="TearDown network for sandbox \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\" successfully" Jan 29 11:12:10.826366 containerd[1472]: time="2025-01-29T11:12:10.826284617Z" level=info msg="StopPodSandbox for \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\" returns successfully" Jan 29 11:12:10.827230 containerd[1472]: time="2025-01-29T11:12:10.827004953Z" level=info msg="RemovePodSandbox for \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\"" Jan 29 11:12:10.827230 containerd[1472]: time="2025-01-29T11:12:10.827040289Z" level=info msg="Forcibly stopping sandbox \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\"" Jan 29 11:12:10.827230 containerd[1472]: time="2025-01-29T11:12:10.827131270Z" level=info msg="TearDown network for sandbox \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\" successfully" Jan 29 11:12:10.831011 containerd[1472]: time="2025-01-29T11:12:10.830319820Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:12:10.831011 containerd[1472]: time="2025-01-29T11:12:10.830392710Z" level=info msg="RemovePodSandbox \"c6fc10a2f4fe1f6d8c4c1f974fa217f1be49e08474e67c73b583a70ca8679d01\" returns successfully" Jan 29 11:12:10.966678 kubelet[2567]: E0129 11:12:10.966498 2567 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:12:11.331049 sshd[4234]: Connection closed by 139.178.89.65 port 57372 Jan 29 11:12:11.330921 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:11.344330 systemd[1]: sshd@28-143.198.77.23:22-139.178.89.65:57372.service: Deactivated successfully. Jan 29 11:12:11.348413 systemd[1]: session-29.scope: Deactivated successfully. Jan 29 11:12:11.351065 systemd-logind[1448]: Session 29 logged out. Waiting for processes to exit. Jan 29 11:12:11.356671 systemd[1]: Started sshd@29-143.198.77.23:22-139.178.89.65:58828.service - OpenSSH per-connection server daemon (139.178.89.65:58828). Jan 29 11:12:11.358305 systemd-logind[1448]: Removed session 29. Jan 29 11:12:11.429147 sshd[4390]: Accepted publickey for core from 139.178.89.65 port 58828 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:12:11.433834 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:11.443600 systemd-logind[1448]: New session 30 of user core. Jan 29 11:12:11.449437 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 29 11:12:12.264472 sshd[4392]: Connection closed by 139.178.89.65 port 58828 Jan 29 11:12:12.265872 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:12.280578 systemd[1]: sshd@29-143.198.77.23:22-139.178.89.65:58828.service: Deactivated successfully. Jan 29 11:12:12.284680 systemd[1]: session-30.scope: Deactivated successfully. Jan 29 11:12:12.291407 systemd-logind[1448]: Session 30 logged out. Waiting for processes to exit. Jan 29 11:12:12.300875 systemd[1]: Started sshd@30-143.198.77.23:22-139.178.89.65:58834.service - OpenSSH per-connection server daemon (139.178.89.65:58834). Jan 29 11:12:12.306720 systemd-logind[1448]: Removed session 30. Jan 29 11:12:12.339827 kubelet[2567]: E0129 11:12:12.339734 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b680a636-9854-4c00-b161-f27da27d89a0" containerName="cilium-operator" Jan 29 11:12:12.339827 kubelet[2567]: E0129 11:12:12.339821 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="697d4142-1a88-4854-8dc9-8f615d5853f4" containerName="clean-cilium-state" Jan 29 11:12:12.339827 kubelet[2567]: E0129 11:12:12.339834 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="697d4142-1a88-4854-8dc9-8f615d5853f4" containerName="mount-bpf-fs" Jan 29 11:12:12.339827 kubelet[2567]: E0129 11:12:12.339844 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="697d4142-1a88-4854-8dc9-8f615d5853f4" containerName="mount-cgroup" Jan 29 11:12:12.340471 kubelet[2567]: E0129 11:12:12.339855 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="697d4142-1a88-4854-8dc9-8f615d5853f4" containerName="apply-sysctl-overwrites" Jan 29 11:12:12.340471 kubelet[2567]: E0129 11:12:12.339864 2567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="697d4142-1a88-4854-8dc9-8f615d5853f4" containerName="cilium-agent" Jan 29 11:12:12.340471 kubelet[2567]: I0129 11:12:12.339904 2567 memory_manager.go:354] "RemoveStaleState removing state" podUID="697d4142-1a88-4854-8dc9-8f615d5853f4" containerName="cilium-agent" Jan 29 11:12:12.340471 kubelet[2567]: I0129 11:12:12.339913 2567 memory_manager.go:354] "RemoveStaleState removing state" podUID="b680a636-9854-4c00-b161-f27da27d89a0" containerName="cilium-operator" Jan 29 11:12:12.359050 systemd[1]: Created slice kubepods-burstable-pod9ae84ce9_bde6_48ed_a61e_b7a476671233.slice - libcontainer container kubepods-burstable-pod9ae84ce9_bde6_48ed_a61e_b7a476671233.slice. Jan 29 11:12:12.371143 sshd[4401]: Accepted publickey for core from 139.178.89.65 port 58834 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:12:12.373787 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:12.387102 systemd-logind[1448]: New session 31 of user core. Jan 29 11:12:12.393372 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 29 11:12:12.418632 kubelet[2567]: I0129 11:12:12.418576 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-cilium-cgroup\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.418632 kubelet[2567]: I0129 11:12:12.418642 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jfg\" (UniqueName: \"kubernetes.io/projected/9ae84ce9-bde6-48ed-a61e-b7a476671233-kube-api-access-85jfg\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.418911 kubelet[2567]: I0129 11:12:12.418690 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-lib-modules\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.418911 kubelet[2567]: I0129 11:12:12.418720 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-host-proc-sys-kernel\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.418911 kubelet[2567]: I0129 11:12:12.418748 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-etc-cni-netd\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.418911 kubelet[2567]: I0129 11:12:12.418776 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ae84ce9-bde6-48ed-a61e-b7a476671233-cilium-config-path\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.418911 kubelet[2567]: I0129 11:12:12.418808 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-xtables-lock\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.418911 kubelet[2567]: I0129 11:12:12.418836 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-cilium-run\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.419067 kubelet[2567]: I0129 11:12:12.418859 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-hostproc\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.419067 kubelet[2567]: I0129 11:12:12.418883 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ae84ce9-bde6-48ed-a61e-b7a476671233-clustermesh-secrets\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.419067 kubelet[2567]: I0129 11:12:12.418910 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-host-proc-sys-net\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.419067 kubelet[2567]: I0129 11:12:12.418941 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-cni-path\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.419067 kubelet[2567]: I0129 11:12:12.418967 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ae84ce9-bde6-48ed-a61e-b7a476671233-cilium-ipsec-secrets\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.419067 kubelet[2567]: I0129 11:12:12.418993 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ae84ce9-bde6-48ed-a61e-b7a476671233-hubble-tls\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.419247 kubelet[2567]: I0129 11:12:12.419019 2567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ae84ce9-bde6-48ed-a61e-b7a476671233-bpf-maps\") pod \"cilium-6xslf\" (UID: \"9ae84ce9-bde6-48ed-a61e-b7a476671233\") " pod="kube-system/cilium-6xslf" Jan 29 11:12:12.465275 sshd[4403]: Connection closed by 139.178.89.65 port 58834 Jan 29 11:12:12.465769 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:12.478529 systemd[1]: sshd@30-143.198.77.23:22-139.178.89.65:58834.service: Deactivated successfully. Jan 29 11:12:12.483021 systemd[1]: session-31.scope: Deactivated successfully. Jan 29 11:12:12.487000 systemd-logind[1448]: Session 31 logged out. Waiting for processes to exit. Jan 29 11:12:12.502174 systemd[1]: Started sshd@31-143.198.77.23:22-139.178.89.65:58838.service - OpenSSH per-connection server daemon (139.178.89.65:58838). Jan 29 11:12:12.508310 systemd-logind[1448]: Removed session 31. Jan 29 11:12:12.614039 sshd[4409]: Accepted publickey for core from 139.178.89.65 port 58838 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:12:12.617217 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:12.627322 systemd-logind[1448]: New session 32 of user core. Jan 29 11:12:12.634362 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 29 11:12:12.666805 kubelet[2567]: E0129 11:12:12.666714 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:12.667707 containerd[1472]: time="2025-01-29T11:12:12.667622169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xslf,Uid:9ae84ce9-bde6-48ed-a61e-b7a476671233,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:12.713470 containerd[1472]: time="2025-01-29T11:12:12.713307295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:12.713470 containerd[1472]: time="2025-01-29T11:12:12.713406921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:12.713470 containerd[1472]: time="2025-01-29T11:12:12.713430650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:12.713748 containerd[1472]: time="2025-01-29T11:12:12.713570436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:12.750731 systemd[1]: Started cri-containerd-6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a.scope - libcontainer container 6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a. Jan 29 11:12:12.818336 containerd[1472]: time="2025-01-29T11:12:12.818280578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xslf,Uid:9ae84ce9-bde6-48ed-a61e-b7a476671233,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\"" Jan 29 11:12:12.819900 kubelet[2567]: E0129 11:12:12.819837 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:12.828450 containerd[1472]: time="2025-01-29T11:12:12.828381568Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:12:12.848508 containerd[1472]: time="2025-01-29T11:12:12.848366739Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"13686b04e6792cab60481d96dad02069a2fc3d6d5af12a70cebb30529c92baf1\"" Jan 29 11:12:12.850278 containerd[1472]: time="2025-01-29T11:12:12.850205154Z" level=info msg="StartContainer for \"13686b04e6792cab60481d96dad02069a2fc3d6d5af12a70cebb30529c92baf1\"" Jan 29 11:12:12.887471 systemd[1]: Started cri-containerd-13686b04e6792cab60481d96dad02069a2fc3d6d5af12a70cebb30529c92baf1.scope - libcontainer container 13686b04e6792cab60481d96dad02069a2fc3d6d5af12a70cebb30529c92baf1. Jan 29 11:12:12.933724 containerd[1472]: time="2025-01-29T11:12:12.933623146Z" level=info msg="StartContainer for \"13686b04e6792cab60481d96dad02069a2fc3d6d5af12a70cebb30529c92baf1\" returns successfully" Jan 29 11:12:12.951828 systemd[1]: cri-containerd-13686b04e6792cab60481d96dad02069a2fc3d6d5af12a70cebb30529c92baf1.scope: Deactivated successfully. 
Jan 29 11:12:12.996726 containerd[1472]: time="2025-01-29T11:12:12.996578781Z" level=info msg="shim disconnected" id=13686b04e6792cab60481d96dad02069a2fc3d6d5af12a70cebb30529c92baf1 namespace=k8s.io Jan 29 11:12:12.996726 containerd[1472]: time="2025-01-29T11:12:12.996635607Z" level=warning msg="cleaning up after shim disconnected" id=13686b04e6792cab60481d96dad02069a2fc3d6d5af12a70cebb30529c92baf1 namespace=k8s.io Jan 29 11:12:12.996726 containerd[1472]: time="2025-01-29T11:12:12.996643975Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:13.174436 kubelet[2567]: I0129 11:12:13.174278 2567 setters.go:600] "Node became not ready" node="ci-4186.1.0-5-b8e0b24f92" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:12:13Z","lastTransitionTime":"2025-01-29T11:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 11:12:13.282994 kubelet[2567]: E0129 11:12:13.282954 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:13.287315 containerd[1472]: time="2025-01-29T11:12:13.287244748Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:12:13.304697 containerd[1472]: time="2025-01-29T11:12:13.304609690Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"07310f150ff438d69d11ff73daf817e86cffb8153e8c0cc0f263be2570674b46\"" Jan 29 11:12:13.305942 containerd[1472]: time="2025-01-29T11:12:13.305851803Z" level=info msg="StartContainer for \"07310f150ff438d69d11ff73daf817e86cffb8153e8c0cc0f263be2570674b46\"" Jan 29 11:12:13.347803 systemd[1]: Started cri-containerd-07310f150ff438d69d11ff73daf817e86cffb8153e8c0cc0f263be2570674b46.scope - libcontainer container 07310f150ff438d69d11ff73daf817e86cffb8153e8c0cc0f263be2570674b46. Jan 29 11:12:13.379934 containerd[1472]: time="2025-01-29T11:12:13.379885885Z" level=info msg="StartContainer for \"07310f150ff438d69d11ff73daf817e86cffb8153e8c0cc0f263be2570674b46\" returns successfully" Jan 29 11:12:13.395025 systemd[1]: cri-containerd-07310f150ff438d69d11ff73daf817e86cffb8153e8c0cc0f263be2570674b46.scope: Deactivated successfully. 
Jan 29 11:12:13.423056 containerd[1472]: time="2025-01-29T11:12:13.422773598Z" level=info msg="shim disconnected" id=07310f150ff438d69d11ff73daf817e86cffb8153e8c0cc0f263be2570674b46 namespace=k8s.io Jan 29 11:12:13.423056 containerd[1472]: time="2025-01-29T11:12:13.422856962Z" level=warning msg="cleaning up after shim disconnected" id=07310f150ff438d69d11ff73daf817e86cffb8153e8c0cc0f263be2570674b46 namespace=k8s.io Jan 29 11:12:13.423056 containerd[1472]: time="2025-01-29T11:12:13.422871019Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:14.287528 kubelet[2567]: E0129 11:12:14.287484 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:14.293939 containerd[1472]: time="2025-01-29T11:12:14.293404275Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:12:14.322898 containerd[1472]: time="2025-01-29T11:12:14.322826144Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7960e8cc028bc777e6eea7dbe812d833ea0865eb46e6c5644033a8762fcfc9e0\"" Jan 29 11:12:14.323991 containerd[1472]: time="2025-01-29T11:12:14.323897489Z" level=info msg="StartContainer for \"7960e8cc028bc777e6eea7dbe812d833ea0865eb46e6c5644033a8762fcfc9e0\"" Jan 29 11:12:14.388644 systemd[1]: Started cri-containerd-7960e8cc028bc777e6eea7dbe812d833ea0865eb46e6c5644033a8762fcfc9e0.scope - libcontainer container 7960e8cc028bc777e6eea7dbe812d833ea0865eb46e6c5644033a8762fcfc9e0. Jan 29 11:12:14.435486 containerd[1472]: time="2025-01-29T11:12:14.435215745Z" level=info msg="StartContainer for \"7960e8cc028bc777e6eea7dbe812d833ea0865eb46e6c5644033a8762fcfc9e0\" returns successfully" Jan 29 11:12:14.444853 systemd[1]: cri-containerd-7960e8cc028bc777e6eea7dbe812d833ea0865eb46e6c5644033a8762fcfc9e0.scope: Deactivated successfully. Jan 29 11:12:14.475485 containerd[1472]: time="2025-01-29T11:12:14.474729656Z" level=info msg="shim disconnected" id=7960e8cc028bc777e6eea7dbe812d833ea0865eb46e6c5644033a8762fcfc9e0 namespace=k8s.io Jan 29 11:12:14.475485 containerd[1472]: time="2025-01-29T11:12:14.474820106Z" level=warning msg="cleaning up after shim disconnected" id=7960e8cc028bc777e6eea7dbe812d833ea0865eb46e6c5644033a8762fcfc9e0 namespace=k8s.io Jan 29 11:12:14.475485 containerd[1472]: time="2025-01-29T11:12:14.474834186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:14.494778 containerd[1472]: time="2025-01-29T11:12:14.494708996Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:12:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:12:14.538202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7960e8cc028bc777e6eea7dbe812d833ea0865eb46e6c5644033a8762fcfc9e0-rootfs.mount: Deactivated successfully. 
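[Editor's illustrative sketch, not part of the log.] Each Cilium init container above follows the same cycle: CreateContainer, StartContainer, scope deactivation once the short-lived task exits, then a "shim disconnected" cleanup message. One way to correlate the IDs in those shim messages with containerd's own view is to query containerd directly in the "k8s.io" namespace the kubelet uses. The sketch below does this with the containerd Go client; the socket path and namespace are assumed defaults.

```go
// Hedged sketch: list the containers containerd tracks in the "k8s.io"
// namespace, so IDs such as 7960e8cc0... from the shim messages above can
// be correlated with live state. Socket path and namespace are assumed
// defaults, not taken from this log.
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The kubelet's CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			continue
		}
		fmt.Printf("%s  image=%s\n", c.ID(), info.Image)
	}
}
```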
Jan 29 11:12:15.293734 kubelet[2567]: E0129 11:12:15.293218 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:15.297732 containerd[1472]: time="2025-01-29T11:12:15.297656833Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:12:15.331260 containerd[1472]: time="2025-01-29T11:12:15.331193034Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"561b6baee7afe532ea921d0fdb167e9564947202e810c9e64903a70e2729ea50\"" Jan 29 11:12:15.332706 containerd[1472]: time="2025-01-29T11:12:15.332655808Z" level=info msg="StartContainer for \"561b6baee7afe532ea921d0fdb167e9564947202e810c9e64903a70e2729ea50\"" Jan 29 11:12:15.400716 systemd[1]: Started cri-containerd-561b6baee7afe532ea921d0fdb167e9564947202e810c9e64903a70e2729ea50.scope - libcontainer container 561b6baee7afe532ea921d0fdb167e9564947202e810c9e64903a70e2729ea50. Jan 29 11:12:15.456562 systemd[1]: cri-containerd-561b6baee7afe532ea921d0fdb167e9564947202e810c9e64903a70e2729ea50.scope: Deactivated successfully. Jan 29 11:12:15.457444 containerd[1472]: time="2025-01-29T11:12:15.456972692Z" level=info msg="StartContainer for \"561b6baee7afe532ea921d0fdb167e9564947202e810c9e64903a70e2729ea50\" returns successfully" Jan 29 11:12:15.500988 containerd[1472]: time="2025-01-29T11:12:15.500889287Z" level=info msg="shim disconnected" id=561b6baee7afe532ea921d0fdb167e9564947202e810c9e64903a70e2729ea50 namespace=k8s.io Jan 29 11:12:15.501500 containerd[1472]: time="2025-01-29T11:12:15.501371700Z" level=warning msg="cleaning up after shim disconnected" id=561b6baee7afe532ea921d0fdb167e9564947202e810c9e64903a70e2729ea50 namespace=k8s.io Jan 29 11:12:15.501500 containerd[1472]: time="2025-01-29T11:12:15.501403272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:12:15.541143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-561b6baee7afe532ea921d0fdb167e9564947202e810c9e64903a70e2729ea50-rootfs.mount: Deactivated successfully. 
Jan 29 11:12:15.968611 kubelet[2567]: E0129 11:12:15.968487 2567 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:12:16.300689 kubelet[2567]: E0129 11:12:16.298963 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:16.306877 containerd[1472]: time="2025-01-29T11:12:16.306530010Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:12:16.326039 containerd[1472]: time="2025-01-29T11:12:16.325983801Z" level=info msg="CreateContainer within sandbox \"6c1a117551a0b2841ebb9f684d6cb1f5b533d283c05d55519d1b3ead34f09d9a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"664f18252aea1dfb73ae7507a0d94e843c287fc75e801ac1ffd7678f4691507a\"" Jan 29 11:12:16.328321 containerd[1472]: time="2025-01-29T11:12:16.328269467Z" level=info msg="StartContainer for \"664f18252aea1dfb73ae7507a0d94e843c287fc75e801ac1ffd7678f4691507a\"" Jan 29 11:12:16.379548 systemd[1]: Started cri-containerd-664f18252aea1dfb73ae7507a0d94e843c287fc75e801ac1ffd7678f4691507a.scope - libcontainer container 664f18252aea1dfb73ae7507a0d94e843c287fc75e801ac1ffd7678f4691507a. Jan 29 11:12:16.421917 containerd[1472]: time="2025-01-29T11:12:16.421822090Z" level=info msg="StartContainer for \"664f18252aea1dfb73ae7507a0d94e843c287fc75e801ac1ffd7678f4691507a\" returns successfully" Jan 29 11:12:17.115178 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 11:12:17.308481 kubelet[2567]: E0129 11:12:17.308426 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:17.333564 kubelet[2567]: I0129 11:12:17.333361 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6xslf" podStartSLOduration=5.33325327 podStartE2EDuration="5.33325327s" podCreationTimestamp="2025-01-29 11:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:17.331670888 +0000 UTC m=+126.722225307" watchObservedRunningTime="2025-01-29 11:12:17.33325327 +0000 UTC m=+126.723807701" Jan 29 11:12:18.668553 kubelet[2567]: E0129 11:12:18.668503 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:19.356610 kubelet[2567]: E0129 11:12:19.356566 2567 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57896->127.0.0.1:33143: write tcp 127.0.0.1:57896->127.0.0.1:33143: write: broken pipe Jan 29 11:12:20.839309 systemd-networkd[1359]: lxc_health: Link UP Jan 29 11:12:20.856397 systemd-networkd[1359]: lxc_health: Gained carrier Jan 29 11:12:21.980635 systemd-networkd[1359]: lxc_health: Gained IPv6LL Jan 29 11:12:22.671503 kubelet[2567]: E0129 11:12:22.671451 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" 
Jan 29 11:12:23.324495 kubelet[2567]: E0129 11:12:23.324447 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:24.328864 kubelet[2567]: E0129 11:12:24.328820 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:27.790218 kubelet[2567]: E0129 11:12:27.790166 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:12:28.631791 systemd[1]: run-containerd-runc-k8s.io-664f18252aea1dfb73ae7507a0d94e843c287fc75e801ac1ffd7678f4691507a-runc.D9rGmo.mount: Deactivated successfully. Jan 29 11:12:28.761162 sshd[4415]: Connection closed by 139.178.89.65 port 58838 Jan 29 11:12:28.762377 sshd-session[4409]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:28.766757 systemd[1]: sshd@31-143.198.77.23:22-139.178.89.65:58838.service: Deactivated successfully. Jan 29 11:12:28.771084 systemd[1]: session-32.scope: Deactivated successfully. Jan 29 11:12:28.774692 systemd-logind[1448]: Session 32 logged out. Waiting for processes to exit. Jan 29 11:12:28.777331 systemd-logind[1448]: Removed session 32. Jan 29 11:12:30.791640 kubelet[2567]: E0129 11:12:30.790897 2567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"