Feb 13 15:53:12.315732 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025
Feb 13 15:53:12.315789 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:53:12.315812 kernel: BIOS-provided physical RAM map:
Feb 13 15:53:12.315822 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:53:12.315832 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:53:12.315841 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:53:12.315853 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Feb 13 15:53:12.315863 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Feb 13 15:53:12.315873 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:53:12.315883 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:53:12.315898 kernel: NX (Execute Disable) protection: active
Feb 13 15:53:12.315908 kernel: APIC: Static calls initialized
Feb 13 15:53:12.315928 kernel: SMBIOS 2.8 present.
Feb 13 15:53:12.315942 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 15:53:12.315957 kernel: Hypervisor detected: KVM
Feb 13 15:53:12.315968 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:53:12.315991 kernel: kvm-clock: using sched offset of 4289523458 cycles
Feb 13 15:53:12.316003 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:53:12.316015 kernel: tsc: Detected 1995.309 MHz processor
Feb 13 15:53:12.316027 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:53:12.316040 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:53:12.316051 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Feb 13 15:53:12.319198 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:53:12.319219 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:53:12.319246 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:53:12.319257 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Feb 13 15:53:12.319270 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:53:12.319282 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:53:12.319294 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:53:12.319306 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 15:53:12.319318 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:53:12.319330 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:53:12.319342 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:53:12.319358 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:53:12.319370 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 15:53:12.319383 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 15:53:12.319394 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 15:53:12.319406 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 15:53:12.319418 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 15:53:12.319434 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 15:53:12.319453 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 15:53:12.319469 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:53:12.319480 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:53:12.319493 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 15:53:12.319505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 15:53:12.319532 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Feb 13 15:53:12.319546 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Feb 13 15:53:12.319562 kernel: Zone ranges:
Feb 13 15:53:12.319574 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:53:12.319586 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Feb 13 15:53:12.319598 kernel: Normal empty
Feb 13 15:53:12.319609 kernel: Movable zone start for each node
Feb 13 15:53:12.319620 kernel: Early memory node ranges
Feb 13 15:53:12.319631 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:53:12.319642 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Feb 13 15:53:12.319653 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Feb 13 15:53:12.319670 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:53:12.320417 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:53:12.320457 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Feb 13 15:53:12.320648 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:53:12.320667 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:53:12.320749 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:53:12.320764 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:53:12.320932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:53:12.320952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:53:12.320966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:53:12.320994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:53:12.321006 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:53:12.321019 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:53:12.321033 kernel: TSC deadline timer available
Feb 13 15:53:12.321046 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:53:12.321322 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:53:12.321342 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 15:53:12.321373 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:53:12.321391 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:53:12.321423 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:53:12.321439 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:53:12.321456 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:53:12.321473 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:53:12.321490 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 15:53:12.321530 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:53:12.321550 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:53:12.321567 kernel: random: crng init done
Feb 13 15:53:12.321591 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:53:12.321608 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:53:12.321623 kernel: Fallback order for Node 0: 0
Feb 13 15:53:12.321639 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Feb 13 15:53:12.321652 kernel: Policy zone: DMA32
Feb 13 15:53:12.321665 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:53:12.321679 kernel: Memory: 1969156K/2096612K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127196K reserved, 0K cma-reserved)
Feb 13 15:53:12.321696 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:53:12.321708 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:53:12.321726 kernel: ftrace: allocating 37890 entries in 149 pages
Feb 13 15:53:12.321738 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:53:12.321750 kernel: Dynamic Preempt: voluntary
Feb 13 15:53:12.321763 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:53:12.321781 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:53:12.321793 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:53:12.321805 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:53:12.321818 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:53:12.321830 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:53:12.321848 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:53:12.321860 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:53:12.321875 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:53:12.321891 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:53:12.321917 kernel: Console: colour VGA+ 80x25
Feb 13 15:53:12.321933 kernel: printk: console [tty0] enabled
Feb 13 15:53:12.321949 kernel: printk: console [ttyS0] enabled
Feb 13 15:53:12.321962 kernel: ACPI: Core revision 20230628
Feb 13 15:53:12.321974 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:53:12.321994 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:53:12.322006 kernel: x2apic enabled
Feb 13 15:53:12.322020 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:53:12.322034 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:53:12.326179 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985bd6d44e, max_idle_ns: 881590467931 ns
Feb 13 15:53:12.326256 kernel: Calibrating delay loop (skipped) preset value.. 3990.61 BogoMIPS (lpj=1995309)
Feb 13 15:53:12.326273 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 15:53:12.326288 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 15:53:12.326329 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:53:12.326344 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:53:12.326357 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:53:12.326369 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:53:12.326386 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 15:53:12.326398 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:53:12.326412 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:53:12.326426 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 15:53:12.326441 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:53:12.326476 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:53:12.326489 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:53:12.326502 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:53:12.326518 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:53:12.326533 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 15:53:12.326547 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:53:12.326560 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:53:12.326573 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:53:12.326590 kernel: landlock: Up and running.
Feb 13 15:53:12.326604 kernel: SELinux: Initializing.
Feb 13 15:53:12.326617 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:53:12.326630 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:53:12.326643 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Feb 13 15:53:12.326658 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:53:12.326673 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:53:12.326687 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:53:12.326700 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Feb 13 15:53:12.326719 kernel: signal: max sigframe size: 1776
Feb 13 15:53:12.326732 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:53:12.326749 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:53:12.326763 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:53:12.326777 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:53:12.326789 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:53:12.326934 kernel: .... node #0, CPUs: #1
Feb 13 15:53:12.326951 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:53:12.326975 kernel: smpboot: Max logical packages: 1
Feb 13 15:53:12.329168 kernel: smpboot: Total of 2 processors activated (7981.23 BogoMIPS)
Feb 13 15:53:12.329211 kernel: devtmpfs: initialized
Feb 13 15:53:12.329226 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:53:12.329241 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:53:12.329257 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:53:12.329273 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:53:12.329286 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:53:12.329300 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:53:12.329315 kernel: audit: type=2000 audit(1739461990.760:1): state=initialized audit_enabled=0 res=1
Feb 13 15:53:12.329345 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:53:12.329708 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:53:12.329725 kernel: cpuidle: using governor menu
Feb 13 15:53:12.330396 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:53:12.330413 kernel: dca service started, version 1.12.1
Feb 13 15:53:12.330427 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:53:12.330473 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:53:12.330488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:53:12.330501 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:53:12.330529 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:53:12.330542 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:53:12.330557 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:53:12.330574 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:53:12.330588 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:53:12.330601 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:53:12.330615 kernel: ACPI: Interpreter enabled
Feb 13 15:53:12.330629 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:53:12.330645 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:53:12.330665 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:53:12.330678 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:53:12.330690 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 13 15:53:12.330703 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:53:12.335382 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:53:12.335678 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:53:12.335870 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:53:12.335910 kernel: acpiphp: Slot [3] registered
Feb 13 15:53:12.335925 kernel: acpiphp: Slot [4] registered
Feb 13 15:53:12.335938 kernel: acpiphp: Slot [5] registered
Feb 13 15:53:12.335952 kernel: acpiphp: Slot [6] registered
Feb 13 15:53:12.335965 kernel: acpiphp: Slot [7] registered
Feb 13 15:53:12.335979 kernel: acpiphp: Slot [8] registered
Feb 13 15:53:12.335993 kernel: acpiphp: Slot [9] registered
Feb 13 15:53:12.336006 kernel: acpiphp: Slot [10] registered
Feb 13 15:53:12.336015 kernel: acpiphp: Slot [11] registered
Feb 13 15:53:12.336024 kernel: acpiphp: Slot [12] registered
Feb 13 15:53:12.336038 kernel: acpiphp: Slot [13] registered
Feb 13 15:53:12.336047 kernel: acpiphp: Slot [14] registered
Feb 13 15:53:12.336085 kernel: acpiphp: Slot [15] registered
Feb 13 15:53:12.336094 kernel: acpiphp: Slot [16] registered
Feb 13 15:53:12.336103 kernel: acpiphp: Slot [17] registered
Feb 13 15:53:12.336112 kernel: acpiphp: Slot [18] registered
Feb 13 15:53:12.336121 kernel: acpiphp: Slot [19] registered
Feb 13 15:53:12.336129 kernel: acpiphp: Slot [20] registered
Feb 13 15:53:12.336138 kernel: acpiphp: Slot [21] registered
Feb 13 15:53:12.336150 kernel: acpiphp: Slot [22] registered
Feb 13 15:53:12.336159 kernel: acpiphp: Slot [23] registered
Feb 13 15:53:12.336168 kernel: acpiphp: Slot [24] registered
Feb 13 15:53:12.336176 kernel: acpiphp: Slot [25] registered
Feb 13 15:53:12.336185 kernel: acpiphp: Slot [26] registered
Feb 13 15:53:12.336194 kernel: acpiphp: Slot [27] registered
Feb 13 15:53:12.336203 kernel: acpiphp: Slot [28] registered
Feb 13 15:53:12.336211 kernel: acpiphp: Slot [29] registered
Feb 13 15:53:12.336220 kernel: acpiphp: Slot [30] registered
Feb 13 15:53:12.336232 kernel: acpiphp: Slot [31] registered
Feb 13 15:53:12.336241 kernel: PCI host bridge to bus 0000:00
Feb 13 15:53:12.336391 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:53:12.336513 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:53:12.336649 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:53:12.336788 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:53:12.336916 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 15:53:12.337046 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:53:12.340105 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:53:12.340363 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:53:12.340501 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 15:53:12.340622 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 13 15:53:12.340765 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 13 15:53:12.340916 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 13 15:53:12.341101 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 13 15:53:12.344990 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 13 15:53:12.345321 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 15:53:12.345492 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 13 15:53:12.345681 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 15:53:12.345838 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 15:53:12.346008 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 15:53:12.346221 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 15:53:12.346392 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 15:53:12.346551 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 15:53:12.346706 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 15:53:12.346851 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:53:12.346989 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:53:12.347304 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:53:12.347449 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 13 15:53:12.347586 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 15:53:12.347737 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 15:53:12.347924 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:53:12.349188 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 13 15:53:12.349412 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 15:53:12.349582 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 15:53:12.349731 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 15:53:12.349834 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 13 15:53:12.349932 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 15:53:12.350027 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 15:53:12.352622 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:53:12.352919 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 15:53:12.353196 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 15:53:12.353351 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 15:53:12.353533 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:53:12.353668 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 13 15:53:12.353796 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 15:53:12.353925 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 15:53:12.356822 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 15:53:12.357021 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 13 15:53:12.357167 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 15:53:12.357185 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:53:12.357199 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:53:12.357212 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:53:12.357225 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:53:12.357239 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:53:12.357257 kernel: iommu: Default domain type: Translated
Feb 13 15:53:12.357270 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:53:12.357283 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:53:12.357297 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:53:12.357311 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:53:12.357324 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Feb 13 15:53:12.357460 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 15:53:12.357585 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 15:53:12.357713 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:53:12.357729 kernel: vgaarb: loaded
Feb 13 15:53:12.357743 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:53:12.357756 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:53:12.357770 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:53:12.357783 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:53:12.357797 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:53:12.357810 kernel: pnp: PnP ACPI init
Feb 13 15:53:12.357823 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 15:53:12.357849 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:53:12.357863 kernel: NET: Registered PF_INET protocol family
Feb 13 15:53:12.357875 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:53:12.357889 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:53:12.357905 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:53:12.357929 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:53:12.357952 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:53:12.357978 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:53:12.358001 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:53:12.358030 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:53:12.358053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:53:12.358093 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:53:12.358273 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:53:12.359552 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:53:12.359725 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:53:12.359867 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:53:12.360003 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 15:53:12.361366 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 15:53:12.361550 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:53:12.361575 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:53:12.361741 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 40785 usecs
Feb 13 15:53:12.361773 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:53:12.361797 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:53:12.361821 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985bd6d44e, max_idle_ns: 881590467931 ns
Feb 13 15:53:12.361844 kernel: Initialise system trusted keyrings
Feb 13 15:53:12.361859 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:53:12.361892 kernel: Key type asymmetric registered
Feb 13 15:53:12.361906 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:53:12.361920 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:53:12.361934 kernel: io scheduler mq-deadline registered
Feb 13 15:53:12.361948 kernel: io scheduler kyber registered
Feb 13 15:53:12.361964 kernel: io scheduler bfq registered
Feb 13 15:53:12.361980 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:53:12.361993 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 15:53:12.362005 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 15:53:12.362024 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 15:53:12.362037 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:53:12.362050 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:53:12.364184 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:53:12.364204 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:53:12.364214 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:53:12.364597 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 15:53:12.364626 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:53:12.364760 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 15:53:12.364857 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T15:53:11 UTC (1739461991)
Feb 13 15:53:12.364948 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 15:53:12.364959 kernel: intel_pstate: CPU model not supported
Feb 13 15:53:12.364970 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:53:12.364979 kernel: Segment Routing with IPv6
Feb 13 15:53:12.364989 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:53:12.364998 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:53:12.365012 kernel: Key type dns_resolver registered
Feb 13 15:53:12.365022 kernel: IPI shorthand broadcast: enabled
Feb 13 15:53:12.365032 kernel: sched_clock: Marking stable (1608006040, 165193275)->(1877672050, -104472735)
Feb 13 15:53:12.365049 kernel: registered taskstats version 1
Feb 13 15:53:12.365099 kernel: Loading compiled-in X.509 certificates
Feb 13 15:53:12.365136 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb'
Feb 13 15:53:12.365148 kernel: Key type .fscrypt registered
Feb 13 15:53:12.365157 kernel: Key type fscrypt-provisioning registered
Feb 13 15:53:12.365166 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:53:12.365180 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:53:12.365189 kernel: ima: No architecture policies found
Feb 13 15:53:12.365198 kernel: clk: Disabling unused clocks
Feb 13 15:53:12.365206 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 15:53:12.365215 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:53:12.365244 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 15:53:12.365256 kernel: Run /init as init process
Feb 13 15:53:12.365268 kernel: with arguments:
Feb 13 15:53:12.365283 kernel: /init
Feb 13 15:53:12.365299 kernel: with environment:
Feb 13 15:53:12.365313 kernel: HOME=/
Feb 13 15:53:12.365329 kernel: TERM=linux
Feb 13 15:53:12.365343 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:53:12.365364 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:53:12.365385 systemd[1]: Detected virtualization kvm.
Feb 13 15:53:12.365402 systemd[1]: Detected architecture x86-64.
Feb 13 15:53:12.365412 systemd[1]: Running in initrd.
Feb 13 15:53:12.365435 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:53:12.365451 systemd[1]: Hostname set to .
Feb 13 15:53:12.365469 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:53:12.365486 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:53:12.365504 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:53:12.365521 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:53:12.365541 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:53:12.365559 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:53:12.365579 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:53:12.365597 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:53:12.365617 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:53:12.365633 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:53:12.365654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:53:12.365667 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:53:12.365681 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:53:12.365699 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:53:12.365712 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:53:12.365731 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:53:12.365751 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:53:12.365768 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:53:12.365789 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:53:12.365804 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:53:12.365822 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:53:12.365838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:53:12.365852 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:53:12.365869 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:53:12.365886 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:53:12.365897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:53:12.365906 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:53:12.365919 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:53:12.365929 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:53:12.365941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:53:12.365959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:53:12.365978 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:53:12.367411 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 15:53:12.367490 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:53:12.367503 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:53:12.367514 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:53:12.367528 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:53:12.367538 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:53:12.367552 systemd-journald[184]: Journal started
Feb 13 15:53:12.367586 systemd-journald[184]: Runtime Journal (/run/log/journal/075193568a48443cac2d4ff4452c8186) is 4.9M, max 39.3M, 34.4M free.
Feb 13 15:53:12.360940 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 15:53:12.435016 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:53:12.435190 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:53:12.435215 kernel: Bridge firewalling registered
Feb 13 15:53:12.411678 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 15:53:12.433924 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:53:12.436191 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:53:12.437951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:53:12.462639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:53:12.467496 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:53:12.498728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:53:12.520252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:53:12.523176 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:53:12.535485 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:53:12.537690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:53:12.541932 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:53:12.583180 dracut-cmdline[220]: dracut-dracut-053
Feb 13 15:53:12.583093 systemd-resolved[216]: Positive Trust Anchors:
Feb 13 15:53:12.583111 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:53:12.583169 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:53:12.587772 systemd-resolved[216]: Defaulting to hostname 'linux'.
Feb 13 15:53:12.589804 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:53:12.607833 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:53:12.604042 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:53:12.807107 kernel: SCSI subsystem initialized
Feb 13 15:53:12.825888 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:53:12.863827 kernel: iscsi: registered transport (tcp)
Feb 13 15:53:12.906432 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:53:12.906586 kernel: QLogic iSCSI HBA Driver
Feb 13 15:53:13.026154 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:53:13.034737 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:53:13.114468 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:53:13.114606 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:53:13.116920 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:53:13.216469 kernel: raid6: avx2x4 gen() 6256 MB/s
Feb 13 15:53:13.243995 kernel: raid6: avx2x2 gen() 12736 MB/s
Feb 13 15:53:13.272409 kernel: raid6: avx2x1 gen() 7694 MB/s
Feb 13 15:53:13.272576 kernel: raid6: using algorithm avx2x2 gen() 12736 MB/s
Feb 13 15:53:13.299194 kernel: raid6: .... xor() 7054 MB/s, rmw enabled
Feb 13 15:53:13.299465 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:53:13.343097 kernel: xor: automatically using best checksumming function avx
Feb 13 15:53:13.659347 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:53:13.684567 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:53:13.694827 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:53:13.735640 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Feb 13 15:53:13.742799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:53:13.776983 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:53:13.826327 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Feb 13 15:53:13.916170 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:53:13.931490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:53:14.050079 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:53:14.062627 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:53:14.123373 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:53:14.140280 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:53:14.143242 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:53:14.146253 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:53:14.157536 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:53:14.201092 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:53:14.305123 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Feb 13 15:53:14.391936 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 15:53:14.392270 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:53:14.392497 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:53:14.392520 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:53:14.392558 kernel: GPT:9289727 != 125829119
Feb 13 15:53:14.392577 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:53:14.392595 kernel: GPT:9289727 != 125829119
Feb 13 15:53:14.392614 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:53:14.392632 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:53:14.395396 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Feb 13 15:53:14.405882 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Feb 13 15:53:14.470182 kernel: libata version 3.00 loaded.
Feb 13 15:53:14.486274 kernel: ACPI: bus type USB registered
Feb 13 15:53:14.490102 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:53:14.494000 kernel: usbcore: registered new interface driver usbfs
Feb 13 15:53:14.494137 kernel: usbcore: registered new interface driver hub
Feb 13 15:53:14.495599 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:53:14.497484 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:53:14.501919 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:53:14.502610 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:53:14.503617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:53:14.513118 kernel: usbcore: registered new device driver usb
Feb 13 15:53:14.504052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:53:14.510449 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:53:14.532760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:53:14.543476 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 13 15:53:14.603762 kernel: scsi host1: ata_piix
Feb 13 15:53:14.604151 kernel: scsi host2: ata_piix
Feb 13 15:53:14.604371 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 13 15:53:14.604394 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 13 15:53:14.617133 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (448)
Feb 13 15:53:14.637110 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (456)
Feb 13 15:53:14.660518 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:53:14.718747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:53:14.739194 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:53:14.754902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:53:14.764986 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:53:14.765939 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:53:14.777790 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:53:14.798859 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:53:14.840955 disk-uuid[535]: Primary Header is updated.
Feb 13 15:53:14.840955 disk-uuid[535]: Secondary Entries is updated.
Feb 13 15:53:14.840955 disk-uuid[535]: Secondary Header is updated.
Feb 13 15:53:14.864109 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:53:14.868671 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:53:14.925629 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 13 15:53:14.948864 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 13 15:53:14.949449 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 13 15:53:14.949649 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Feb 13 15:53:14.949846 kernel: hub 1-0:1.0: USB hub found
Feb 13 15:53:14.950088 kernel: hub 1-0:1.0: 2 ports detected
Feb 13 15:53:15.894795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:53:15.897415 disk-uuid[541]: The operation has completed successfully.
Feb 13 15:53:15.997414 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:53:15.997637 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:53:16.044044 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:53:16.065553 sh[561]: Success
Feb 13 15:53:16.117397 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:53:16.314574 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:53:16.317869 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:53:16.337223 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:53:16.422011 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a
Feb 13 15:53:16.427442 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:53:16.427616 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:53:16.428708 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:53:16.431931 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:53:16.456621 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:53:16.459331 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:53:16.471226 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:53:16.483748 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:53:16.528110 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:53:16.531458 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:53:16.531590 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:53:16.547151 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:53:16.586789 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:53:16.586152 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:53:16.611469 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:53:16.620539 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:53:16.875309 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:53:16.953960 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:53:17.052212 systemd-networkd[746]: lo: Link UP
Feb 13 15:53:17.052232 systemd-networkd[746]: lo: Gained carrier
Feb 13 15:53:17.056458 systemd-networkd[746]: Enumeration completed
Feb 13 15:53:17.057216 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 15:53:17.057221 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 13 15:53:17.058781 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:53:17.073257 systemd[1]: Reached target network.target - Network.
Feb 13 15:53:17.091697 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:53:17.091705 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:53:17.099487 systemd-networkd[746]: eth0: Link UP
Feb 13 15:53:17.099495 systemd-networkd[746]: eth0: Gained carrier
Feb 13 15:53:17.099519 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 15:53:17.118606 systemd-networkd[746]: eth1: Link UP
Feb 13 15:53:17.118613 systemd-networkd[746]: eth1: Gained carrier
Feb 13 15:53:17.118637 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:53:17.135242 systemd-networkd[746]: eth0: DHCPv4 address 143.198.102.37/20, gateway 143.198.96.1 acquired from 169.254.169.253
Feb 13 15:53:17.143215 ignition[671]: Ignition 2.20.0
Feb 13 15:53:17.144404 ignition[671]: Stage: fetch-offline
Feb 13 15:53:17.144503 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:53:17.144519 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:53:17.144773 ignition[671]: parsed url from cmdline: ""
Feb 13 15:53:17.144780 ignition[671]: no config URL provided
Feb 13 15:53:17.144790 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:53:17.144805 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:53:17.144815 ignition[671]: failed to fetch config: resource requires networking
Feb 13 15:53:17.145192 ignition[671]: Ignition finished successfully
Feb 13 15:53:17.153851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:53:17.155941 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.23/20 acquired from 169.254.169.253
Feb 13 15:53:17.183623 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:53:17.241513 ignition[757]: Ignition 2.20.0
Feb 13 15:53:17.241538 ignition[757]: Stage: fetch
Feb 13 15:53:17.241888 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:53:17.241908 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:53:17.243639 ignition[757]: parsed url from cmdline: ""
Feb 13 15:53:17.243649 ignition[757]: no config URL provided
Feb 13 15:53:17.243664 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:53:17.243693 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:53:17.243752 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 13 15:53:17.267813 ignition[757]: GET result: OK
Feb 13 15:53:17.270487 ignition[757]: parsing config with SHA512: 006fad3eea7caee9c222efa15957e689c4ab1b8d1242692ebe40a00cbf099d5d72aef02d92ca5f62ce6e74f40d723300667434fac978143ace0893f94e06b263
Feb 13 15:53:17.279656 unknown[757]: fetched base config from "system"
Feb 13 15:53:17.282604 unknown[757]: fetched base config from "system"
Feb 13 15:53:17.282621 unknown[757]: fetched user config from "digitalocean"
Feb 13 15:53:17.283655 ignition[757]: fetch: fetch complete
Feb 13 15:53:17.283668 ignition[757]: fetch: fetch passed
Feb 13 15:53:17.283814 ignition[757]: Ignition finished successfully
Feb 13 15:53:17.290812 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:53:17.306947 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:53:17.356232 ignition[764]: Ignition 2.20.0
Feb 13 15:53:17.360240 ignition[764]: Stage: kargs
Feb 13 15:53:17.360719 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:53:17.360747 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:53:17.362873 ignition[764]: kargs: kargs passed
Feb 13 15:53:17.363160 ignition[764]: Ignition finished successfully
Feb 13 15:53:17.370352 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:53:17.377788 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:53:17.431360 ignition[772]: Ignition 2.20.0
Feb 13 15:53:17.431380 ignition[772]: Stage: disks
Feb 13 15:53:17.431626 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:53:17.431638 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:53:17.435654 ignition[772]: disks: disks passed
Feb 13 15:53:17.435786 ignition[772]: Ignition finished successfully
Feb 13 15:53:17.442215 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:53:17.448496 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:53:17.449998 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:53:17.450929 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:53:17.454590 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:53:17.458158 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:53:17.467526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:53:17.521887 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:53:17.530797 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:53:17.541600 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:53:17.795193 kernel: EXT4-fs (vda9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none.
Feb 13 15:53:17.797468 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:53:17.800445 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:53:17.813551 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:53:17.822411 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:53:17.840782 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Feb 13 15:53:17.866037 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 15:53:17.867026 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:53:17.867358 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:53:17.874992 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:53:17.880434 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:53:17.902446 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (789)
Feb 13 15:53:17.923257 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:53:17.923387 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:53:17.923426 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:53:17.923445 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:53:17.927025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:53:18.056595 coreos-metadata[792]: Feb 13 15:53:18.056 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 15:53:18.080123 coreos-metadata[792]: Feb 13 15:53:18.075 INFO Fetch successful
Feb 13 15:53:18.083689 coreos-metadata[791]: Feb 13 15:53:18.082 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 15:53:18.094105 coreos-metadata[792]: Feb 13 15:53:18.093 INFO wrote hostname ci-4186.1.1-d-137a032ec7 to /sysroot/etc/hostname
Feb 13 15:53:18.101192 coreos-metadata[791]: Feb 13 15:53:18.100 INFO Fetch successful
Feb 13 15:53:18.099454 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 15:53:18.104939 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:53:18.114795 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:53:18.125019 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Feb 13 15:53:18.125219 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Feb 13 15:53:18.131348 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:53:18.141171 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:53:18.327828 systemd-networkd[746]: eth1: Gained IPv6LL
Feb 13 15:53:18.412400 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:53:18.427544 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:53:18.443917 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:53:18.461889 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:53:18.471283 kernel: BTRFS info (device vda6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:53:18.551180 ignition[910]: INFO : Ignition 2.20.0
Feb 13 15:53:18.553258 ignition[910]: INFO : Stage: mount
Feb 13 15:53:18.553258 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:53:18.553258 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 15:53:18.568250 ignition[910]: INFO : mount: mount passed
Feb 13 15:53:18.568250 ignition[910]: INFO : Ignition finished successfully
Feb 13 15:53:18.571983 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:53:18.574800 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:53:18.593749 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:53:18.821266 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:53:18.842084 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (922)
Feb 13 15:53:18.845859 kernel: BTRFS info (device vda6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:53:18.846005 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:53:18.849238 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:53:18.864574 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:53:18.872042 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:53:18.939489 ignition[938]: INFO : Ignition 2.20.0 Feb 13 15:53:18.939489 ignition[938]: INFO : Stage: files Feb 13 15:53:18.939489 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:53:18.939489 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 15:53:18.947252 ignition[938]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:53:18.949689 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:53:18.949689 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:53:18.958186 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:53:18.960198 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:53:18.962678 unknown[938]: wrote ssh authorized keys file for user: core Feb 13 15:53:18.966250 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:53:18.970287 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:53:18.975693 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 15:53:19.060113 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:53:19.096399 systemd-networkd[746]: eth0: Gained IPv6LL Feb 13 15:53:19.229385 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 15:53:19.229385 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:53:19.229385 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 15:53:19.577737 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 15:53:19.783828 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 15:53:19.783828 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:53:19.788858 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 15:53:20.104325 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 15:53:20.630527 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 15:53:20.630527 ignition[938]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 15:53:20.644614 ignition[938]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:53:20.644614 ignition[938]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:53:20.644614 ignition[938]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 15:53:20.644614 ignition[938]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:53:20.644614 ignition[938]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:53:20.644614 ignition[938]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:53:20.644614 ignition[938]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:53:20.644614 ignition[938]: INFO : files: files passed Feb 13 15:53:20.644614 ignition[938]: INFO : Ignition finished successfully Feb 13 15:53:20.646749 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:53:20.675171 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:53:20.678474 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:53:20.685762 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:53:20.685972 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 15:53:20.730520 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:53:20.730520 initrd-setup-root-after-ignition[968]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:53:20.744763 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:53:20.744861 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:53:20.750851 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:53:20.761584 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:53:20.849136 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:53:20.849346 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:53:20.851922 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:53:20.857923 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:53:20.860834 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:53:20.882639 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:53:20.905041 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:53:20.916209 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:53:20.961581 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:53:20.961872 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:53:20.967496 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:53:20.968415 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:53:20.969284 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:53:20.970082 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:53:20.970256 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:53:20.971376 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:53:20.975921 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:53:20.976940 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:53:20.981427 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:53:20.982225 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:53:20.982826 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:53:20.986341 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:53:20.987296 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:53:20.987895 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:53:20.988460 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:53:20.988954 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:53:20.991321 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:53:20.992554 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:53:20.993254 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Feb 13 15:53:20.993933 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:53:20.997420 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:53:20.999385 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:53:20.999553 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:53:21.001753 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:53:21.001887 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:53:21.002761 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:53:21.002870 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:53:21.006365 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:53:21.006502 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:53:21.016292 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:53:21.019039 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:53:21.019236 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:53:21.036425 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:53:21.038247 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:53:21.038393 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:53:21.040284 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:53:21.040397 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:53:21.089377 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:53:21.101044 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:53:21.101401 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:53:21.111648 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:53:21.113548 ignition[993]: INFO : Ignition 2.20.0 Feb 13 15:53:21.113548 ignition[993]: INFO : Stage: umount Feb 13 15:53:21.113548 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:53:21.113548 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 13 15:53:21.113548 ignition[993]: INFO : umount: umount passed Feb 13 15:53:21.113548 ignition[993]: INFO : Ignition finished successfully Feb 13 15:53:21.111806 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:53:21.117353 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:53:21.117480 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:53:21.119478 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:53:21.119603 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:53:21.124344 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:53:21.124468 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:53:21.126235 systemd[1]: Stopped target network.target - Network. Feb 13 15:53:21.127681 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:53:21.130633 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:53:21.132163 systemd[1]: Stopped target paths.target - Path Units. 
Feb 13 15:53:21.134202 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:53:21.138224 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:53:21.139410 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:53:21.141051 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:53:21.142490 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:53:21.142565 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:53:21.144504 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:53:21.144645 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:53:21.146744 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:53:21.146875 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:53:21.148353 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:53:21.149311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:53:21.151017 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:53:21.151207 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:53:21.153270 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:53:21.156184 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:53:21.158162 systemd-networkd[746]: eth1: DHCPv6 lease lost Feb 13 15:53:21.171152 systemd-networkd[746]: eth0: DHCPv6 lease lost Feb 13 15:53:21.176913 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:53:21.177164 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:53:21.178879 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:53:21.179204 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:53:21.190749 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:53:21.190877 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:53:21.204418 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:53:21.205279 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:53:21.205425 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:53:21.206490 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:53:21.206601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:53:21.210985 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:53:21.211186 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:53:21.213834 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:53:21.213970 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:53:21.216088 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:53:21.250734 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:53:21.253283 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:53:21.257094 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:53:21.257311 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Feb 13 15:53:21.274681 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:53:21.274841 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:53:21.279936 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:53:21.280044 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:53:21.283598 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:53:21.283746 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:53:21.284843 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:53:21.284992 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:53:21.301611 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:53:21.301795 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:53:21.327135 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:53:21.329291 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:53:21.329445 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:53:21.330383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:53:21.330473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:53:21.357564 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:53:21.359697 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:53:21.362791 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:53:21.371670 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:53:21.403559 systemd[1]: Switching root. Feb 13 15:53:21.456249 systemd-journald[184]: Journal stopped Feb 13 15:53:23.818660 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 13 15:53:23.818841 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:53:23.818871 kernel: SELinux: policy capability open_perms=1 Feb 13 15:53:23.818891 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:53:23.818913 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:53:23.818934 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:53:23.818956 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:53:23.818987 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:53:23.819026 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:53:23.819048 kernel: audit: type=1403 audit(1739462001.844:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:53:23.823605 systemd[1]: Successfully loaded SELinux policy in 76.286ms. Feb 13 15:53:23.823742 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.207ms. Feb 13 15:53:23.823765 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:53:23.823786 systemd[1]: Detected virtualization kvm. Feb 13 15:53:23.823805 systemd[1]: Detected architecture x86-64. Feb 13 15:53:23.823824 systemd[1]: Detected first boot. 
Feb 13 15:53:23.823852 systemd[1]: Hostname set to <ci-4186.1.1-d-137a032ec7>. Feb 13 15:53:23.823871 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:53:23.823889 zram_generator::config[1035]: No configuration found. Feb 13 15:53:23.823912 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:53:23.823931 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:53:23.823948 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:53:23.823966 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:53:23.823987 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:53:23.824010 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:53:23.824027 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:53:23.824051 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:53:23.826729 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:53:23.826765 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:53:23.826786 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:53:23.826805 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:53:23.826825 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:53:23.827121 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:53:23.827175 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:53:23.827196 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:53:23.827216 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:53:23.827237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:53:23.827258 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:53:23.827279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:53:23.827299 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:53:23.827324 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:53:23.827350 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:53:23.827369 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:53:23.827388 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:53:23.827410 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:53:23.827429 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:53:23.827448 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:53:23.827468 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:53:23.827497 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:53:23.827517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:53:23.827538 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Feb 13 15:53:23.827559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:53:23.827580 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:53:23.827602 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:53:23.827623 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:53:23.827644 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:53:23.827665 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:53:23.827690 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:53:23.827711 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:53:23.827731 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:53:23.827754 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:53:23.827776 systemd[1]: Reached target machines.target - Containers. Feb 13 15:53:23.827797 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:53:23.827818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:53:23.827839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:53:23.827863 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:53:23.827904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:53:23.827924 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:53:23.827946 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:53:23.827979 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:53:23.827999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:53:23.828019 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:53:23.828038 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:53:23.833135 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:53:23.833246 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:53:23.833273 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:53:23.833293 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:53:23.833310 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:53:23.833329 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:53:23.833346 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:53:23.833365 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:53:23.833387 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:53:23.833406 systemd[1]: Stopped verity-setup.service. Feb 13 15:53:23.833434 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 15:53:23.833456 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:53:23.833480 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:53:23.833501 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:53:23.833526 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:53:23.833547 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:53:23.833567 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:53:23.833588 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:53:23.833608 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:53:23.833627 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:53:23.833647 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:53:23.833677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:53:23.833701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:53:23.833724 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:53:23.833746 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:53:23.833769 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:53:23.833793 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:53:23.833816 kernel: ACPI: bus type drm_connector registered Feb 13 15:53:23.833845 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:53:23.833867 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:53:23.833947 systemd-journald[1111]: Collecting audit messages is disabled. Feb 13 15:53:23.834001 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:53:23.834023 kernel: fuse: init (API version 7.39) Feb 13 15:53:23.834046 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:53:23.834093 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:53:23.834138 kernel: loop: module loaded Feb 13 15:53:23.834165 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:53:23.834191 systemd-journald[1111]: Journal started Feb 13 15:53:23.834242 systemd-journald[1111]: Runtime Journal (/run/log/journal/075193568a48443cac2d4ff4452c8186) is 4.9M, max 39.3M, 34.4M free. Feb 13 15:53:23.084854 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:53:23.127259 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:53:23.128806 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:53:23.842265 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:53:23.857147 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:53:23.864661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:53:23.890105 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 15:53:23.898124 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:53:23.919436 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:53:23.937655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:53:23.949194 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:53:23.980231 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:53:24.010204 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:53:23.986133 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:53:23.986436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:53:23.988272 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:53:23.988552 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:53:23.989976 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:53:23.990443 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:53:23.991801 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:53:23.993257 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:53:24.111292 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:53:24.126324 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:53:24.130484 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:53:24.165074 kernel: loop0: detected capacity change from 0 to 138184 Feb 13 15:53:24.165378 systemd-journald[1111]: Time spent on flushing to /var/log/journal/075193568a48443cac2d4ff4452c8186 is 37.151ms for 989 entries. Feb 13 15:53:24.165378 systemd-journald[1111]: System Journal (/var/log/journal/075193568a48443cac2d4ff4452c8186) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:53:24.275006 systemd-journald[1111]: Received client request to flush runtime journal. Feb 13 15:53:24.275208 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:53:24.167358 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:53:24.168743 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:53:24.171711 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:53:24.191260 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:53:24.289866 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:53:24.311241 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:53:24.330118 kernel: loop1: detected capacity change from 0 to 205544 Feb 13 15:53:24.331513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:53:24.337258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:53:24.394473 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:53:24.432175 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Feb 13 15:53:24.478352 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:53:24.479762 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:53:24.496144 kernel: loop2: detected capacity change from 0 to 141000 Feb 13 15:53:24.523035 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Feb 13 15:53:24.523087 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Feb 13 15:53:24.562601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:53:24.590255 kernel: loop3: detected capacity change from 0 to 8 Feb 13 15:53:24.618752 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:53:24.622603 kernel: loop4: detected capacity change from 0 to 138184 Feb 13 15:53:24.671105 kernel: loop5: detected capacity change from 0 to 205544 Feb 13 15:53:24.699313 kernel: loop6: detected capacity change from 0 to 141000 Feb 13 15:53:24.759328 kernel: loop7: detected capacity change from 0 to 8 Feb 13 15:53:24.782459 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Feb 13 15:53:24.785195 (sd-merge)[1181]: Merged extensions into '/usr'. Feb 13 15:53:24.802549 systemd[1]: Reloading requested from client PID 1136 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:53:24.802583 systemd[1]: Reloading... Feb 13 15:53:25.023099 zram_generator::config[1207]: No configuration found. Feb 13 15:53:25.489766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:53:25.609752 ldconfig[1132]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:53:25.619881 systemd[1]: Reloading finished in 814 ms. Feb 13 15:53:25.672226 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:53:25.683190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:53:25.706868 systemd[1]: Starting ensure-sysext.service... Feb 13 15:53:25.736669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:53:25.754663 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:53:25.754717 systemd[1]: Reloading... Feb 13 15:53:25.906041 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:53:25.909215 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:53:25.915180 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:53:25.915957 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Feb 13 15:53:25.918249 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Feb 13 15:53:25.928525 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:53:25.928804 systemd-tmpfiles[1251]: Skipping /boot Feb 13 15:53:25.967014 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 15:53:25.967337 systemd-tmpfiles[1251]: Skipping /boot Feb 13 15:53:26.047115 zram_generator::config[1277]: No configuration found. Feb 13 15:53:26.380116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:53:26.484857 systemd[1]: Reloading finished in 724 ms. Feb 13 15:53:26.508376 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:53:26.510847 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:53:26.544112 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:53:26.566765 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:53:26.585640 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:53:26.594567 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:53:26.625836 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:53:26.629615 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:53:26.661893 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:53:26.662264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:53:26.677679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:53:26.702603 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:53:26.717764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:53:26.719227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:53:26.719621 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:53:26.729738 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:53:26.734922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:53:26.735834 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:53:26.736219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:53:26.736360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:53:26.778552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:53:26.781132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:53:26.794332 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:53:26.794655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:53:26.804573 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Feb 13 15:53:26.807052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:53:26.808389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:53:26.809297 systemd[1]: Finished ensure-sysext.service. Feb 13 15:53:26.834466 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:53:26.854317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:53:26.854668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:53:26.861229 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:53:26.870583 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Feb 13 15:53:26.883013 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:53:26.921609 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:53:26.925139 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:53:26.938287 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:53:26.954655 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:53:26.983712 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:53:26.987778 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:53:26.988131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:53:26.996141 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:53:26.998765 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:53:27.001310 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:53:27.029508 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:53:27.053279 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:53:27.068468 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:53:27.149277 augenrules[1380]: No rules Feb 13 15:53:27.152046 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:53:27.159013 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:53:27.273526 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:53:27.485233 systemd-networkd[1369]: lo: Link UP Feb 13 15:53:27.486681 systemd-networkd[1369]: lo: Gained carrier Feb 13 15:53:27.490934 systemd-networkd[1369]: Enumeration completed Feb 13 15:53:27.491657 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:53:27.493735 systemd-networkd[1369]: eth1: Configuring with /run/systemd/network/10-02:b1:64:38:0a:10.network. Feb 13 15:53:27.497812 systemd-networkd[1369]: eth1: Link UP Feb 13 15:53:27.497825 systemd-networkd[1369]: eth1: Gained carrier Feb 13 15:53:27.507690 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Feb 13 15:53:27.528401 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 15:53:27.540467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:53:27.543327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:53:27.553546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:53:27.566306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:53:27.582626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:53:27.586639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:53:27.587217 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:53:27.587282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:53:27.592374 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:53:27.595674 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:53:27.608577 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:53:27.610247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:53:27.621052 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:53:27.622651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:53:27.629604 systemd-resolved[1326]: Positive Trust Anchors: Feb 13 15:53:27.629634 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:53:27.629684 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:53:27.630749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:53:27.650475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:53:27.650826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:53:27.654215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:53:27.654748 systemd-resolved[1326]: Using system hostname 'ci-4186.1.1-d-137a032ec7'. Feb 13 15:53:27.659753 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:53:27.661315 systemd[1]: Reached target network.target - Network. Feb 13 15:53:27.662006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Feb 13 15:53:27.676648 systemd-networkd[1369]: eth0: Configuring with /run/systemd/network/10-8e:c4:39:9b:de:dd.network. Feb 13 15:53:27.682294 systemd-networkd[1369]: eth0: Link UP Feb 13 15:53:27.682472 systemd-networkd[1369]: eth0: Gained carrier Feb 13 15:53:27.686161 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1372) Feb 13 15:53:27.687753 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Feb 13 15:53:27.705400 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Feb 13 15:53:27.724096 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 15:53:27.735542 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Feb 13 15:53:27.805775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:53:27.816180 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 15:53:27.818801 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:53:27.843152 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:53:27.871145 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:53:27.884254 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 15:53:27.905150 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 15:53:27.982105 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:53:28.033777 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:53:28.038123 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 15:53:28.043452 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 15:53:28.061428 kernel: Console: switching to colour dummy device 80x25 Feb 13 15:53:28.061597 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 15:53:28.061624 kernel: [drm] features: -context_init Feb 13 15:53:28.062302 kernel: [drm] number of scanouts: 1 Feb 13 15:53:28.062382 kernel: [drm] number of cap sets: 0 Feb 13 15:53:28.112142 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 15:53:28.112341 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 15:53:28.112369 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 15:53:28.112392 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 15:53:28.131976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:53:28.132295 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:53:28.160762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:53:28.202510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:53:28.202897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:53:28.215687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:53:28.299281 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:53:28.345169 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:53:28.364194 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:53:28.399444 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 15:53:28.477410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:53:28.489718 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:53:28.493741 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:53:28.495201 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:53:28.497225 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:53:28.497533 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:53:28.498081 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:53:28.498459 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:53:28.498598 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:53:28.498707 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:53:28.498744 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:53:28.498836 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:53:28.505590 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:53:28.509443 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:53:28.527924 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:53:28.559932 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:53:28.562547 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:53:28.567511 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:53:28.568593 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:53:28.570586 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:53:28.570636 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:53:28.590952 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:53:28.600445 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:53:28.618458 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:53:28.632470 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:53:28.650805 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:53:28.673557 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:53:28.674528 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:53:28.684152 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:53:28.704613 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:53:28.759514 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:53:28.760808 systemd-networkd[1369]: eth0: Gained IPv6LL Feb 13 15:53:28.762246 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. 
Feb 13 15:53:28.769525 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:53:28.782112 coreos-metadata[1441]: Feb 13 15:53:28.780 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 15:53:28.789644 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:53:28.793075 jq[1443]: false Feb 13 15:53:28.793314 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:53:28.794309 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:53:28.801478 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:53:28.821424 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:53:28.823915 dbus-daemon[1442]: [system] SELinux support is enabled Feb 13 15:53:28.828770 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:53:28.843560 coreos-metadata[1441]: Feb 13 15:53:28.843 INFO Fetch successful Feb 13 15:53:28.852259 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:53:28.857264 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:53:28.892196 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:53:28.893231 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:53:28.933987 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:53:28.964390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:53:28.982456 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:53:28.988439 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:53:28.988529 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:53:28.989400 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:53:28.989479 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 15:53:28.989502 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:53:28.996172 jq[1455]: true Feb 13 15:53:28.997999 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:53:28.999451 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:53:29.015975 systemd-networkd[1369]: eth1: Gained IPv6LL Feb 13 15:53:29.017896 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. 
Feb 13 15:53:29.039623 tar[1459]: linux-amd64/helm Feb 13 15:53:29.077725 update_engine[1452]: I20250213 15:53:29.077548 1452 main.cc:92] Flatcar Update Engine starting Feb 13 15:53:29.081325 update_engine[1452]: I20250213 15:53:29.080942 1452 update_check_scheduler.cc:74] Next update check in 5m32s Feb 13 15:53:29.099425 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:53:29.108118 extend-filesystems[1444]: Found loop4 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found loop5 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found loop6 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found loop7 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found vda Feb 13 15:53:29.108118 extend-filesystems[1444]: Found vda1 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found vda2 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found vda3 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found usr Feb 13 15:53:29.108118 extend-filesystems[1444]: Found vda4 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found vda6 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found vda7 Feb 13 15:53:29.108118 extend-filesystems[1444]: Found vda9 Feb 13 15:53:29.108118 extend-filesystems[1444]: Checking size of /dev/vda9 Feb 13 15:53:29.143834 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:53:29.144602 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:53:29.159725 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:53:29.306921 jq[1473]: true Feb 13 15:53:29.161256 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:53:29.334082 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:53:29.341851 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:53:29.347579 extend-filesystems[1444]: Resized partition /dev/vda9 Feb 13 15:53:29.384118 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:53:29.429597 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 15:53:29.481579 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:53:29.543631 systemd-logind[1451]: New seat seat0. Feb 13 15:53:29.547784 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:53:29.547818 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:53:29.548266 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:53:29.645343 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1373) Feb 13 15:53:29.715148 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 15:53:29.744874 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:53:29.804017 extend-filesystems[1498]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:53:29.804017 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 15:53:29.804017 extend-filesystems[1498]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 15:53:29.835511 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Feb 13 15:53:29.835511 extend-filesystems[1444]: Found vdb Feb 13 15:53:29.806904 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 13 15:53:29.808001 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:53:29.903017 bash[1517]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:53:29.903758 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:53:29.936811 systemd[1]: Starting sshkeys.service... Feb 13 15:53:29.948625 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:53:29.997418 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:53:30.035172 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:53:30.103427 coreos-metadata[1526]: Feb 13 15:53:30.100 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 15:53:30.124105 coreos-metadata[1526]: Feb 13 15:53:30.120 INFO Fetch successful Feb 13 15:53:30.149409 unknown[1526]: wrote ssh authorized keys file for user: core Feb 13 15:53:30.311241 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:53:30.314133 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:53:30.327901 systemd[1]: Finished sshkeys.service. Feb 13 15:53:30.541535 containerd[1480]: time="2025-02-13T15:53:30.540715031Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:53:30.709143 containerd[1480]: time="2025-02-13T15:53:30.707684735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:53:30.716501 containerd[1480]: time="2025-02-13T15:53:30.716390690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:53:30.716501 containerd[1480]: time="2025-02-13T15:53:30.716501651Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:53:30.716858 containerd[1480]: time="2025-02-13T15:53:30.716535910Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:53:30.719125 containerd[1480]: time="2025-02-13T15:53:30.716918075Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:53:30.719125 containerd[1480]: time="2025-02-13T15:53:30.716958510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:53:30.719125 containerd[1480]: time="2025-02-13T15:53:30.717101342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:53:30.719125 containerd[1480]: time="2025-02-13T15:53:30.717121917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:53:30.719696 containerd[1480]: time="2025-02-13T15:53:30.719617849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:53:30.719756 containerd[1480]: time="2025-02-13T15:53:30.719696804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:53:30.719756 containerd[1480]: time="2025-02-13T15:53:30.719743927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:53:30.719857 containerd[1480]: time="2025-02-13T15:53:30.719759455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:53:30.720451 containerd[1480]: time="2025-02-13T15:53:30.720184751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:53:30.721216 containerd[1480]: time="2025-02-13T15:53:30.721177505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:53:30.721548 containerd[1480]: time="2025-02-13T15:53:30.721500748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:53:30.721597 containerd[1480]: time="2025-02-13T15:53:30.721549055Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:53:30.722564 containerd[1480]: time="2025-02-13T15:53:30.722495703Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:53:30.723031 containerd[1480]: time="2025-02-13T15:53:30.722671667Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:53:30.744536 containerd[1480]: time="2025-02-13T15:53:30.744186253Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:53:30.744807 containerd[1480]: time="2025-02-13T15:53:30.744565014Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:53:30.744807 containerd[1480]: time="2025-02-13T15:53:30.744602118Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:53:30.744807 containerd[1480]: time="2025-02-13T15:53:30.744629000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:53:30.744807 containerd[1480]: time="2025-02-13T15:53:30.744654169Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:53:30.746200 containerd[1480]: time="2025-02-13T15:53:30.744999318Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762164222Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762556022Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762593043Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762722094Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762762590Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762795967Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762822141Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762855694Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762888087Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762918195Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762970665Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.762998613Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:53:30.763764 containerd[1480]: time="2025-02-13T15:53:30.763042833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.767855059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.767953084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.767987848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.768013790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.768048908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.768095530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.768123838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.768150326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.768297868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.768328098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.768354085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.773943 containerd[1480]: time="2025-02-13T15:53:30.768384644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.799677 containerd[1480]: time="2025-02-13T15:53:30.798040609Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:53:30.799677 containerd[1480]: time="2025-02-13T15:53:30.798181260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.799677 containerd[1480]: time="2025-02-13T15:53:30.798228752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.799677 containerd[1480]: time="2025-02-13T15:53:30.798304333Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:53:30.799677 containerd[1480]: time="2025-02-13T15:53:30.798421170Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:53:30.801094 containerd[1480]: time="2025-02-13T15:53:30.798459950Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:53:30.801094 containerd[1480]: time="2025-02-13T15:53:30.800187310Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:53:30.801094 containerd[1480]: time="2025-02-13T15:53:30.800253550Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:53:30.801094 containerd[1480]: time="2025-02-13T15:53:30.800282003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:53:30.801094 containerd[1480]: time="2025-02-13T15:53:30.800316235Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:53:30.801094 containerd[1480]: time="2025-02-13T15:53:30.800338718Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:53:30.801094 containerd[1480]: time="2025-02-13T15:53:30.800363781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:53:30.801479 containerd[1480]: time="2025-02-13T15:53:30.800895548Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:53:30.801479 containerd[1480]: time="2025-02-13T15:53:30.800999415Z" level=info msg="Connect containerd service" Feb 13 15:53:30.807819 containerd[1480]: time="2025-02-13T15:53:30.805852636Z" level=info msg="using legacy CRI server" Feb 13 15:53:30.807819 containerd[1480]: time="2025-02-13T15:53:30.805917240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:53:30.807819 containerd[1480]: time="2025-02-13T15:53:30.806201679Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:53:30.817515 containerd[1480]: time="2025-02-13T15:53:30.817436686Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:53:30.819761 
containerd[1480]: time="2025-02-13T15:53:30.819508462Z" level=info msg="Start subscribing containerd event" Feb 13 15:53:30.819761 containerd[1480]: time="2025-02-13T15:53:30.819632948Z" level=info msg="Start recovering state" Feb 13 15:53:30.821233 containerd[1480]: time="2025-02-13T15:53:30.821175066Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:53:30.822957 containerd[1480]: time="2025-02-13T15:53:30.822903169Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:53:30.841689 containerd[1480]: time="2025-02-13T15:53:30.821645754Z" level=info msg="Start event monitor" Feb 13 15:53:30.841689 containerd[1480]: time="2025-02-13T15:53:30.823296623Z" level=info msg="Start snapshots syncer" Feb 13 15:53:30.841689 containerd[1480]: time="2025-02-13T15:53:30.823320552Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:53:30.841689 containerd[1480]: time="2025-02-13T15:53:30.823367754Z" level=info msg="Start streaming server" Feb 13 15:53:30.823723 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:53:30.855321 containerd[1480]: time="2025-02-13T15:53:30.842767545Z" level=info msg="containerd successfully booted in 0.315790s" Feb 13 15:53:31.014315 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:53:31.107339 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:53:31.141269 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:53:31.180424 systemd[1]: Started sshd@0-143.198.102.37:22-218.92.0.157:61707.service - OpenSSH per-connection server daemon (218.92.0.157:61707). Feb 13 15:53:31.229234 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:53:31.229629 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:53:31.265790 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:53:31.355517 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:53:31.398179 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:53:31.412809 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:53:31.414336 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:53:31.588833 kernel: hrtimer: interrupt took 7767326 ns Feb 13 15:53:31.819181 tar[1459]: linux-amd64/LICENSE Feb 13 15:53:31.824263 tar[1459]: linux-amd64/README.md Feb 13 15:53:31.894213 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:53:32.392535 sshd-session[1562]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Feb 13 15:53:32.682261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:53:32.698844 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:53:32.721017 systemd[1]: Startup finished in 1.884s (kernel) + 9.948s (initrd) + 10.951s (userspace) = 22.783s. 
Feb 13 15:53:32.730949 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:53:32.798780 agetty[1558]: failed to open credentials directory Feb 13 15:53:32.803370 agetty[1557]: failed to open credentials directory Feb 13 15:53:34.112549 kubelet[1567]: E0213 15:53:34.112395 1567 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:53:34.117906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:53:34.118696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:53:34.119622 systemd[1]: kubelet.service: Consumed 1.654s CPU time. Feb 13 15:53:35.004753 sshd[1549]: PAM: Permission denied for root from 218.92.0.157 Feb 13 15:53:35.305031 sshd-session[1580]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Feb 13 15:53:36.991362 sshd[1549]: PAM: Permission denied for root from 218.92.0.157 Feb 13 15:53:37.276213 sshd-session[1581]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Feb 13 15:53:38.307703 systemd[1]: Started sshd@1-143.198.102.37:22-139.178.89.65:45212.service - OpenSSH per-connection server daemon (139.178.89.65:45212). Feb 13 15:53:38.428266 sshd[1583]: Accepted publickey for core from 139.178.89.65 port 45212 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:53:38.432214 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:38.464264 systemd-logind[1451]: New session 1 of user core. Feb 13 15:53:38.467891 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:53:38.480729 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:53:38.531695 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:53:38.543188 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:53:38.557813 (systemd)[1587]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:53:38.835302 systemd[1587]: Queued start job for default target default.target. Feb 13 15:53:38.845786 systemd[1587]: Created slice app.slice - User Application Slice. Feb 13 15:53:38.846199 systemd[1587]: Reached target paths.target - Paths. Feb 13 15:53:38.846234 systemd[1587]: Reached target timers.target - Timers. Feb 13 15:53:38.852711 systemd[1587]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:53:38.897447 systemd[1587]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:53:38.897726 systemd[1587]: Reached target sockets.target - Sockets. Feb 13 15:53:38.897752 systemd[1587]: Reached target basic.target - Basic System. Feb 13 15:53:38.897845 systemd[1587]: Reached target default.target - Main User Target. Feb 13 15:53:38.897899 systemd[1587]: Startup finished in 322ms. Feb 13 15:53:38.898478 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:53:38.909673 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 15:53:39.002957 systemd[1]: Started sshd@2-143.198.102.37:22-139.178.89.65:45224.service - OpenSSH per-connection server daemon (139.178.89.65:45224). Feb 13 15:53:39.111440 sshd[1598]: Accepted publickey for core from 139.178.89.65 port 45224 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:53:39.115668 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:39.134887 systemd-logind[1451]: New session 2 of user core. Feb 13 15:53:39.147865 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:53:39.225747 sshd[1600]: Connection closed by 139.178.89.65 port 45224 Feb 13 15:53:39.228260 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:39.241192 sshd[1549]: PAM: Permission denied for root from 218.92.0.157 Feb 13 15:53:39.246009 systemd[1]: sshd@2-143.198.102.37:22-139.178.89.65:45224.service: Deactivated successfully. Feb 13 15:53:39.252475 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:53:39.268652 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:53:39.277774 systemd[1]: Started sshd@3-143.198.102.37:22-139.178.89.65:45234.service - OpenSSH per-connection server daemon (139.178.89.65:45234). Feb 13 15:53:39.281708 systemd-logind[1451]: Removed session 2. Feb 13 15:53:39.397633 sshd[1605]: Accepted publickey for core from 139.178.89.65 port 45234 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:53:39.398268 sshd[1549]: Received disconnect from 218.92.0.157 port 61707:11: [preauth] Feb 13 15:53:39.398268 sshd[1549]: Disconnected from authenticating user root 218.92.0.157 port 61707 [preauth] Feb 13 15:53:39.405425 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:39.403053 systemd[1]: sshd@0-143.198.102.37:22-218.92.0.157:61707.service: Deactivated successfully. Feb 13 15:53:39.430549 systemd-logind[1451]: New session 3 of user core. Feb 13 15:53:39.453176 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:53:39.535028 sshd[1609]: Connection closed by 139.178.89.65 port 45234 Feb 13 15:53:39.537947 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:39.554762 systemd[1]: sshd@3-143.198.102.37:22-139.178.89.65:45234.service: Deactivated successfully. Feb 13 15:53:39.557831 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:53:39.561399 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:53:39.564659 systemd[1]: Started sshd@4-143.198.102.37:22-139.178.89.65:45250.service - OpenSSH per-connection server daemon (139.178.89.65:45250). Feb 13 15:53:39.570250 systemd-logind[1451]: Removed session 3. Feb 13 15:53:39.685172 sshd[1614]: Accepted publickey for core from 139.178.89.65 port 45250 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:53:39.688664 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:39.705194 systemd-logind[1451]: New session 4 of user core. Feb 13 15:53:39.717642 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:53:39.797231 sshd[1616]: Connection closed by 139.178.89.65 port 45250 Feb 13 15:53:39.795808 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:39.818510 systemd[1]: sshd@4-143.198.102.37:22-139.178.89.65:45250.service: Deactivated successfully. 
Feb 13 15:53:39.822648 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:53:39.825761 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:53:39.830045 systemd-logind[1451]: Removed session 4. Feb 13 15:53:39.837879 systemd[1]: Started sshd@5-143.198.102.37:22-139.178.89.65:45260.service - OpenSSH per-connection server daemon (139.178.89.65:45260). Feb 13 15:53:39.956364 sshd[1621]: Accepted publickey for core from 139.178.89.65 port 45260 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:53:39.959735 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:39.976650 systemd-logind[1451]: New session 5 of user core. Feb 13 15:53:39.982674 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:53:40.085267 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:53:40.104005 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:53:40.135511 sudo[1624]: pam_unix(sudo:session): session closed for user root Feb 13 15:53:40.142285 sshd[1623]: Connection closed by 139.178.89.65 port 45260 Feb 13 15:53:40.144255 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:40.161044 systemd[1]: sshd@5-143.198.102.37:22-139.178.89.65:45260.service: Deactivated successfully. Feb 13 15:53:40.169028 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:53:40.181421 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:53:40.195931 systemd[1]: Started sshd@6-143.198.102.37:22-139.178.89.65:45266.service - OpenSSH per-connection server daemon (139.178.89.65:45266). Feb 13 15:53:40.198627 systemd-logind[1451]: Removed session 5. Feb 13 15:53:40.291948 sshd[1629]: Accepted publickey for core from 139.178.89.65 port 45266 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:53:40.293165 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:40.304108 systemd-logind[1451]: New session 6 of user core. Feb 13 15:53:40.312520 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:53:40.387737 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:53:40.388233 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:53:40.402387 sudo[1633]: pam_unix(sudo:session): session closed for user root Feb 13 15:53:40.433016 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:53:40.433745 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:53:40.491398 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:53:40.609948 augenrules[1655]: No rules Feb 13 15:53:40.614043 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:53:40.614436 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:53:40.618493 sudo[1632]: pam_unix(sudo:session): session closed for user root Feb 13 15:53:40.626120 sshd[1631]: Connection closed by 139.178.89.65 port 45266 Feb 13 15:53:40.633220 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Feb 13 15:53:40.654862 systemd[1]: sshd@6-143.198.102.37:22-139.178.89.65:45266.service: Deactivated successfully. 
Feb 13 15:53:40.659483 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:53:40.674073 systemd[1]: Started sshd@7-143.198.102.37:22-139.178.89.65:45272.service - OpenSSH per-connection server daemon (139.178.89.65:45272). Feb 13 15:53:40.696313 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:53:40.705430 systemd-logind[1451]: Removed session 6. Feb 13 15:53:40.846153 sshd[1663]: Accepted publickey for core from 139.178.89.65 port 45272 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:53:40.845452 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:53:40.860309 systemd-logind[1451]: New session 7 of user core. Feb 13 15:53:40.874342 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:53:40.955338 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:53:40.955895 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:53:41.952592 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:53:41.954689 (dockerd)[1684]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:53:42.969930 dockerd[1684]: time="2025-02-13T15:53:42.969712996Z" level=info msg="Starting up" Feb 13 15:53:43.281913 dockerd[1684]: time="2025-02-13T15:53:43.281345342Z" level=info msg="Loading containers: start." Feb 13 15:53:43.876541 kernel: Initializing XFRM netlink socket Feb 13 15:53:43.927242 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection. Feb 13 15:53:45.060693 systemd-resolved[1326]: Clock change detected. Flushing caches. Feb 13 15:53:45.061481 systemd-timesyncd[1341]: Contacted time server 45.61.187.39:123 (2.flatcar.pool.ntp.org). Feb 13 15:53:45.061701 systemd-timesyncd[1341]: Initial clock synchronization to Thu 2025-02-13 15:53:45.060502 UTC. Feb 13 15:53:45.097901 systemd-networkd[1369]: docker0: Link UP Feb 13 15:53:45.366479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:53:45.410194 dockerd[1684]: time="2025-02-13T15:53:45.387733803Z" level=info msg="Loading containers: done." Feb 13 15:53:45.435090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:53:45.476190 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck736630122-merged.mount: Deactivated successfully. Feb 13 15:53:45.484021 dockerd[1684]: time="2025-02-13T15:53:45.481613982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:53:45.484021 dockerd[1684]: time="2025-02-13T15:53:45.482181133Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:53:45.484021 dockerd[1684]: time="2025-02-13T15:53:45.482494990Z" level=info msg="Daemon has completed initialization" Feb 13 15:53:45.694026 dockerd[1684]: time="2025-02-13T15:53:45.693746939Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:53:45.698856 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:53:45.900760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:53:45.921778 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:53:46.041749 kubelet[1883]: E0213 15:53:46.037984 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:53:46.043508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:53:46.043757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:53:47.174918 containerd[1480]: time="2025-02-13T15:53:47.174076274Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:53:48.036720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853730138.mount: Deactivated successfully. Feb 13 15:53:50.886660 containerd[1480]: time="2025-02-13T15:53:50.886477164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:50.890029 containerd[1480]: time="2025-02-13T15:53:50.888694886Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588" Feb 13 15:53:50.892628 containerd[1480]: time="2025-02-13T15:53:50.891160257Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:50.901547 containerd[1480]: time="2025-02-13T15:53:50.901449781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:50.907119 containerd[1480]: time="2025-02-13T15:53:50.907040718Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 3.73288484s" Feb 13 15:53:50.907495 containerd[1480]: time="2025-02-13T15:53:50.907310450Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 15:53:50.910427 containerd[1480]: time="2025-02-13T15:53:50.910368505Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:53:51.079974 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Feb 13 15:53:53.947228 containerd[1480]: time="2025-02-13T15:53:53.946494317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:53.952090 containerd[1480]: time="2025-02-13T15:53:53.951697596Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193" Feb 13 15:53:53.956767 containerd[1480]: time="2025-02-13T15:53:53.955651087Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:53.966527 containerd[1480]: time="2025-02-13T15:53:53.965176631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:53.966527 containerd[1480]: time="2025-02-13T15:53:53.966323875Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 3.055164328s" Feb 13 15:53:53.966527 containerd[1480]: time="2025-02-13T15:53:53.966375755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 15:53:53.969332 containerd[1480]: time="2025-02-13T15:53:53.969256497Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:53:54.180916 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Feb 13 15:53:56.295425 containerd[1480]: time="2025-02-13T15:53:56.295335997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:56.296302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:53:56.300849 containerd[1480]: time="2025-02-13T15:53:56.297791169Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425" Feb 13 15:53:56.302807 containerd[1480]: time="2025-02-13T15:53:56.302728181Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:56.305907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:53:56.313960 containerd[1480]: time="2025-02-13T15:53:56.312991809Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 2.343402838s" Feb 13 15:53:56.313960 containerd[1480]: time="2025-02-13T15:53:56.313388463Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 15:53:56.315226 containerd[1480]: time="2025-02-13T15:53:56.314461380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:53:56.315226 containerd[1480]: time="2025-02-13T15:53:56.314667771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:56.625235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:53:56.628544 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:53:56.752867 kubelet[1963]: E0213 15:53:56.752612 1963 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:53:56.755571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:53:56.756259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:53:58.465273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654989333.mount: Deactivated successfully. 
Feb 13 15:53:59.757842 containerd[1480]: time="2025-02-13T15:53:59.757735786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:59.766634 containerd[1480]: time="2025-02-13T15:53:59.764502232Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 15:53:59.766634 containerd[1480]: time="2025-02-13T15:53:59.765420217Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:59.773060 containerd[1480]: time="2025-02-13T15:53:59.772982894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:59.778625 containerd[1480]: time="2025-02-13T15:53:59.776802591Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 3.462226304s" Feb 13 15:53:59.778860 containerd[1480]: time="2025-02-13T15:53:59.778692230Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 15:53:59.780326 containerd[1480]: time="2025-02-13T15:53:59.779930683Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:53:59.783624 systemd-resolved[1326]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Feb 13 15:54:00.537744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1427382124.mount: Deactivated successfully. 
Feb 13 15:54:03.042034 containerd[1480]: time="2025-02-13T15:54:03.041767525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:03.045354 containerd[1480]: time="2025-02-13T15:54:03.045253954Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 15:54:03.050800 containerd[1480]: time="2025-02-13T15:54:03.050701858Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:03.065620 containerd[1480]: time="2025-02-13T15:54:03.063880553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:03.065620 containerd[1480]: time="2025-02-13T15:54:03.065525111Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.285527582s" Feb 13 15:54:03.065939 containerd[1480]: time="2025-02-13T15:54:03.065906668Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 15:54:03.067946 containerd[1480]: time="2025-02-13T15:54:03.067881761Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:54:03.830338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94335060.mount: Deactivated successfully. 
Feb 13 15:54:03.850916 containerd[1480]: time="2025-02-13T15:54:03.850768404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:03.852851 containerd[1480]: time="2025-02-13T15:54:03.852696470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 15:54:03.860133 containerd[1480]: time="2025-02-13T15:54:03.859503257Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:03.872225 containerd[1480]: time="2025-02-13T15:54:03.869730094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:03.873744 containerd[1480]: time="2025-02-13T15:54:03.872796590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 804.55792ms" Feb 13 15:54:03.873744 containerd[1480]: time="2025-02-13T15:54:03.872885668Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 15:54:03.875030 containerd[1480]: time="2025-02-13T15:54:03.874220323Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:54:04.638110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534686043.mount: Deactivated successfully. Feb 13 15:54:06.972402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:54:06.983521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:54:07.448313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:54:07.462652 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:54:07.701135 kubelet[2088]: E0213 15:54:07.700751 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:54:07.707951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:54:07.708385 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:54:09.606270 containerd[1480]: time="2025-02-13T15:54:09.606177176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:09.609728 containerd[1480]: time="2025-02-13T15:54:09.609366893Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Feb 13 15:54:09.612744 containerd[1480]: time="2025-02-13T15:54:09.611891035Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:09.630928 containerd[1480]: time="2025-02-13T15:54:09.630837225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:09.633350 containerd[1480]: time="2025-02-13T15:54:09.633268402Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.75897888s" Feb 13 15:54:09.633350 containerd[1480]: time="2025-02-13T15:54:09.633343529Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 15:54:13.453715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:54:13.466501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:54:13.559388 systemd[1]: Reloading requested from client PID 2123 ('systemctl') (unit session-7.scope)... Feb 13 15:54:13.559493 systemd[1]: Reloading... Feb 13 15:54:13.949646 zram_generator::config[2166]: No configuration found. Feb 13 15:54:14.216861 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:54:14.371071 systemd[1]: Reloading finished in 810 ms. Feb 13 15:54:14.524614 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:54:14.524763 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:54:14.525891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:54:14.551800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:54:14.880114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:54:14.895572 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:54:14.943481 update_engine[1452]: I20250213 15:54:14.943361 1452 update_attempter.cc:509] Updating boot flags... Feb 13 15:54:15.097749 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2229) Feb 13 15:54:15.200337 kubelet[2217]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:54:15.202060 kubelet[2217]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:54:15.202060 kubelet[2217]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:54:15.203875 kubelet[2217]: I0213 15:54:15.203746 2217 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:54:15.226889 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2227) Feb 13 15:54:15.927052 kubelet[2217]: I0213 15:54:15.925236 2217 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:54:15.927052 kubelet[2217]: I0213 15:54:15.925294 2217 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:54:15.927052 kubelet[2217]: I0213 15:54:15.925777 2217 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:54:15.970959 kubelet[2217]: E0213 15:54:15.969624 2217 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.198.102.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:15.978265 kubelet[2217]: I0213 15:54:15.977503 2217 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:54:16.038509 kubelet[2217]: E0213 15:54:16.035623 2217 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:54:16.038509 kubelet[2217]: I0213 15:54:16.035718 2217 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:54:16.061238 kubelet[2217]: I0213 15:54:16.056328 2217 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:54:16.061238 kubelet[2217]: I0213 15:54:16.059513 2217 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:54:16.061238 kubelet[2217]: I0213 15:54:16.059930 2217 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:54:16.061238 kubelet[2217]: I0213 15:54:16.060015 2217 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.1-d-137a032ec7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:54:16.061853 kubelet[2217]: I0213 15:54:16.060308 2217 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:54:16.061853 kubelet[2217]: I0213 15:54:16.060327 2217 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:54:16.061853 kubelet[2217]: I0213 15:54:16.060547 2217 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:54:16.068627 kubelet[2217]: I0213 15:54:16.067332 2217 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:54:16.068627 kubelet[2217]: I0213 15:54:16.067423 2217 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:54:16.068627 kubelet[2217]: I0213 15:54:16.067488 2217 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:54:16.068627 kubelet[2217]: I0213 15:54:16.067518 2217 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:54:16.078931 kubelet[2217]: W0213 15:54:16.077603 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.102.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-d-137a032ec7&limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:16.078931 kubelet[2217]: E0213 15:54:16.077707 2217 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://143.198.102.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-d-137a032ec7&limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:16.098208 kubelet[2217]: W0213 15:54:16.096438 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.102.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:16.098208 kubelet[2217]: E0213 15:54:16.096560 2217 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.102.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:16.100396 kubelet[2217]: I0213 15:54:16.099987 2217 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:54:16.104915 kubelet[2217]: I0213 15:54:16.104853 2217 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:54:16.107673 kubelet[2217]: W0213 15:54:16.107536 2217 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:54:16.117971 kubelet[2217]: I0213 15:54:16.117084 2217 server.go:1269] "Started kubelet" Feb 13 15:54:16.126234 kubelet[2217]: I0213 15:54:16.125813 2217 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:54:16.154252 kubelet[2217]: I0213 15:54:16.151490 2217 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:54:16.154252 kubelet[2217]: I0213 15:54:16.153301 2217 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:54:16.155946 kubelet[2217]: I0213 15:54:16.155203 2217 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:54:16.155946 kubelet[2217]: I0213 15:54:16.155644 2217 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:54:16.156218 kubelet[2217]: I0213 15:54:16.156056 2217 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:54:16.163646 kubelet[2217]: I0213 15:54:16.160997 2217 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:54:16.171202 kubelet[2217]: E0213 15:54:16.166692 2217 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.1-d-137a032ec7\" not found" Feb 13 15:54:16.171202 kubelet[2217]: I0213 15:54:16.167571 2217 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:54:16.171202 kubelet[2217]: I0213 15:54:16.167668 2217 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:54:16.171202 kubelet[2217]: W0213 15:54:16.168544 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.102.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:16.171202 kubelet[2217]: E0213 15:54:16.168650 2217 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.102.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:16.171202 kubelet[2217]: E0213 15:54:16.168736 2217 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.102.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-d-137a032ec7?timeout=10s\": dial tcp 143.198.102.37:6443: connect: connection refused" interval="200ms" Feb 13 15:54:16.171202 kubelet[2217]: I0213 15:54:16.170369 2217 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:54:16.171202 kubelet[2217]: I0213 15:54:16.171104 2217 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:54:16.174750 kubelet[2217]: I0213 15:54:16.174177 2217 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:54:16.179268 kubelet[2217]: E0213 15:54:16.175387 2217 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.102.37:6443/api/v1/namespaces/default/events\": dial tcp 143.198.102.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.1-d-137a032ec7.1823cf86231fdabf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.1-d-137a032ec7,UID:ci-4186.1.1-d-137a032ec7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.1-d-137a032ec7,},FirstTimestamp:2025-02-13 15:54:16.117000895 +0000 UTC m=+1.194084183,LastTimestamp:2025-02-13 15:54:16.117000895 +0000 UTC m=+1.194084183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.1-d-137a032ec7,}" Feb 13 15:54:16.209450 kubelet[2217]: I0213 15:54:16.209407 2217 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:54:16.209450 kubelet[2217]: I0213 15:54:16.209433 2217 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:54:16.209450 kubelet[2217]: I0213 15:54:16.209462 2217 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:54:16.218133 kubelet[2217]: I0213 15:54:16.218026 2217 policy_none.go:49] "None policy: Start" Feb 13 15:54:16.219524 kubelet[2217]: I0213 15:54:16.219491 2217 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:54:16.219722 kubelet[2217]: I0213 15:54:16.219548 2217 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:54:16.224812 kubelet[2217]: I0213 15:54:16.224464 2217 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:54:16.227772 kubelet[2217]: I0213 15:54:16.227706 2217 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:54:16.228084 kubelet[2217]: I0213 15:54:16.228061 2217 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:54:16.228400 kubelet[2217]: I0213 15:54:16.228368 2217 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:54:16.228696 kubelet[2217]: E0213 15:54:16.228573 2217 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:54:16.249190 kubelet[2217]: W0213 15:54:16.241906 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.102.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:16.249375 kubelet[2217]: E0213 15:54:16.248980 2217 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.102.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:16.261022 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:54:16.267576 kubelet[2217]: E0213 15:54:16.267439 2217 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.1-d-137a032ec7\" not found" Feb 13 15:54:16.279678 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:54:16.288655 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:54:16.304677 kubelet[2217]: I0213 15:54:16.304519 2217 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:54:16.305508 kubelet[2217]: I0213 15:54:16.304886 2217 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:54:16.305508 kubelet[2217]: I0213 15:54:16.304928 2217 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:54:16.305508 kubelet[2217]: I0213 15:54:16.305326 2217 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:54:16.310667 kubelet[2217]: E0213 15:54:16.310620 2217 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.1-d-137a032ec7\" not found" Feb 13 15:54:16.344609 systemd[1]: Created slice kubepods-burstable-pod8e5ed4d332e6c91cf353df9df9b2a2f7.slice - libcontainer container kubepods-burstable-pod8e5ed4d332e6c91cf353df9df9b2a2f7.slice. Feb 13 15:54:16.365866 systemd[1]: Created slice kubepods-burstable-pod5c6be6e4dafc34ddb72e1797b081418e.slice - libcontainer container kubepods-burstable-pod5c6be6e4dafc34ddb72e1797b081418e.slice. 
Feb 13 15:54:16.373668 kubelet[2217]: I0213 15:54:16.370942 2217 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.373668 kubelet[2217]: I0213 15:54:16.371000 2217 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.373668 kubelet[2217]: I0213 15:54:16.371042 2217 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6d2f5abe41c1051e6a310dccd61493c-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-d-137a032ec7\" (UID: \"f6d2f5abe41c1051e6a310dccd61493c\") " pod="kube-system/kube-scheduler-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.373668 kubelet[2217]: I0213 15:54:16.371078 2217 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e5ed4d332e6c91cf353df9df9b2a2f7-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-d-137a032ec7\" (UID: \"8e5ed4d332e6c91cf353df9df9b2a2f7\") " pod="kube-system/kube-apiserver-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.373668 kubelet[2217]: I0213 15:54:16.371104 2217 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e5ed4d332e6c91cf353df9df9b2a2f7-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-d-137a032ec7\" (UID: \"8e5ed4d332e6c91cf353df9df9b2a2f7\") " pod="kube-system/kube-apiserver-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.374237 kubelet[2217]: I0213 15:54:16.371131 2217 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.374237 kubelet[2217]: I0213 15:54:16.371160 2217 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.374237 kubelet[2217]: I0213 15:54:16.371189 2217 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e5ed4d332e6c91cf353df9df9b2a2f7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-d-137a032ec7\" (UID: \"8e5ed4d332e6c91cf353df9df9b2a2f7\") " pod="kube-system/kube-apiserver-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.374237 kubelet[2217]: I0213 15:54:16.371215 2217 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.374471 kubelet[2217]: E0213 15:54:16.374276 2217 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.102.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-d-137a032ec7?timeout=10s\": dial tcp 143.198.102.37:6443: connect: connection refused" interval="400ms" Feb 13 15:54:16.394395 systemd[1]: Created slice kubepods-burstable-podf6d2f5abe41c1051e6a310dccd61493c.slice - libcontainer container kubepods-burstable-podf6d2f5abe41c1051e6a310dccd61493c.slice. Feb 13 15:54:16.409033 kubelet[2217]: I0213 15:54:16.408693 2217 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.409402 kubelet[2217]: E0213 15:54:16.409365 2217 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.102.37:6443/api/v1/nodes\": dial tcp 143.198.102.37:6443: connect: connection refused" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.613388 kubelet[2217]: I0213 15:54:16.611361 2217 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.613388 kubelet[2217]: E0213 15:54:16.611963 2217 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.102.37:6443/api/v1/nodes\": dial tcp 143.198.102.37:6443: connect: connection refused" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:16.663268 kubelet[2217]: E0213 15:54:16.661357 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:16.663646 containerd[1480]: time="2025-02-13T15:54:16.662473745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-d-137a032ec7,Uid:8e5ed4d332e6c91cf353df9df9b2a2f7,Namespace:kube-system,Attempt:0,}" Feb 13 15:54:16.686257 kubelet[2217]: E0213 15:54:16.684316 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:16.687249 containerd[1480]: time="2025-02-13T15:54:16.687098776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-d-137a032ec7,Uid:5c6be6e4dafc34ddb72e1797b081418e,Namespace:kube-system,Attempt:0,}" Feb 13 15:54:16.701670 kubelet[2217]: E0213 15:54:16.700911 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:16.704038 containerd[1480]: time="2025-02-13T15:54:16.703966915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-d-137a032ec7,Uid:f6d2f5abe41c1051e6a310dccd61493c,Namespace:kube-system,Attempt:0,}" Feb 13 15:54:16.777061 kubelet[2217]: E0213 15:54:16.775082 2217 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.102.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-d-137a032ec7?timeout=10s\": dial tcp 143.198.102.37:6443: 
connect: connection refused" interval="800ms" Feb 13 15:54:17.016113 kubelet[2217]: I0213 15:54:17.015903 2217 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:17.019411 kubelet[2217]: E0213 15:54:17.018665 2217 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.102.37:6443/api/v1/nodes\": dial tcp 143.198.102.37:6443: connect: connection refused" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:17.116015 kubelet[2217]: W0213 15:54:17.115794 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.102.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:17.116015 kubelet[2217]: E0213 15:54:17.115900 2217 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.102.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:17.271741 kubelet[2217]: W0213 15:54:17.270352 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.102.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:17.271741 kubelet[2217]: E0213 15:54:17.270481 2217 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.102.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:17.519151 kubelet[2217]: W0213 15:54:17.518935 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.102.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-d-137a032ec7&limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:17.519151 kubelet[2217]: E0213 15:54:17.519063 2217 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.102.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-d-137a032ec7&limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:17.598456 kubelet[2217]: E0213 15:54:17.597944 2217 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.102.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-d-137a032ec7?timeout=10s\": dial tcp 143.198.102.37:6443: connect: connection refused" interval="1.6s" Feb 13 15:54:17.598456 kubelet[2217]: W0213 15:54:17.598088 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.102.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:17.598456 kubelet[2217]: E0213 15:54:17.598244 2217 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
Get \"https://143.198.102.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:17.735071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390648851.mount: Deactivated successfully. Feb 13 15:54:17.797001 containerd[1480]: time="2025-02-13T15:54:17.796877998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:54:17.812954 containerd[1480]: time="2025-02-13T15:54:17.812843009Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:54:17.823685 kubelet[2217]: I0213 15:54:17.823257 2217 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:17.824388 kubelet[2217]: E0213 15:54:17.824277 2217 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://143.198.102.37:6443/api/v1/nodes\": dial tcp 143.198.102.37:6443: connect: connection refused" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:17.828638 containerd[1480]: time="2025-02-13T15:54:17.827558448Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:54:17.836023 containerd[1480]: time="2025-02-13T15:54:17.835283975Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:54:17.838473 containerd[1480]: time="2025-02-13T15:54:17.838344524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:54:17.840416 containerd[1480]: time="2025-02-13T15:54:17.840342158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:54:17.845641 containerd[1480]: time="2025-02-13T15:54:17.845060865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.177662793s" Feb 13 15:54:17.847880 containerd[1480]: time="2025-02-13T15:54:17.847389723Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:54:17.875740 containerd[1480]: time="2025-02-13T15:54:17.874651906Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:54:17.891058 containerd[1480]: time="2025-02-13T15:54:17.890514750Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.186276895s" Feb 13 
15:54:17.905398 containerd[1480]: time="2025-02-13T15:54:17.905051691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.217791755s" Feb 13 15:54:18.158925 kubelet[2217]: E0213 15:54:18.149283 2217 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.198.102.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:18.268625 containerd[1480]: time="2025-02-13T15:54:18.264436967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:54:18.268625 containerd[1480]: time="2025-02-13T15:54:18.264607135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:54:18.268625 containerd[1480]: time="2025-02-13T15:54:18.264637835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:18.268625 containerd[1480]: time="2025-02-13T15:54:18.264932946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:18.311427 containerd[1480]: time="2025-02-13T15:54:18.310746258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:54:18.311427 containerd[1480]: time="2025-02-13T15:54:18.310851602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:54:18.311427 containerd[1480]: time="2025-02-13T15:54:18.310873999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:18.311427 containerd[1480]: time="2025-02-13T15:54:18.311005862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:18.318701 containerd[1480]: time="2025-02-13T15:54:18.318229148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:54:18.318701 containerd[1480]: time="2025-02-13T15:54:18.318344160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:54:18.318701 containerd[1480]: time="2025-02-13T15:54:18.318363441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:18.318701 containerd[1480]: time="2025-02-13T15:54:18.318509375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:18.363214 systemd[1]: Started cri-containerd-8d819d661d7889dc2b2e52e002940190b8697b02715611b18cbe5f1665448f5b.scope - libcontainer container 8d819d661d7889dc2b2e52e002940190b8697b02715611b18cbe5f1665448f5b. Feb 13 15:54:18.380742 systemd[1]: Started cri-containerd-abcd737659db9e43858ab25a5971346a262ca179f12f1026568e580203a6da58.scope - libcontainer container abcd737659db9e43858ab25a5971346a262ca179f12f1026568e580203a6da58. Feb 13 15:54:18.412673 systemd[1]: Started cri-containerd-072ae95a2dc470d16dbbdc60e31df3485a15896688ebb63b1d4a9252e1fb394e.scope - libcontainer container 072ae95a2dc470d16dbbdc60e31df3485a15896688ebb63b1d4a9252e1fb394e. Feb 13 15:54:18.540788 containerd[1480]: time="2025-02-13T15:54:18.540447555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.1-d-137a032ec7,Uid:8e5ed4d332e6c91cf353df9df9b2a2f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d819d661d7889dc2b2e52e002940190b8697b02715611b18cbe5f1665448f5b\"" Feb 13 15:54:18.572550 kubelet[2217]: E0213 15:54:18.568379 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:18.582184 containerd[1480]: time="2025-02-13T15:54:18.582050649Z" level=info msg="CreateContainer within sandbox \"8d819d661d7889dc2b2e52e002940190b8697b02715611b18cbe5f1665448f5b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:54:18.632290 containerd[1480]: time="2025-02-13T15:54:18.631789626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.1-d-137a032ec7,Uid:f6d2f5abe41c1051e6a310dccd61493c,Namespace:kube-system,Attempt:0,} returns sandbox id \"abcd737659db9e43858ab25a5971346a262ca179f12f1026568e580203a6da58\"" Feb 13 15:54:18.635243 kubelet[2217]: E0213 15:54:18.635193 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:18.648496 containerd[1480]: time="2025-02-13T15:54:18.648259178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.1-d-137a032ec7,Uid:5c6be6e4dafc34ddb72e1797b081418e,Namespace:kube-system,Attempt:0,} returns sandbox id \"072ae95a2dc470d16dbbdc60e31df3485a15896688ebb63b1d4a9252e1fb394e\"" Feb 13 15:54:18.650241 kubelet[2217]: E0213 15:54:18.650181 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:18.652980 containerd[1480]: time="2025-02-13T15:54:18.651943045Z" level=info msg="CreateContainer within sandbox \"abcd737659db9e43858ab25a5971346a262ca179f12f1026568e580203a6da58\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:54:18.655682 containerd[1480]: time="2025-02-13T15:54:18.655186796Z" level=info msg="CreateContainer within sandbox \"072ae95a2dc470d16dbbdc60e31df3485a15896688ebb63b1d4a9252e1fb394e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:54:18.761893 containerd[1480]: time="2025-02-13T15:54:18.761791603Z" level=info msg="CreateContainer within sandbox \"8d819d661d7889dc2b2e52e002940190b8697b02715611b18cbe5f1665448f5b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"8bb6ebbb20b769d93c39bed80474ff1c2a347542d463d0e90cd39e539fb583b2\"" Feb 13 15:54:18.772542 containerd[1480]: time="2025-02-13T15:54:18.765620811Z" level=info msg="StartContainer for \"8bb6ebbb20b769d93c39bed80474ff1c2a347542d463d0e90cd39e539fb583b2\"" Feb 13 15:54:18.776429 containerd[1480]: time="2025-02-13T15:54:18.773019444Z" level=info msg="CreateContainer within sandbox \"abcd737659db9e43858ab25a5971346a262ca179f12f1026568e580203a6da58\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e8ba558ceae71a7cb5e12d4f0276e5cb67aa9031ae6f4f56a3e3da082cd897df\"" Feb 13 15:54:18.776429 containerd[1480]: time="2025-02-13T15:54:18.773743654Z" level=info msg="StartContainer for \"e8ba558ceae71a7cb5e12d4f0276e5cb67aa9031ae6f4f56a3e3da082cd897df\"" Feb 13 15:54:18.798463 containerd[1480]: time="2025-02-13T15:54:18.798280384Z" level=info msg="CreateContainer within sandbox \"072ae95a2dc470d16dbbdc60e31df3485a15896688ebb63b1d4a9252e1fb394e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"460d0df48d8f05419c7c5e58f2decb8aef3cf6a9b2bbbf58e416f7cde1f824c7\"" Feb 13 15:54:18.800179 containerd[1480]: time="2025-02-13T15:54:18.799698553Z" level=info msg="StartContainer for \"460d0df48d8f05419c7c5e58f2decb8aef3cf6a9b2bbbf58e416f7cde1f824c7\"" Feb 13 15:54:18.887882 systemd[1]: Started cri-containerd-e8ba558ceae71a7cb5e12d4f0276e5cb67aa9031ae6f4f56a3e3da082cd897df.scope - libcontainer container e8ba558ceae71a7cb5e12d4f0276e5cb67aa9031ae6f4f56a3e3da082cd897df. Feb 13 15:54:18.920057 systemd[1]: Started cri-containerd-8bb6ebbb20b769d93c39bed80474ff1c2a347542d463d0e90cd39e539fb583b2.scope - libcontainer container 8bb6ebbb20b769d93c39bed80474ff1c2a347542d463d0e90cd39e539fb583b2. Feb 13 15:54:18.958995 systemd[1]: Started cri-containerd-460d0df48d8f05419c7c5e58f2decb8aef3cf6a9b2bbbf58e416f7cde1f824c7.scope - libcontainer container 460d0df48d8f05419c7c5e58f2decb8aef3cf6a9b2bbbf58e416f7cde1f824c7. 
Feb 13 15:54:19.120106 containerd[1480]: time="2025-02-13T15:54:19.117522509Z" level=info msg="StartContainer for \"8bb6ebbb20b769d93c39bed80474ff1c2a347542d463d0e90cd39e539fb583b2\" returns successfully" Feb 13 15:54:19.120106 containerd[1480]: time="2025-02-13T15:54:19.117647090Z" level=info msg="StartContainer for \"460d0df48d8f05419c7c5e58f2decb8aef3cf6a9b2bbbf58e416f7cde1f824c7\" returns successfully" Feb 13 15:54:19.128483 kubelet[2217]: W0213 15:54:19.126151 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.102.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:19.128483 kubelet[2217]: E0213 15:54:19.126232 2217 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.102.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:19.174227 containerd[1480]: time="2025-02-13T15:54:19.174152023Z" level=info msg="StartContainer for \"e8ba558ceae71a7cb5e12d4f0276e5cb67aa9031ae6f4f56a3e3da082cd897df\" returns successfully" Feb 13 15:54:19.202043 kubelet[2217]: E0213 15:54:19.199316 2217 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.102.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.1-d-137a032ec7?timeout=10s\": dial tcp 143.198.102.37:6443: connect: connection refused" interval="3.2s" Feb 13 15:54:19.272430 kubelet[2217]: E0213 15:54:19.272297 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:19.279830 kubelet[2217]: E0213 15:54:19.279771 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:19.285300 kubelet[2217]: E0213 15:54:19.285003 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:19.294096 kubelet[2217]: W0213 15:54:19.293959 2217 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.102.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-d-137a032ec7&limit=500&resourceVersion=0": dial tcp 143.198.102.37:6443: connect: connection refused Feb 13 15:54:19.294096 kubelet[2217]: E0213 15:54:19.294053 2217 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.102.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.1-d-137a032ec7&limit=500&resourceVersion=0\": dial tcp 143.198.102.37:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:54:19.428429 kubelet[2217]: I0213 15:54:19.428252 2217 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:20.290676 kubelet[2217]: E0213 15:54:20.290563 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:22.357318 kubelet[2217]: E0213 15:54:22.357256 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:22.862968 kubelet[2217]: E0213 15:54:22.862904 2217 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.1-d-137a032ec7\" not found" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:22.917050 kubelet[2217]: E0213 15:54:22.916108 2217 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.1-d-137a032ec7.1823cf86231fdabf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.1-d-137a032ec7,UID:ci-4186.1.1-d-137a032ec7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.1-d-137a032ec7,},FirstTimestamp:2025-02-13 15:54:16.117000895 +0000 UTC m=+1.194084183,LastTimestamp:2025-02-13 15:54:16.117000895 +0000 UTC m=+1.194084183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.1-d-137a032ec7,}" Feb 13 15:54:22.978320 kubelet[2217]: E0213 15:54:22.978037 2217 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.1-d-137a032ec7.1823cf862885913d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.1-d-137a032ec7,UID:ci-4186.1.1-d-137a032ec7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4186.1.1-d-137a032ec7 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4186.1.1-d-137a032ec7,},FirstTimestamp:2025-02-13 15:54:16.207552829 +0000 UTC m=+1.284636100,LastTimestamp:2025-02-13 15:54:16.207552829 +0000 UTC m=+1.284636100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.1-d-137a032ec7,}" Feb 13 15:54:23.030395 kubelet[2217]: I0213 15:54:23.016665 2217 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:23.056783 kubelet[2217]: E0213 15:54:23.056615 2217 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.1-d-137a032ec7.1823cf862885afd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.1-d-137a032ec7,UID:ci-4186.1.1-d-137a032ec7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4186.1.1-d-137a032ec7 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4186.1.1-d-137a032ec7,},FirstTimestamp:2025-02-13 15:54:16.207560657 +0000 UTC m=+1.284643922,LastTimestamp:2025-02-13 15:54:16.207560657 +0000 UTC m=+1.284643922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.1-d-137a032ec7,}" Feb 13 15:54:23.097237 kubelet[2217]: I0213 15:54:23.097147 2217 apiserver.go:52] "Watching apiserver" Feb 13 15:54:23.126668 kubelet[2217]: E0213 
15:54:23.126275 2217 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.1-d-137a032ec7.1823cf862885ca6d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.1-d-137a032ec7,UID:ci-4186.1.1-d-137a032ec7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ci-4186.1.1-d-137a032ec7 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ci-4186.1.1-d-137a032ec7,},FirstTimestamp:2025-02-13 15:54:16.207567469 +0000 UTC m=+1.284650739,LastTimestamp:2025-02-13 15:54:16.207567469 +0000 UTC m=+1.284650739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.1-d-137a032ec7,}" Feb 13 15:54:23.168030 kubelet[2217]: I0213 15:54:23.167906 2217 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:54:23.289436 kubelet[2217]: E0213 15:54:23.285201 2217 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.1-d-137a032ec7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:23.289436 kubelet[2217]: E0213 15:54:23.285627 2217 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:25.762899 systemd[1]: Started sshd@8-143.198.102.37:22-218.92.0.209:15310.service - OpenSSH per-connection server daemon (218.92.0.209:15310). Feb 13 15:54:26.159079 sshd[2508]: Connection reset by 218.92.0.209 port 15310 [preauth] Feb 13 15:54:26.161095 systemd[1]: sshd@8-143.198.102.37:22-218.92.0.209:15310.service: Deactivated successfully. Feb 13 15:54:26.522154 systemd[1]: Reloading requested from client PID 2515 ('systemctl') (unit session-7.scope)... Feb 13 15:54:26.522481 systemd[1]: Reloading... Feb 13 15:54:26.899319 zram_generator::config[2558]: No configuration found. Feb 13 15:54:27.280113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:54:27.575215 systemd[1]: Reloading finished in 1051 ms. Feb 13 15:54:27.672139 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:54:27.693919 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:54:27.694357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:54:27.694452 systemd[1]: kubelet.service: Consumed 1.394s CPU time, 111.1M memory peak, 0B memory swap peak. Feb 13 15:54:27.710762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:54:28.138115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:54:28.180551 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:54:28.438136 kubelet[2606]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
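Unlike the first kubelet instance, whose bootstrap CSR failed with connection refused until the apiserver static pod came up, the restarted kubelet (PID 2606) loads a rotated client certificate from disk — see the certificate_store line that follows. A small standalone check, assuming the standard /var/lib/kubelet/pki/kubelet-client-current.pem path (certificate and key concatenated in one PEM file), that prints the certificate's validity window:

```go
// Inspect the kubelet's rotated client certificate. Not part of kubelet;
// a quick diagnostic using only the Go standard library.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// The file holds CERTIFICATE and PRIVATE KEY blocks back to back;
	// walk all blocks and parse only the certificates.
	for len(data) > 0 {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue // skip the key block
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```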
Feb 13 15:54:28.438136 kubelet[2606]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:54:28.438136 kubelet[2606]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:54:28.438136 kubelet[2606]: I0213 15:54:28.430691 2606 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:54:28.464074 kubelet[2606]: I0213 15:54:28.463967 2606 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:54:28.466177 kubelet[2606]: I0213 15:54:28.464733 2606 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:54:28.467612 kubelet[2606]: I0213 15:54:28.467294 2606 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:54:28.473639 kubelet[2606]: I0213 15:54:28.472766 2606 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:54:28.481320 kubelet[2606]: I0213 15:54:28.480789 2606 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:54:28.501477 kubelet[2606]: E0213 15:54:28.501412 2606 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:54:28.503260 kubelet[2606]: I0213 15:54:28.502628 2606 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:54:28.511461 kubelet[2606]: I0213 15:54:28.511411 2606 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:54:28.512786 kubelet[2606]: I0213 15:54:28.512662 2606 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:54:28.515553 kubelet[2606]: I0213 15:54:28.514135 2606 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:54:28.515553 kubelet[2606]: I0213 15:54:28.514215 2606 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.1-d-137a032ec7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 15:54:28.515553 kubelet[2606]: I0213 15:54:28.514687 2606 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:54:28.515553 kubelet[2606]: I0213 15:54:28.514707 2606 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:54:28.516019 kubelet[2606]: I0213 15:54:28.514778 2606 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:54:28.516019 kubelet[2606]: I0213 15:54:28.514993 2606 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:54:28.516019 kubelet[2606]: I0213 15:54:28.515025 2606 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:54:28.516019 kubelet[2606]: I0213 15:54:28.515192 2606 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:54:28.516019 kubelet[2606]: I0213 15:54:28.515224 2606 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:54:28.558007 kubelet[2606]: I0213 15:54:28.557953 2606 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:54:28.560443 kubelet[2606]: I0213 15:54:28.560367 2606 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:54:28.588117 kubelet[2606]: I0213 15:54:28.585230 2606 server.go:1269] "Started kubelet" Feb 13 15:54:28.588117 kubelet[2606]: I0213 15:54:28.586633 2606 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:54:28.589401 
kubelet[2606]: I0213 15:54:28.589307 2606 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:54:28.592344 kubelet[2606]: I0213 15:54:28.592294 2606 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:54:28.594813 kubelet[2606]: I0213 15:54:28.593406 2606 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:54:28.603049 kubelet[2606]: E0213 15:54:28.603003 2606 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:54:28.607486 kubelet[2606]: I0213 15:54:28.607429 2606 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:54:28.608471 kubelet[2606]: I0213 15:54:28.608263 2606 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:54:28.624205 kubelet[2606]: I0213 15:54:28.623518 2606 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:54:28.629420 kubelet[2606]: I0213 15:54:28.629371 2606 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:54:28.633228 kubelet[2606]: I0213 15:54:28.631944 2606 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:54:28.633228 kubelet[2606]: I0213 15:54:28.632098 2606 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:54:28.636742 kubelet[2606]: I0213 15:54:28.636697 2606 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:54:28.652116 kubelet[2606]: I0213 15:54:28.651465 2606 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:54:28.668063 sudo[2619]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 15:54:28.668738 sudo[2619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 15:54:28.703221 kubelet[2606]: I0213 15:54:28.702075 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:54:28.720797 kubelet[2606]: I0213 15:54:28.720650 2606 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:54:28.720797 kubelet[2606]: I0213 15:54:28.720718 2606 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:54:28.721397 kubelet[2606]: I0213 15:54:28.720969 2606 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:54:28.721708 kubelet[2606]: E0213 15:54:28.721493 2606 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:54:28.821860 kubelet[2606]: E0213 15:54:28.821811 2606 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:54:28.844734 kubelet[2606]: I0213 15:54:28.844048 2606 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:54:28.844734 kubelet[2606]: I0213 15:54:28.844213 2606 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:54:28.844734 kubelet[2606]: I0213 15:54:28.844259 2606 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:54:28.844734 kubelet[2606]: I0213 15:54:28.844534 2606 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:54:28.844734 kubelet[2606]: I0213 15:54:28.844555 2606 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:54:28.844734 kubelet[2606]: I0213 15:54:28.844627 2606 policy_none.go:49] "None policy: Start" Feb 13 15:54:28.847044 kubelet[2606]: I0213 15:54:28.847005 2606 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:54:28.847920 kubelet[2606]: I0213 15:54:28.847287 2606 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:54:28.847920 kubelet[2606]: I0213 15:54:28.847767 2606 state_mem.go:75] "Updated machine memory state" Feb 13 15:54:28.864220 kubelet[2606]: I0213 15:54:28.864159 2606 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:54:28.865628 kubelet[2606]: I0213 15:54:28.865365 2606 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:54:28.865628 kubelet[2606]: I0213 15:54:28.865401 2606 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:54:28.869310 kubelet[2606]: I0213 15:54:28.868542 2606 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:54:29.003334 kubelet[2606]: I0213 15:54:28.999629 2606 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.040975 kubelet[2606]: I0213 15:54:29.040912 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e5ed4d332e6c91cf353df9df9b2a2f7-k8s-certs\") pod \"kube-apiserver-ci-4186.1.1-d-137a032ec7\" (UID: \"8e5ed4d332e6c91cf353df9df9b2a2f7\") " pod="kube-system/kube-apiserver-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.043189 kubelet[2606]: I0213 15:54:29.042925 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e5ed4d332e6c91cf353df9df9b2a2f7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.1-d-137a032ec7\" (UID: \"8e5ed4d332e6c91cf353df9df9b2a2f7\") " pod="kube-system/kube-apiserver-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.043189 kubelet[2606]: I0213 15:54:29.043039 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.043189 kubelet[2606]: I0213 15:54:29.043121 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.044117 kubelet[2606]: I0213 15:54:29.043152 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.044117 kubelet[2606]: I0213 15:54:29.043667 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e5ed4d332e6c91cf353df9df9b2a2f7-ca-certs\") pod \"kube-apiserver-ci-4186.1.1-d-137a032ec7\" (UID: \"8e5ed4d332e6c91cf353df9df9b2a2f7\") " pod="kube-system/kube-apiserver-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.044117 kubelet[2606]: I0213 15:54:29.043745 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-ca-certs\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.044117 kubelet[2606]: I0213 15:54:29.044062 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c6be6e4dafc34ddb72e1797b081418e-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" (UID: \"5c6be6e4dafc34ddb72e1797b081418e\") " pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.045236 kubelet[2606]: I0213 15:54:29.045188 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6d2f5abe41c1051e6a310dccd61493c-kubeconfig\") pod \"kube-scheduler-ci-4186.1.1-d-137a032ec7\" (UID: \"f6d2f5abe41c1051e6a310dccd61493c\") " pod="kube-system/kube-scheduler-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.063611 kubelet[2606]: W0213 15:54:29.063441 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:54:29.078319 kubelet[2606]: I0213 15:54:29.078064 2606 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.080772 kubelet[2606]: W0213 15:54:29.078170 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:54:29.080772 kubelet[2606]: I0213 15:54:29.079029 2606 kubelet_node_status.go:75] "Successfully registered 
node" node="ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.099838 kubelet[2606]: W0213 15:54:29.099776 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:54:29.373154 kubelet[2606]: E0213 15:54:29.371277 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:29.384328 kubelet[2606]: E0213 15:54:29.384271 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:29.407185 kubelet[2606]: E0213 15:54:29.407115 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:29.561802 kubelet[2606]: I0213 15:54:29.560568 2606 apiserver.go:52] "Watching apiserver" Feb 13 15:54:29.632995 kubelet[2606]: I0213 15:54:29.632782 2606 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:54:29.810483 kubelet[2606]: E0213 15:54:29.810289 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:29.862215 kubelet[2606]: W0213 15:54:29.861700 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:54:29.864191 kubelet[2606]: E0213 15:54:29.864105 2606 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.1-d-137a032ec7\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.866716 kubelet[2606]: E0213 15:54:29.866060 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:29.876335 kubelet[2606]: W0213 15:54:29.874684 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 15:54:29.876335 kubelet[2606]: E0213 15:54:29.874797 2606 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.1-d-137a032ec7\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.1-d-137a032ec7" Feb 13 15:54:29.876335 kubelet[2606]: E0213 15:54:29.875151 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:29.886749 kubelet[2606]: I0213 15:54:29.885296 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.1-d-137a032ec7" podStartSLOduration=0.885242558 podStartE2EDuration="885.242558ms" podCreationTimestamp="2025-02-13 15:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:54:29.878331783 +0000 UTC m=+1.640524833" watchObservedRunningTime="2025-02-13 15:54:29.885242558 +0000 UTC m=+1.647435610" Feb 
13 15:54:29.955427 kubelet[2606]: I0213 15:54:29.955299 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.1-d-137a032ec7" podStartSLOduration=0.955262509 podStartE2EDuration="955.262509ms" podCreationTimestamp="2025-02-13 15:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:54:29.919995502 +0000 UTC m=+1.682188549" watchObservedRunningTime="2025-02-13 15:54:29.955262509 +0000 UTC m=+1.717455557" Feb 13 15:54:29.994613 kubelet[2606]: I0213 15:54:29.994319 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.1-d-137a032ec7" podStartSLOduration=0.99428717 podStartE2EDuration="994.28717ms" podCreationTimestamp="2025-02-13 15:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:54:29.960454392 +0000 UTC m=+1.722647441" watchObservedRunningTime="2025-02-13 15:54:29.99428717 +0000 UTC m=+1.756480223" Feb 13 15:54:30.135027 sudo[2619]: pam_unix(sudo:session): session closed for user root Feb 13 15:54:30.814507 kubelet[2606]: E0213 15:54:30.813952 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:30.814507 kubelet[2606]: E0213 15:54:30.814136 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:30.817768 kubelet[2606]: E0213 15:54:30.816547 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:31.525980 kubelet[2606]: I0213 15:54:31.523336 2606 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:54:31.527404 containerd[1480]: time="2025-02-13T15:54:31.526962727Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:54:31.532152 kubelet[2606]: I0213 15:54:31.530482 2606 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:54:31.821420 kubelet[2606]: E0213 15:54:31.820225 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:31.821420 kubelet[2606]: E0213 15:54:31.821222 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:32.837793 kubelet[2606]: E0213 15:54:32.837190 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:32.847955 kubelet[2606]: I0213 15:54:32.841371 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cni-path\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.844456 systemd[1]: Created slice kubepods-burstable-podabc75907_2c0a_45ef_bc33_5628b5f5ec61.slice - libcontainer container kubepods-burstable-podabc75907_2c0a_45ef_bc33_5628b5f5ec61.slice. Feb 13 15:54:32.862054 kubelet[2606]: I0213 15:54:32.856339 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-lib-modules\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862054 kubelet[2606]: I0213 15:54:32.856375 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-xtables-lock\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862054 kubelet[2606]: I0213 15:54:32.856404 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-host-proc-sys-kernel\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862054 kubelet[2606]: I0213 15:54:32.856451 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-run\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862054 kubelet[2606]: I0213 15:54:32.856480 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hostproc\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862054 kubelet[2606]: I0213 15:54:32.856502 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-cgroup\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862410 kubelet[2606]: I0213 15:54:32.856538 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-etc-cni-netd\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862410 kubelet[2606]: I0213 15:54:32.856561 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abc75907-2c0a-45ef-bc33-5628b5f5ec61-clustermesh-secrets\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862410 kubelet[2606]: I0213 15:54:32.856611 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hubble-tls\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862410 kubelet[2606]: I0213 15:54:32.856638 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-config-path\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862410 kubelet[2606]: I0213 15:54:32.856667 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c8qz\" (UniqueName: \"kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-kube-api-access-8c8qz\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.862410 kubelet[2606]: I0213 15:54:32.856742 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-bpf-maps\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.872246 kubelet[2606]: I0213 15:54:32.856801 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-host-proc-sys-net\") pod \"cilium-t755d\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " pod="kube-system/cilium-t755d" Feb 13 15:54:32.882369 systemd[1]: Created slice kubepods-besteffort-podc03bd812_9ac0_4a93_bc43_57dae27d8c03.slice - libcontainer container kubepods-besteffort-podc03bd812_9ac0_4a93_bc43_57dae27d8c03.slice. 
Feb 13 15:54:32.904781 kubelet[2606]: W0213 15:54:32.904298 2606 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186.1.1-d-137a032ec7" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.1-d-137a032ec7' and this object Feb 13 15:54:32.905736 kubelet[2606]: E0213 15:54:32.905412 2606 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4186.1.1-d-137a032ec7\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.1-d-137a032ec7' and this object" logger="UnhandledError" Feb 13 15:54:32.905736 kubelet[2606]: W0213 15:54:32.903919 2606 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186.1.1-d-137a032ec7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.1-d-137a032ec7' and this object Feb 13 15:54:32.905736 kubelet[2606]: E0213 15:54:32.905533 2606 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4186.1.1-d-137a032ec7\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.1-d-137a032ec7' and this object" logger="UnhandledError" Feb 13 15:54:32.905736 kubelet[2606]: W0213 15:54:32.904053 2606 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4186.1.1-d-137a032ec7" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.1-d-137a032ec7' and this object Feb 13 15:54:32.906009 kubelet[2606]: E0213 15:54:32.905561 2606 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4186.1.1-d-137a032ec7\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.1-d-137a032ec7' and this object" logger="UnhandledError" Feb 13 15:54:32.906708 kubelet[2606]: W0213 15:54:32.906359 2606 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4186.1.1-d-137a032ec7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.1-d-137a032ec7' and this object Feb 13 15:54:32.906708 kubelet[2606]: E0213 15:54:32.906544 2606 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4186.1.1-d-137a032ec7\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4186.1.1-d-137a032ec7' and this object" logger="UnhandledError" 
Feb 13 15:54:32.960646 kubelet[2606]: I0213 15:54:32.959428 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c03bd812-9ac0-4a93-bc43-57dae27d8c03-cilium-config-path\") pod \"cilium-operator-5d85765b45-h75kl\" (UID: \"c03bd812-9ac0-4a93-bc43-57dae27d8c03\") " pod="kube-system/cilium-operator-5d85765b45-h75kl" Feb 13 15:54:32.960646 kubelet[2606]: I0213 15:54:32.959514 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpbqt\" (UniqueName: \"kubernetes.io/projected/c03bd812-9ac0-4a93-bc43-57dae27d8c03-kube-api-access-tpbqt\") pod \"cilium-operator-5d85765b45-h75kl\" (UID: \"c03bd812-9ac0-4a93-bc43-57dae27d8c03\") " pod="kube-system/cilium-operator-5d85765b45-h75kl" Feb 13 15:54:33.262883 systemd[1]: Created slice kubepods-besteffort-podec4bf4aa_5ef5_4a4a_92fc_408b523bd9a7.slice - libcontainer container kubepods-besteffort-podec4bf4aa_5ef5_4a4a_92fc_408b523bd9a7.slice. Feb 13 15:54:33.367802 kubelet[2606]: I0213 15:54:33.367709 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7-lib-modules\") pod \"kube-proxy-6m9cq\" (UID: \"ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7\") " pod="kube-system/kube-proxy-6m9cq" Feb 13 15:54:33.367802 kubelet[2606]: I0213 15:54:33.367790 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7-xtables-lock\") pod \"kube-proxy-6m9cq\" (UID: \"ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7\") " pod="kube-system/kube-proxy-6m9cq" Feb 13 15:54:33.368117 kubelet[2606]: I0213 15:54:33.367862 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmp2d\" (UniqueName: \"kubernetes.io/projected/ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7-kube-api-access-fmp2d\") pod \"kube-proxy-6m9cq\" (UID: \"ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7\") " pod="kube-system/kube-proxy-6m9cq" Feb 13 15:54:33.368117 kubelet[2606]: I0213 15:54:33.367897 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7-kube-proxy\") pod \"kube-proxy-6m9cq\" (UID: \"ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7\") " pod="kube-system/kube-proxy-6m9cq" Feb 13 15:54:33.669062 sudo[1666]: pam_unix(sudo:session): session closed for user root Feb 13 15:54:33.675203 sshd[1665]: Connection closed by 139.178.89.65 port 45272 Feb 13 15:54:33.683380 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Feb 13 15:54:33.706646 systemd[1]: sshd@7-143.198.102.37:22-139.178.89.65:45272.service: Deactivated successfully. Feb 13 15:54:33.718164 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:54:33.719276 systemd[1]: session-7.scope: Consumed 6.451s CPU time, 145.3M memory peak, 0B memory swap peak. Feb 13 15:54:33.723748 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:54:33.730650 systemd-logind[1451]: Removed session 7. 
Feb 13 15:54:33.969115 kubelet[2606]: E0213 15:54:33.966937 2606 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 15:54:33.969115 kubelet[2606]: E0213 15:54:33.966992 2606 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-t755d: failed to sync secret cache: timed out waiting for the condition Feb 13 15:54:33.969115 kubelet[2606]: E0213 15:54:33.967098 2606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hubble-tls podName:abc75907-2c0a-45ef-bc33-5628b5f5ec61 nodeName:}" failed. No retries permitted until 2025-02-13 15:54:34.467069295 +0000 UTC m=+6.229262340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hubble-tls") pod "cilium-t755d" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61") : failed to sync secret cache: timed out waiting for the condition Feb 13 15:54:34.085157 kubelet[2606]: E0213 15:54:34.083497 2606 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:54:34.085157 kubelet[2606]: E0213 15:54:34.083621 2606 projected.go:194] Error preparing data for projected volume kube-api-access-8c8qz for pod kube-system/cilium-t755d: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:54:34.085157 kubelet[2606]: E0213 15:54:34.083722 2606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-kube-api-access-8c8qz podName:abc75907-2c0a-45ef-bc33-5628b5f5ec61 nodeName:}" failed. No retries permitted until 2025-02-13 15:54:34.583689796 +0000 UTC m=+6.345882840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8c8qz" (UniqueName: "kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-kube-api-access-8c8qz") pod "cilium-t755d" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:54:34.177634 kubelet[2606]: E0213 15:54:34.176476 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:34.179925 containerd[1480]: time="2025-02-13T15:54:34.179852920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6m9cq,Uid:ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7,Namespace:kube-system,Attempt:0,}" Feb 13 15:54:34.274038 containerd[1480]: time="2025-02-13T15:54:34.273260494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:54:34.274038 containerd[1480]: time="2025-02-13T15:54:34.273380688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:54:34.274038 containerd[1480]: time="2025-02-13T15:54:34.273404472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:34.274038 containerd[1480]: time="2025-02-13T15:54:34.273646022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:34.354676 systemd[1]: Started cri-containerd-f59a708f565534d10372db9e57e182f33ec4e29635cfdbe885401afd8092d92a.scope - libcontainer container f59a708f565534d10372db9e57e182f33ec4e29635cfdbe885401afd8092d92a. Feb 13 15:54:34.411617 kubelet[2606]: E0213 15:54:34.405896 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:34.414499 containerd[1480]: time="2025-02-13T15:54:34.406946183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-h75kl,Uid:c03bd812-9ac0-4a93-bc43-57dae27d8c03,Namespace:kube-system,Attempt:0,}" Feb 13 15:54:34.449461 containerd[1480]: time="2025-02-13T15:54:34.449298605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6m9cq,Uid:ec4bf4aa-5ef5-4a4a-92fc-408b523bd9a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f59a708f565534d10372db9e57e182f33ec4e29635cfdbe885401afd8092d92a\"" Feb 13 15:54:34.455901 kubelet[2606]: E0213 15:54:34.455752 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:34.471643 containerd[1480]: time="2025-02-13T15:54:34.471127141Z" level=info msg="CreateContainer within sandbox \"f59a708f565534d10372db9e57e182f33ec4e29635cfdbe885401afd8092d92a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:54:34.502890 containerd[1480]: time="2025-02-13T15:54:34.501352210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:54:34.502890 containerd[1480]: time="2025-02-13T15:54:34.501806326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:54:34.502890 containerd[1480]: time="2025-02-13T15:54:34.501855177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:34.502890 containerd[1480]: time="2025-02-13T15:54:34.502102521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:34.562414 systemd[1]: Started cri-containerd-b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045.scope - libcontainer container b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045. 
Feb 13 15:54:34.601612 containerd[1480]: time="2025-02-13T15:54:34.598265044Z" level=info msg="CreateContainer within sandbox \"f59a708f565534d10372db9e57e182f33ec4e29635cfdbe885401afd8092d92a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cdda5a5570fdbb2a760a2ff8351756b5a0678342a2402fe40fa73ab30933e441\"" Feb 13 15:54:34.610841 containerd[1480]: time="2025-02-13T15:54:34.610771753Z" level=info msg="StartContainer for \"cdda5a5570fdbb2a760a2ff8351756b5a0678342a2402fe40fa73ab30933e441\"" Feb 13 15:54:34.679629 kubelet[2606]: E0213 15:54:34.678335 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:34.686937 containerd[1480]: time="2025-02-13T15:54:34.686850339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t755d,Uid:abc75907-2c0a-45ef-bc33-5628b5f5ec61,Namespace:kube-system,Attempt:0,}" Feb 13 15:54:34.755686 systemd[1]: Started cri-containerd-cdda5a5570fdbb2a760a2ff8351756b5a0678342a2402fe40fa73ab30933e441.scope - libcontainer container cdda5a5570fdbb2a760a2ff8351756b5a0678342a2402fe40fa73ab30933e441. Feb 13 15:54:34.848755 containerd[1480]: time="2025-02-13T15:54:34.848249059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-h75kl,Uid:c03bd812-9ac0-4a93-bc43-57dae27d8c03,Namespace:kube-system,Attempt:0,} returns sandbox id \"b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045\"" Feb 13 15:54:34.854978 kubelet[2606]: E0213 15:54:34.852811 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:34.859611 containerd[1480]: time="2025-02-13T15:54:34.859315714Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:54:34.937356 containerd[1480]: time="2025-02-13T15:54:34.937055867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:54:34.937356 containerd[1480]: time="2025-02-13T15:54:34.937209227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:54:34.937356 containerd[1480]: time="2025-02-13T15:54:34.937235824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:34.939542 containerd[1480]: time="2025-02-13T15:54:34.937473164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:54:34.979503 containerd[1480]: time="2025-02-13T15:54:34.979408978Z" level=info msg="StartContainer for \"cdda5a5570fdbb2a760a2ff8351756b5a0678342a2402fe40fa73ab30933e441\" returns successfully" Feb 13 15:54:35.024970 systemd[1]: Started cri-containerd-6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115.scope - libcontainer container 6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115. 
Feb 13 15:54:35.154963 containerd[1480]: time="2025-02-13T15:54:35.153651094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t755d,Uid:abc75907-2c0a-45ef-bc33-5628b5f5ec61,Namespace:kube-system,Attempt:0,} returns sandbox id \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\"" Feb 13 15:54:35.158257 kubelet[2606]: E0213 15:54:35.157425 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:35.509023 kubelet[2606]: E0213 15:54:35.507031 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:35.914344 kubelet[2606]: E0213 15:54:35.913620 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:35.918621 kubelet[2606]: E0213 15:54:35.918540 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:35.967710 kubelet[2606]: I0213 15:54:35.967254 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6m9cq" podStartSLOduration=3.967229287 podStartE2EDuration="3.967229287s" podCreationTimestamp="2025-02-13 15:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:54:35.941474183 +0000 UTC m=+7.703667233" watchObservedRunningTime="2025-02-13 15:54:35.967229287 +0000 UTC m=+7.729422339" Feb 13 15:54:36.925991 kubelet[2606]: E0213 15:54:36.922991 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:36.984512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774741085.mount: Deactivated successfully. 
Feb 13 15:54:40.592478 containerd[1480]: time="2025-02-13T15:54:40.591377528Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:40.595764 containerd[1480]: time="2025-02-13T15:54:40.595657084Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:54:40.598264 containerd[1480]: time="2025-02-13T15:54:40.598066128Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:54:40.607399 containerd[1480]: time="2025-02-13T15:54:40.607278593Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.746268148s" Feb 13 15:54:40.607399 containerd[1480]: time="2025-02-13T15:54:40.607387001Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:54:40.614570 containerd[1480]: time="2025-02-13T15:54:40.614477858Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:54:40.638721 containerd[1480]: time="2025-02-13T15:54:40.638635887Z" level=info msg="CreateContainer within sandbox \"b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:54:40.687034 containerd[1480]: time="2025-02-13T15:54:40.686371091Z" level=info msg="CreateContainer within sandbox \"b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3\"" Feb 13 15:54:40.692234 containerd[1480]: time="2025-02-13T15:54:40.688415983Z" level=info msg="StartContainer for \"adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3\"" Feb 13 15:54:40.758468 systemd[1]: run-containerd-runc-k8s.io-adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3-runc.ZUI343.mount: Deactivated successfully. Feb 13 15:54:40.773233 systemd[1]: Started cri-containerd-adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3.scope - libcontainer container adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3. 
Feb 13 15:54:40.835726 containerd[1480]: time="2025-02-13T15:54:40.835635019Z" level=info msg="StartContainer for \"adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3\" returns successfully" Feb 13 15:54:40.939896 kubelet[2606]: E0213 15:54:40.939715 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:41.949420 kubelet[2606]: E0213 15:54:41.949212 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:54:52.936520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107216880.mount: Deactivated successfully. Feb 13 15:54:55.744524 systemd[1]: Started sshd@9-143.198.102.37:22-218.92.0.157:54771.service - OpenSSH per-connection server daemon (218.92.0.157:54771). Feb 13 15:54:57.081006 sshd-session[3045]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Feb 13 15:54:59.553625 sshd[3043]: PAM: Permission denied for root from 218.92.0.157 Feb 13 15:54:59.901433 sshd-session[3046]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Feb 13 15:55:00.538969 containerd[1480]: time="2025-02-13T15:55:00.537105345Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:55:00.552056 containerd[1480]: time="2025-02-13T15:55:00.527737443Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:00.553445 containerd[1480]: time="2025-02-13T15:55:00.552834878Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 19.938268152s" Feb 13 15:55:00.553445 containerd[1480]: time="2025-02-13T15:55:00.552903708Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:55:00.568743 containerd[1480]: time="2025-02-13T15:55:00.556380656Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:55:00.592472 containerd[1480]: time="2025-02-13T15:55:00.592390259Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:55:00.795979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403338617.mount: Deactivated successfully. 
Feb 13 15:55:00.848710 containerd[1480]: time="2025-02-13T15:55:00.847022637Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7\"" Feb 13 15:55:00.851958 containerd[1480]: time="2025-02-13T15:55:00.849778342Z" level=info msg="StartContainer for \"816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7\"" Feb 13 15:55:01.120939 sshd[3043]: PAM: Permission denied for root from 218.92.0.157 Feb 13 15:55:01.535464 sshd-session[3060]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Feb 13 15:55:01.536012 systemd[1]: run-containerd-runc-k8s.io-816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7-runc.MSZ9s3.mount: Deactivated successfully. Feb 13 15:55:01.580322 systemd[1]: Started cri-containerd-816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7.scope - libcontainer container 816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7. Feb 13 15:55:01.771542 containerd[1480]: time="2025-02-13T15:55:01.770094957Z" level=info msg="StartContainer for \"816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7\" returns successfully" Feb 13 15:55:01.877798 systemd[1]: cri-containerd-816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7.scope: Deactivated successfully. Feb 13 15:55:02.124002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7-rootfs.mount: Deactivated successfully. Feb 13 15:55:02.175236 containerd[1480]: time="2025-02-13T15:55:02.122672749Z" level=info msg="shim disconnected" id=816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7 namespace=k8s.io Feb 13 15:55:02.175236 containerd[1480]: time="2025-02-13T15:55:02.151910284Z" level=warning msg="cleaning up after shim disconnected" id=816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7 namespace=k8s.io Feb 13 15:55:02.175236 containerd[1480]: time="2025-02-13T15:55:02.151937832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:55:02.184743 kubelet[2606]: E0213 15:55:02.184689 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:02.279283 containerd[1480]: time="2025-02-13T15:55:02.272094326Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:55:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:55:02.325313 kubelet[2606]: I0213 15:55:02.324407 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-h75kl" podStartSLOduration=24.569187529 podStartE2EDuration="30.324301073s" podCreationTimestamp="2025-02-13 15:54:32 +0000 UTC" firstStartedPulling="2025-02-13 15:54:34.857647587 +0000 UTC m=+6.619840620" lastFinishedPulling="2025-02-13 15:54:40.61276114 +0000 UTC m=+12.374954164" observedRunningTime="2025-02-13 15:54:40.976375429 +0000 UTC m=+12.738568477" watchObservedRunningTime="2025-02-13 15:55:02.324301073 +0000 UTC m=+34.086494130" Feb 13 15:55:03.221571 kubelet[2606]: E0213 15:55:03.219772 2606 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:03.246827 containerd[1480]: time="2025-02-13T15:55:03.243987524Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:55:03.362908 sshd[3043]: PAM: Permission denied for root from 218.92.0.157 Feb 13 15:55:03.392468 containerd[1480]: time="2025-02-13T15:55:03.392258526Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232\"" Feb 13 15:55:03.394632 containerd[1480]: time="2025-02-13T15:55:03.393770318Z" level=info msg="StartContainer for \"041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232\"" Feb 13 15:55:03.566388 systemd[1]: Started cri-containerd-041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232.scope - libcontainer container 041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232. Feb 13 15:55:03.586847 sshd[3043]: Received disconnect from 218.92.0.157 port 54771:11: [preauth] Feb 13 15:55:03.586847 sshd[3043]: Disconnected from authenticating user root 218.92.0.157 port 54771 [preauth] Feb 13 15:55:03.592639 systemd[1]: sshd@9-143.198.102.37:22-218.92.0.157:54771.service: Deactivated successfully. Feb 13 15:55:03.677525 containerd[1480]: time="2025-02-13T15:55:03.677126707Z" level=info msg="StartContainer for \"041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232\" returns successfully" Feb 13 15:55:03.708425 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:55:03.714312 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:55:03.715326 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:55:03.749193 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:55:03.750301 systemd[1]: cri-containerd-041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232.scope: Deactivated successfully. Feb 13 15:55:03.951153 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:55:03.972257 containerd[1480]: time="2025-02-13T15:55:03.971492186Z" level=info msg="shim disconnected" id=041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232 namespace=k8s.io Feb 13 15:55:03.972257 containerd[1480]: time="2025-02-13T15:55:03.971658120Z" level=warning msg="cleaning up after shim disconnected" id=041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232 namespace=k8s.io Feb 13 15:55:03.972257 containerd[1480]: time="2025-02-13T15:55:03.971698219Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:55:04.251379 kubelet[2606]: E0213 15:55:04.247457 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:04.286641 containerd[1480]: time="2025-02-13T15:55:04.285913821Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:55:04.337248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232-rootfs.mount: Deactivated successfully. Feb 13 15:55:04.458733 containerd[1480]: time="2025-02-13T15:55:04.458652366Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004\"" Feb 13 15:55:04.462432 containerd[1480]: time="2025-02-13T15:55:04.462133491Z" level=info msg="StartContainer for \"16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004\"" Feb 13 15:55:04.674975 systemd[1]: Started cri-containerd-16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004.scope - libcontainer container 16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004. Feb 13 15:55:04.833915 containerd[1480]: time="2025-02-13T15:55:04.833800739Z" level=info msg="StartContainer for \"16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004\" returns successfully" Feb 13 15:55:04.839913 systemd[1]: cri-containerd-16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004.scope: Deactivated successfully. Feb 13 15:55:04.962616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004-rootfs.mount: Deactivated successfully. 
Feb 13 15:55:04.972517 containerd[1480]: time="2025-02-13T15:55:04.972419588Z" level=info msg="shim disconnected" id=16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004 namespace=k8s.io Feb 13 15:55:04.972517 containerd[1480]: time="2025-02-13T15:55:04.972507561Z" level=warning msg="cleaning up after shim disconnected" id=16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004 namespace=k8s.io Feb 13 15:55:04.972517 containerd[1480]: time="2025-02-13T15:55:04.972520092Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:55:05.256705 kubelet[2606]: E0213 15:55:05.256620 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:05.280652 containerd[1480]: time="2025-02-13T15:55:05.280343424Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:55:05.403844 containerd[1480]: time="2025-02-13T15:55:05.403625004Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f\"" Feb 13 15:55:05.407428 containerd[1480]: time="2025-02-13T15:55:05.404931902Z" level=info msg="StartContainer for \"beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f\"" Feb 13 15:55:05.515177 systemd[1]: Started cri-containerd-beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f.scope - libcontainer container beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f. Feb 13 15:55:05.649147 systemd[1]: cri-containerd-beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f.scope: Deactivated successfully. Feb 13 15:55:05.654607 containerd[1480]: time="2025-02-13T15:55:05.654300828Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc75907_2c0a_45ef_bc33_5628b5f5ec61.slice/cri-containerd-beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f.scope/memory.events\": no such file or directory" Feb 13 15:55:05.666627 containerd[1480]: time="2025-02-13T15:55:05.663958740Z" level=info msg="StartContainer for \"beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f\" returns successfully" Feb 13 15:55:05.750959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f-rootfs.mount: Deactivated successfully. 
Feb 13 15:55:05.797636 containerd[1480]: time="2025-02-13T15:55:05.793123539Z" level=info msg="shim disconnected" id=beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f namespace=k8s.io Feb 13 15:55:05.797636 containerd[1480]: time="2025-02-13T15:55:05.793482000Z" level=warning msg="cleaning up after shim disconnected" id=beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f namespace=k8s.io Feb 13 15:55:05.797636 containerd[1480]: time="2025-02-13T15:55:05.793499970Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:55:06.265472 kubelet[2606]: E0213 15:55:06.263354 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:06.288780 containerd[1480]: time="2025-02-13T15:55:06.285258510Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:55:06.363362 containerd[1480]: time="2025-02-13T15:55:06.363153231Z" level=info msg="CreateContainer within sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4\"" Feb 13 15:55:06.367843 containerd[1480]: time="2025-02-13T15:55:06.365105004Z" level=info msg="StartContainer for \"5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4\"" Feb 13 15:55:06.394347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477108494.mount: Deactivated successfully. Feb 13 15:55:06.464964 systemd[1]: Started cri-containerd-5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4.scope - libcontainer container 5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4. Feb 13 15:55:06.627318 containerd[1480]: time="2025-02-13T15:55:06.627122167Z" level=info msg="StartContainer for \"5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4\" returns successfully" Feb 13 15:55:07.144397 kubelet[2606]: I0213 15:55:07.139641 2606 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:55:07.294021 kubelet[2606]: E0213 15:55:07.293830 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:07.323802 systemd[1]: Created slice kubepods-burstable-pod7e08a763_fc44_4bc4_8661_0804eb9023a2.slice - libcontainer container kubepods-burstable-pod7e08a763_fc44_4bc4_8661_0804eb9023a2.slice. 
Feb 13 15:55:07.329556 kubelet[2606]: I0213 15:55:07.327467 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhd7g\" (UniqueName: \"kubernetes.io/projected/7e08a763-fc44-4bc4-8661-0804eb9023a2-kube-api-access-jhd7g\") pod \"coredns-6f6b679f8f-pj5zs\" (UID: \"7e08a763-fc44-4bc4-8661-0804eb9023a2\") " pod="kube-system/coredns-6f6b679f8f-pj5zs" Feb 13 15:55:07.329556 kubelet[2606]: I0213 15:55:07.327557 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e08a763-fc44-4bc4-8661-0804eb9023a2-config-volume\") pod \"coredns-6f6b679f8f-pj5zs\" (UID: \"7e08a763-fc44-4bc4-8661-0804eb9023a2\") " pod="kube-system/coredns-6f6b679f8f-pj5zs" Feb 13 15:55:07.337633 kubelet[2606]: I0213 15:55:07.333011 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2023738-ad54-4b62-8f05-9f4c957991f4-config-volume\") pod \"coredns-6f6b679f8f-rr279\" (UID: \"e2023738-ad54-4b62-8f05-9f4c957991f4\") " pod="kube-system/coredns-6f6b679f8f-rr279" Feb 13 15:55:07.337633 kubelet[2606]: I0213 15:55:07.333117 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzl2k\" (UniqueName: \"kubernetes.io/projected/e2023738-ad54-4b62-8f05-9f4c957991f4-kube-api-access-mzl2k\") pod \"coredns-6f6b679f8f-rr279\" (UID: \"e2023738-ad54-4b62-8f05-9f4c957991f4\") " pod="kube-system/coredns-6f6b679f8f-rr279" Feb 13 15:55:07.394866 systemd[1]: Created slice kubepods-burstable-pode2023738_ad54_4b62_8f05_9f4c957991f4.slice - libcontainer container kubepods-burstable-pode2023738_ad54_4b62_8f05_9f4c957991f4.slice. 
Feb 13 15:55:07.669500 kubelet[2606]: E0213 15:55:07.669198 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:07.671398 containerd[1480]: time="2025-02-13T15:55:07.671334244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pj5zs,Uid:7e08a763-fc44-4bc4-8661-0804eb9023a2,Namespace:kube-system,Attempt:0,}" Feb 13 15:55:07.708502 kubelet[2606]: E0213 15:55:07.708000 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:07.709425 containerd[1480]: time="2025-02-13T15:55:07.709325189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rr279,Uid:e2023738-ad54-4b62-8f05-9f4c957991f4,Namespace:kube-system,Attempt:0,}" Feb 13 15:55:08.291528 kubelet[2606]: E0213 15:55:08.290784 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:09.294030 kubelet[2606]: E0213 15:55:09.293733 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:10.241365 systemd-networkd[1369]: cilium_host: Link UP Feb 13 15:55:10.252391 systemd-networkd[1369]: cilium_net: Link UP Feb 13 15:55:10.267853 systemd-networkd[1369]: cilium_net: Gained carrier Feb 13 15:55:10.268172 systemd-networkd[1369]: cilium_host: Gained carrier Feb 13 15:55:10.693805 systemd-networkd[1369]: cilium_vxlan: Link UP Feb 13 15:55:10.693819 systemd-networkd[1369]: cilium_vxlan: Gained carrier Feb 13 15:55:11.046460 systemd-networkd[1369]: cilium_host: Gained IPv6LL Feb 13 15:55:11.173205 systemd-networkd[1369]: cilium_net: Gained IPv6LL Feb 13 15:55:11.532882 kernel: NET: Registered PF_ALG protocol family Feb 13 15:55:12.078798 systemd-networkd[1369]: cilium_vxlan: Gained IPv6LL Feb 13 15:55:14.167868 systemd-networkd[1369]: lxc_health: Link UP Feb 13 15:55:14.194070 systemd-networkd[1369]: lxc_health: Gained carrier Feb 13 15:55:14.544289 systemd-networkd[1369]: lxc35f551491705: Link UP Feb 13 15:55:14.554685 kernel: eth0: renamed from tmp9fdb3 Feb 13 15:55:14.567935 systemd-networkd[1369]: lxc35f551491705: Gained carrier Feb 13 15:55:14.647552 systemd-networkd[1369]: lxc4335204fb88e: Link UP Feb 13 15:55:14.664617 kernel: eth0: renamed from tmp4384f Feb 13 15:55:14.678062 systemd-networkd[1369]: lxc4335204fb88e: Gained carrier Feb 13 15:55:14.691480 kubelet[2606]: E0213 15:55:14.691232 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:14.828639 kubelet[2606]: I0213 15:55:14.827904 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t755d" podStartSLOduration=17.410782011 podStartE2EDuration="42.827865837s" podCreationTimestamp="2025-02-13 15:54:32 +0000 UTC" firstStartedPulling="2025-02-13 15:54:35.160959067 +0000 UTC m=+6.923152086" lastFinishedPulling="2025-02-13 15:55:00.57804271 +0000 UTC m=+32.340235912" observedRunningTime="2025-02-13 15:55:07.610730093 +0000 UTC m=+39.372923134" 
watchObservedRunningTime="2025-02-13 15:55:14.827865837 +0000 UTC m=+46.590058882" Feb 13 15:55:15.343465 kubelet[2606]: E0213 15:55:15.343418 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:15.716851 systemd-networkd[1369]: lxc_health: Gained IPv6LL Feb 13 15:55:16.350043 kubelet[2606]: E0213 15:55:16.349976 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:16.422275 systemd-networkd[1369]: lxc4335204fb88e: Gained IPv6LL Feb 13 15:55:16.549542 systemd-networkd[1369]: lxc35f551491705: Gained IPv6LL Feb 13 15:55:25.544173 systemd[1]: Started sshd@10-143.198.102.37:22-139.178.89.65:40396.service - OpenSSH per-connection server daemon (139.178.89.65:40396). Feb 13 15:55:25.790189 sshd[3824]: Accepted publickey for core from 139.178.89.65 port 40396 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:55:25.796701 sshd-session[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:55:25.818993 systemd-logind[1451]: New session 8 of user core. Feb 13 15:55:25.830949 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:55:25.889027 containerd[1480]: time="2025-02-13T15:55:25.887284905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:55:25.889027 containerd[1480]: time="2025-02-13T15:55:25.887403845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:55:25.889027 containerd[1480]: time="2025-02-13T15:55:25.887432167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:25.889027 containerd[1480]: time="2025-02-13T15:55:25.887616262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:26.010973 systemd[1]: Started cri-containerd-9fdb3c49529f4e60ec52df501836efee93a88459e4f99d3e0889312d4c8ab539.scope - libcontainer container 9fdb3c49529f4e60ec52df501836efee93a88459e4f99d3e0889312d4c8ab539. Feb 13 15:55:26.104928 containerd[1480]: time="2025-02-13T15:55:26.104156475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:55:26.107881 containerd[1480]: time="2025-02-13T15:55:26.107210597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:55:26.107881 containerd[1480]: time="2025-02-13T15:55:26.107319832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:26.110335 containerd[1480]: time="2025-02-13T15:55:26.109938800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:55:26.241868 systemd[1]: Started cri-containerd-4384fc9c6853bdf581d739d6dd4c9b8404a92d1c3c11a108798d8d1f06e27953.scope - libcontainer container 4384fc9c6853bdf581d739d6dd4c9b8404a92d1c3c11a108798d8d1f06e27953. 
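In the span above, systemd-networkd reports the Cilium datapath devices (cilium_host, cilium_net, cilium_vxlan) and the per-pod lxc* interfaces gaining carrier and IPv6 link-local addresses. A rough sketch, assuming the github.com/vishvananda/netlink package, of reading the same operational state directly on the node; the interface names are the ones in the log, and the program needs root.

// Query the carrier/operational state of the interfaces whose
// "Link UP" / "Gained carrier" transitions systemd-networkd logs above.
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	for _, name := range []string{"cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Printf("%s: %v", name, err)
			continue
		}
		attrs := link.Attrs()
		fmt.Printf("%-12s index=%d operstate=%s\n", name, attrs.Index, attrs.OperState)
	}
}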
Feb 13 15:55:26.463121 containerd[1480]: time="2025-02-13T15:55:26.460170764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pj5zs,Uid:7e08a763-fc44-4bc4-8661-0804eb9023a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fdb3c49529f4e60ec52df501836efee93a88459e4f99d3e0889312d4c8ab539\"" Feb 13 15:55:26.466790 kubelet[2606]: E0213 15:55:26.464822 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:26.480756 containerd[1480]: time="2025-02-13T15:55:26.479365981Z" level=info msg="CreateContainer within sandbox \"9fdb3c49529f4e60ec52df501836efee93a88459e4f99d3e0889312d4c8ab539\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:55:26.549178 containerd[1480]: time="2025-02-13T15:55:26.544833669Z" level=info msg="CreateContainer within sandbox \"9fdb3c49529f4e60ec52df501836efee93a88459e4f99d3e0889312d4c8ab539\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d77201fdd498c80850296458a88930a04a8314916235ccb8022f7769945c4112\"" Feb 13 15:55:26.554780 containerd[1480]: time="2025-02-13T15:55:26.552449442Z" level=info msg="StartContainer for \"d77201fdd498c80850296458a88930a04a8314916235ccb8022f7769945c4112\"" Feb 13 15:55:26.616869 containerd[1480]: time="2025-02-13T15:55:26.612552824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rr279,Uid:e2023738-ad54-4b62-8f05-9f4c957991f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4384fc9c6853bdf581d739d6dd4c9b8404a92d1c3c11a108798d8d1f06e27953\"" Feb 13 15:55:26.620947 kubelet[2606]: E0213 15:55:26.620399 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:26.700505 containerd[1480]: time="2025-02-13T15:55:26.699205025Z" level=info msg="CreateContainer within sandbox \"4384fc9c6853bdf581d739d6dd4c9b8404a92d1c3c11a108798d8d1f06e27953\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:55:26.719270 systemd[1]: Started cri-containerd-d77201fdd498c80850296458a88930a04a8314916235ccb8022f7769945c4112.scope - libcontainer container d77201fdd498c80850296458a88930a04a8314916235ccb8022f7769945c4112. Feb 13 15:55:26.844252 containerd[1480]: time="2025-02-13T15:55:26.843890651Z" level=info msg="CreateContainer within sandbox \"4384fc9c6853bdf581d739d6dd4c9b8404a92d1c3c11a108798d8d1f06e27953\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a01362cfc57ab771eb20bb2c10610c9107c3f56a37703355190618318f35fed\"" Feb 13 15:55:26.851014 containerd[1480]: time="2025-02-13T15:55:26.848210779Z" level=info msg="StartContainer for \"2a01362cfc57ab771eb20bb2c10610c9107c3f56a37703355190618318f35fed\"" Feb 13 15:55:26.930808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197369328.mount: Deactivated successfully. Feb 13 15:55:27.054018 systemd[1]: Started cri-containerd-2a01362cfc57ab771eb20bb2c10610c9107c3f56a37703355190618318f35fed.scope - libcontainer container 2a01362cfc57ab771eb20bb2c10610c9107c3f56a37703355190618318f35fed. 
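At this point containerd has created both CoreDNS sandboxes and is starting the coredns containers inside them. A hypothetical cross-check (not something the log itself performs) that lists containers over the CRI socket, assuming k8s.io/cri-api and containerd's default socket path; crictl ps does the equivalent from a shell.

// List CRI containers from containerd's socket; the coredns containers
// created above should appear with state CONTAINER_RUNNING once their
// StartContainer calls have returned.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}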
Feb 13 15:55:27.062205 containerd[1480]: time="2025-02-13T15:55:27.060926971Z" level=info msg="StartContainer for \"d77201fdd498c80850296458a88930a04a8314916235ccb8022f7769945c4112\" returns successfully" Feb 13 15:55:27.247280 containerd[1480]: time="2025-02-13T15:55:27.246041071Z" level=info msg="StartContainer for \"2a01362cfc57ab771eb20bb2c10610c9107c3f56a37703355190618318f35fed\" returns successfully" Feb 13 15:55:27.285675 sshd[3834]: Connection closed by 139.178.89.65 port 40396 Feb 13 15:55:27.286844 sshd-session[3824]: pam_unix(sshd:session): session closed for user core Feb 13 15:55:27.304263 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:55:27.307868 systemd[1]: sshd@10-143.198.102.37:22-139.178.89.65:40396.service: Deactivated successfully. Feb 13 15:55:27.322789 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:55:27.332156 systemd-logind[1451]: Removed session 8. Feb 13 15:55:27.430548 kubelet[2606]: E0213 15:55:27.430501 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:27.439420 kubelet[2606]: E0213 15:55:27.439358 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:27.495886 kubelet[2606]: I0213 15:55:27.495804 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rr279" podStartSLOduration=55.495772543 podStartE2EDuration="55.495772543s" podCreationTimestamp="2025-02-13 15:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:55:27.494569202 +0000 UTC m=+59.256762243" watchObservedRunningTime="2025-02-13 15:55:27.495772543 +0000 UTC m=+59.257965590" Feb 13 15:55:28.443865 kubelet[2606]: E0213 15:55:28.443005 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:28.443865 kubelet[2606]: E0213 15:55:28.443005 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:28.494390 kubelet[2606]: I0213 15:55:28.492443 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pj5zs" podStartSLOduration=56.492411967 podStartE2EDuration="56.492411967s" podCreationTimestamp="2025-02-13 15:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:55:27.63965543 +0000 UTC m=+59.401848487" watchObservedRunningTime="2025-02-13 15:55:28.492411967 +0000 UTC m=+60.254605024" Feb 13 15:55:29.445641 kubelet[2606]: E0213 15:55:29.445541 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:29.449653 kubelet[2606]: E0213 15:55:29.448293 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Feb 13 15:55:32.320332 systemd[1]: Started sshd@11-143.198.102.37:22-139.178.89.65:40406.service - OpenSSH per-connection server daemon (139.178.89.65:40406). Feb 13 15:55:32.499378 sshd[4012]: Accepted publickey for core from 139.178.89.65 port 40406 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:55:32.503385 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:55:32.538736 systemd-logind[1451]: New session 9 of user core. Feb 13 15:55:32.548955 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:55:33.062305 sshd[4014]: Connection closed by 139.178.89.65 port 40406 Feb 13 15:55:33.063391 sshd-session[4012]: pam_unix(sshd:session): session closed for user core Feb 13 15:55:33.078375 systemd[1]: sshd@11-143.198.102.37:22-139.178.89.65:40406.service: Deactivated successfully. Feb 13 15:55:33.085590 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:55:33.087128 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:55:33.090338 systemd-logind[1451]: Removed session 9. Feb 13 15:55:38.100514 systemd[1]: Started sshd@12-143.198.102.37:22-139.178.89.65:54354.service - OpenSSH per-connection server daemon (139.178.89.65:54354). Feb 13 15:55:38.269671 sshd[4028]: Accepted publickey for core from 139.178.89.65 port 54354 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:55:38.276501 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:55:38.298211 systemd-logind[1451]: New session 10 of user core. Feb 13 15:55:38.310027 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:55:38.623256 sshd[4030]: Connection closed by 139.178.89.65 port 54354 Feb 13 15:55:38.626221 sshd-session[4028]: pam_unix(sshd:session): session closed for user core Feb 13 15:55:38.643876 systemd[1]: sshd@12-143.198.102.37:22-139.178.89.65:54354.service: Deactivated successfully. Feb 13 15:55:38.648086 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:55:38.655810 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:55:38.660405 systemd-logind[1451]: Removed session 10. Feb 13 15:55:43.650482 systemd[1]: Started sshd@13-143.198.102.37:22-139.178.89.65:54366.service - OpenSSH per-connection server daemon (139.178.89.65:54366). Feb 13 15:55:43.807436 sshd[4041]: Accepted publickey for core from 139.178.89.65 port 54366 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:55:43.812242 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:55:43.832940 systemd-logind[1451]: New session 11 of user core. Feb 13 15:55:43.837973 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:55:44.114199 sshd[4043]: Connection closed by 139.178.89.65 port 54366 Feb 13 15:55:44.120125 sshd-session[4041]: pam_unix(sshd:session): session closed for user core Feb 13 15:55:44.138863 systemd[1]: sshd@13-143.198.102.37:22-139.178.89.65:54366.service: Deactivated successfully. Feb 13 15:55:44.143010 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:55:44.149396 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:55:44.160201 systemd[1]: Started sshd@14-143.198.102.37:22-139.178.89.65:54368.service - OpenSSH per-connection server daemon (139.178.89.65:54368). 
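The recurring dns.go "Nameserver limits exceeded" errors stem from the C resolver honouring at most three nameserver entries (glibc's MAXNS=3): kubelet warns and truncates when the node's resolv.conf carries more, and the applied line even lists 67.207.67.2 twice, suggesting the source file duplicates it. A minimal sketch of that check, assuming the standard /etc/resolv.conf location:

// Mirror the check kubelet performs above: collect nameserver lines and
// warn when they exceed glibc's three-entry resolver limit.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNS = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNS {
		fmt.Printf("nameserver limit exceeded: %d entries, applying first %d: %s\n",
			len(servers), maxNS, strings.Join(servers[:maxNS], " "))
	} else {
		fmt.Printf("nameservers: %s\n", strings.Join(servers, " "))
	}
}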
Feb 13 15:55:44.170707 systemd-logind[1451]: Removed session 11. Feb 13 15:55:44.312175 sshd[4055]: Accepted publickey for core from 139.178.89.65 port 54368 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:55:44.319155 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:55:44.342886 systemd-logind[1451]: New session 12 of user core. Feb 13 15:55:44.360076 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:55:44.835126 sshd[4057]: Connection closed by 139.178.89.65 port 54368 Feb 13 15:55:44.837213 sshd-session[4055]: pam_unix(sshd:session): session closed for user core Feb 13 15:55:44.856956 systemd[1]: sshd@14-143.198.102.37:22-139.178.89.65:54368.service: Deactivated successfully. Feb 13 15:55:44.871104 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:55:44.873966 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:55:44.888203 systemd[1]: Started sshd@15-143.198.102.37:22-139.178.89.65:60396.service - OpenSSH per-connection server daemon (139.178.89.65:60396). Feb 13 15:55:44.893193 systemd-logind[1451]: Removed session 12. Feb 13 15:55:45.096660 sshd[4066]: Accepted publickey for core from 139.178.89.65 port 60396 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:55:45.101334 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:55:45.112267 systemd-logind[1451]: New session 13 of user core. Feb 13 15:55:45.123097 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:55:45.406847 sshd[4068]: Connection closed by 139.178.89.65 port 60396 Feb 13 15:55:45.407854 sshd-session[4066]: pam_unix(sshd:session): session closed for user core Feb 13 15:55:45.412982 systemd[1]: sshd@15-143.198.102.37:22-139.178.89.65:60396.service: Deactivated successfully. Feb 13 15:55:45.419413 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:55:45.424789 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:55:45.426740 systemd-logind[1451]: Removed session 13. Feb 13 15:55:45.723176 kubelet[2606]: E0213 15:55:45.723121 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:46.731389 kubelet[2606]: E0213 15:55:46.724667 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:46.733130 kubelet[2606]: E0213 15:55:46.732871 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:50.428342 systemd[1]: Started sshd@16-143.198.102.37:22-139.178.89.65:60408.service - OpenSSH per-connection server daemon (139.178.89.65:60408). Feb 13 15:55:50.530483 sshd[4080]: Accepted publickey for core from 139.178.89.65 port 60408 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:55:50.535135 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:55:50.547452 systemd-logind[1451]: New session 14 of user core. Feb 13 15:55:50.555557 systemd[1]: Started session-14.scope - Session 14 of User core. 
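Each accepted login above identifies the client key only by its SHA-256 fingerprint (SHA256:xbQMFx...). A short sketch, using golang.org/x/crypto/ssh, that computes the same form from an authorized_keys entry; the key file path is a placeholder, not the key from the log.

// Compute the "SHA256:..." fingerprint sshd logs on successful publickey
// authentication, from a local public key file.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	data, err := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa.pub"))
	if err != nil {
		log.Fatal(err)
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ssh.FingerprintSHA256(pub))
}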
Feb 13 15:55:50.883658 sshd[4082]: Connection closed by 139.178.89.65 port 60408 Feb 13 15:55:50.888013 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Feb 13 15:55:50.899081 systemd[1]: sshd@16-143.198.102.37:22-139.178.89.65:60408.service: Deactivated successfully. Feb 13 15:55:50.908603 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:55:50.914211 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:55:50.917531 systemd-logind[1451]: Removed session 14. Feb 13 15:55:51.724080 kubelet[2606]: E0213 15:55:51.722148 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:52.723652 kubelet[2606]: E0213 15:55:52.722555 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 15:55:55.932241 systemd[1]: Started sshd@17-143.198.102.37:22-139.178.89.65:51130.service - OpenSSH per-connection server daemon (139.178.89.65:51130). Feb 13 15:55:56.039836 sshd[4092]: Accepted publickey for core from 139.178.89.65 port 51130 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:55:56.042928 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:55:56.056449 systemd-logind[1451]: New session 15 of user core. Feb 13 15:55:56.075181 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:55:56.407650 sshd[4094]: Connection closed by 139.178.89.65 port 51130 Feb 13 15:55:56.408573 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Feb 13 15:55:56.416008 systemd[1]: sshd@17-143.198.102.37:22-139.178.89.65:51130.service: Deactivated successfully. Feb 13 15:55:56.423465 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:55:56.429043 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:55:56.433193 systemd-logind[1451]: Removed session 15. Feb 13 15:56:01.430446 systemd[1]: Started sshd@18-143.198.102.37:22-139.178.89.65:51142.service - OpenSSH per-connection server daemon (139.178.89.65:51142). Feb 13 15:56:01.636148 sshd[4105]: Accepted publickey for core from 139.178.89.65 port 51142 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:01.639865 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:01.649747 systemd-logind[1451]: New session 16 of user core. Feb 13 15:56:01.661232 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:56:02.161120 sshd[4107]: Connection closed by 139.178.89.65 port 51142 Feb 13 15:56:02.166786 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:02.183057 systemd[1]: sshd@18-143.198.102.37:22-139.178.89.65:51142.service: Deactivated successfully. Feb 13 15:56:02.190397 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:56:02.208179 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:56:02.219689 systemd[1]: Started sshd@19-143.198.102.37:22-139.178.89.65:51146.service - OpenSSH per-connection server daemon (139.178.89.65:51146). Feb 13 15:56:02.240315 systemd-logind[1451]: Removed session 16. 
Feb 13 15:56:02.393667 sshd[4118]: Accepted publickey for core from 139.178.89.65 port 51146 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:02.393026 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:02.413429 systemd-logind[1451]: New session 17 of user core. Feb 13 15:56:02.427852 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:56:03.280692 sshd[4120]: Connection closed by 139.178.89.65 port 51146 Feb 13 15:56:03.284435 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:03.300550 systemd[1]: sshd@19-143.198.102.37:22-139.178.89.65:51146.service: Deactivated successfully. Feb 13 15:56:03.304317 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:56:03.312832 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:56:03.364510 systemd[1]: Started sshd@20-143.198.102.37:22-139.178.89.65:51150.service - OpenSSH per-connection server daemon (139.178.89.65:51150). Feb 13 15:56:03.368232 systemd-logind[1451]: Removed session 17. Feb 13 15:56:03.570416 sshd[4129]: Accepted publickey for core from 139.178.89.65 port 51150 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:03.572482 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:03.584286 systemd-logind[1451]: New session 18 of user core. Feb 13 15:56:03.598635 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:56:07.183641 sshd[4131]: Connection closed by 139.178.89.65 port 51150 Feb 13 15:56:07.188183 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:07.210061 systemd[1]: sshd@20-143.198.102.37:22-139.178.89.65:51150.service: Deactivated successfully. Feb 13 15:56:07.222367 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:56:07.224469 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:56:07.234251 systemd-logind[1451]: Removed session 18. Feb 13 15:56:07.251004 systemd[1]: Started sshd@21-143.198.102.37:22-139.178.89.65:41380.service - OpenSSH per-connection server daemon (139.178.89.65:41380). Feb 13 15:56:07.427414 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 41380 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:07.431176 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:07.445705 systemd-logind[1451]: New session 19 of user core. Feb 13 15:56:07.453631 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:56:08.406424 sshd[4151]: Connection closed by 139.178.89.65 port 41380 Feb 13 15:56:08.409937 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:08.421093 systemd[1]: sshd@21-143.198.102.37:22-139.178.89.65:41380.service: Deactivated successfully. Feb 13 15:56:08.433295 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:56:08.436775 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:56:08.452412 systemd[1]: Started sshd@22-143.198.102.37:22-139.178.89.65:41396.service - OpenSSH per-connection server daemon (139.178.89.65:41396). Feb 13 15:56:08.458143 systemd-logind[1451]: Removed session 19. 
Feb 13 15:56:08.554025 sshd[4161]: Accepted publickey for core from 139.178.89.65 port 41396 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:08.557754 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:08.579642 systemd-logind[1451]: New session 20 of user core. Feb 13 15:56:08.585017 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:56:08.872623 sshd[4163]: Connection closed by 139.178.89.65 port 41396 Feb 13 15:56:08.873872 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:08.883958 systemd[1]: sshd@22-143.198.102.37:22-139.178.89.65:41396.service: Deactivated successfully. Feb 13 15:56:08.893136 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:56:08.898463 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:56:08.901177 systemd-logind[1451]: Removed session 20. Feb 13 15:56:13.903481 systemd[1]: Started sshd@23-143.198.102.37:22-139.178.89.65:41404.service - OpenSSH per-connection server daemon (139.178.89.65:41404). Feb 13 15:56:14.018708 sshd[4174]: Accepted publickey for core from 139.178.89.65 port 41404 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:14.019862 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:14.038914 systemd-logind[1451]: New session 21 of user core. Feb 13 15:56:14.050210 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:56:14.257735 sshd[4176]: Connection closed by 139.178.89.65 port 41404 Feb 13 15:56:14.259655 sshd-session[4174]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:14.264977 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:56:14.269180 systemd[1]: sshd@23-143.198.102.37:22-139.178.89.65:41404.service: Deactivated successfully. Feb 13 15:56:14.277996 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:56:14.281085 systemd-logind[1451]: Removed session 21. Feb 13 15:56:19.283074 systemd[1]: Started sshd@24-143.198.102.37:22-139.178.89.65:58510.service - OpenSSH per-connection server daemon (139.178.89.65:58510). Feb 13 15:56:19.311196 systemd[1]: Started sshd@25-143.198.102.37:22-218.92.0.157:27551.service - OpenSSH per-connection server daemon (218.92.0.157:27551). Feb 13 15:56:19.433817 sshd[4190]: Accepted publickey for core from 139.178.89.65 port 58510 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:19.437492 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:19.458245 systemd-logind[1451]: New session 22 of user core. Feb 13 15:56:19.468617 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:56:19.729465 sshd[4195]: Connection closed by 139.178.89.65 port 58510 Feb 13 15:56:19.731105 sshd-session[4190]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:19.741065 systemd[1]: sshd@24-143.198.102.37:22-139.178.89.65:58510.service: Deactivated successfully. Feb 13 15:56:19.750274 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:56:19.753997 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:56:19.756989 systemd-logind[1451]: Removed session 22. 
Feb 13 15:56:20.376558 sshd-session[4204]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Feb 13 15:56:21.716082 sshd[4192]: PAM: Permission denied for root from 218.92.0.157 Feb 13 15:56:21.993868 sshd-session[4205]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Feb 13 15:56:23.811807 sshd[4192]: PAM: Permission denied for root from 218.92.0.157 Feb 13 15:56:24.090846 sshd-session[4206]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Feb 13 15:56:24.756379 systemd[1]: Started sshd@26-143.198.102.37:22-139.178.89.65:47504.service - OpenSSH per-connection server daemon (139.178.89.65:47504). Feb 13 15:56:24.867235 sshd[4208]: Accepted publickey for core from 139.178.89.65 port 47504 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:24.870176 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:24.882113 systemd-logind[1451]: New session 23 of user core. Feb 13 15:56:24.893174 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:56:25.164772 sshd[4210]: Connection closed by 139.178.89.65 port 47504 Feb 13 15:56:25.165122 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:25.172014 systemd[1]: sshd@26-143.198.102.37:22-139.178.89.65:47504.service: Deactivated successfully. Feb 13 15:56:25.175846 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:56:25.181090 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:56:25.186265 systemd-logind[1451]: Removed session 23. Feb 13 15:56:25.842662 sshd[4192]: PAM: Permission denied for root from 218.92.0.157 Feb 13 15:56:25.982136 sshd[4192]: Received disconnect from 218.92.0.157 port 27551:11: [preauth] Feb 13 15:56:25.982136 sshd[4192]: Disconnected from authenticating user root 218.92.0.157 port 27551 [preauth] Feb 13 15:56:25.985248 systemd[1]: sshd@25-143.198.102.37:22-218.92.0.157:27551.service: Deactivated successfully. Feb 13 15:56:30.209574 systemd[1]: Started sshd@27-143.198.102.37:22-139.178.89.65:47520.service - OpenSSH per-connection server daemon (139.178.89.65:47520). Feb 13 15:56:30.299119 sshd[4225]: Accepted publickey for core from 139.178.89.65 port 47520 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:30.300835 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:30.311640 systemd-logind[1451]: New session 24 of user core. Feb 13 15:56:30.323962 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:56:30.650278 sshd[4227]: Connection closed by 139.178.89.65 port 47520 Feb 13 15:56:30.649660 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Feb 13 15:56:30.672062 systemd[1]: Started sshd@28-143.198.102.37:22-139.178.89.65:47534.service - OpenSSH per-connection server daemon (139.178.89.65:47534). Feb 13 15:56:30.672695 systemd[1]: sshd@27-143.198.102.37:22-139.178.89.65:47520.service: Deactivated successfully. Feb 13 15:56:30.681059 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:56:30.684478 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:56:30.691248 systemd-logind[1451]: Removed session 24. 
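The sshd@25 service above records a brute-force attempt from 218.92.0.157: repeated pam_unix authentication failures for root, then a preauth disconnect. A hypothetical triage helper that tallies such failures per source address when fed journal text on stdin; the regexp mirrors the pam_unix line format shown above.

// Count failed sshd password attempts per remote host from journal text,
// e.g.: journalctl -u 'sshd@*' | go run failtally.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`pam_unix\(sshd:auth\): authentication failure;.*rhost=(\S+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for host, n := range counts {
		fmt.Printf("%s\t%d failed attempts\n", host, n)
	}
}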
Feb 13 15:56:30.765227 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 47534 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4 Feb 13 15:56:30.768713 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:56:30.804138 systemd-logind[1451]: New session 25 of user core. Feb 13 15:56:30.813391 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:56:33.466653 containerd[1480]: time="2025-02-13T15:56:33.465952598Z" level=info msg="StopContainer for \"adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3\" with timeout 30 (s)" Feb 13 15:56:33.466653 containerd[1480]: time="2025-02-13T15:56:33.466557114Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:56:33.469051 containerd[1480]: time="2025-02-13T15:56:33.468546908Z" level=info msg="Stop container \"adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3\" with signal terminated" Feb 13 15:56:33.482622 containerd[1480]: time="2025-02-13T15:56:33.482379548Z" level=info msg="StopContainer for \"5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4\" with timeout 2 (s)" Feb 13 15:56:33.483650 containerd[1480]: time="2025-02-13T15:56:33.483416236Z" level=info msg="Stop container \"5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4\" with signal terminated" Feb 13 15:56:33.502934 systemd[1]: cri-containerd-adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3.scope: Deactivated successfully. Feb 13 15:56:33.510567 systemd-networkd[1369]: lxc_health: Link DOWN Feb 13 15:56:33.510599 systemd-networkd[1369]: lxc_health: Lost carrier Feb 13 15:56:33.556797 systemd[1]: cri-containerd-5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4.scope: Deactivated successfully. Feb 13 15:56:33.557054 systemd[1]: cri-containerd-5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4.scope: Consumed 12.885s CPU time. Feb 13 15:56:33.605178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3-rootfs.mount: Deactivated successfully. Feb 13 15:56:33.626345 containerd[1480]: time="2025-02-13T15:56:33.625755362Z" level=info msg="shim disconnected" id=adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3 namespace=k8s.io Feb 13 15:56:33.626345 containerd[1480]: time="2025-02-13T15:56:33.625886902Z" level=warning msg="cleaning up after shim disconnected" id=adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3 namespace=k8s.io Feb 13 15:56:33.626345 containerd[1480]: time="2025-02-13T15:56:33.625923274Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:56:33.658922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4-rootfs.mount: Deactivated successfully. 
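systemd's "Consumed 12.885s CPU time" for the long-running cilium container scope below is cgroup CPU accounting read back at scope teardown. A sketch of the same read for a live unit, assuming a unified (cgroup v2) hierarchy; the scope path is an illustrative placeholder, not taken from this node.

// Read usage_usec from a unit cgroup's cpu.stat and report it in the same
// seconds form systemd prints at scope deactivation.
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	const statPath = "/sys/fs/cgroup/system.slice/example.scope/cpu.stat"
	data, err := os.ReadFile(statPath)
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "usage_usec" {
			usec, _ := strconv.ParseInt(fields[1], 10, 64)
			fmt.Printf("consumed %.3fs CPU time\n", float64(usec)/1e6)
		}
	}
}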
Feb 13 15:56:33.676904 containerd[1480]: time="2025-02-13T15:56:33.676812159Z" level=info msg="shim disconnected" id=5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4 namespace=k8s.io Feb 13 15:56:33.677357 containerd[1480]: time="2025-02-13T15:56:33.677314538Z" level=warning msg="cleaning up after shim disconnected" id=5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4 namespace=k8s.io Feb 13 15:56:33.677505 containerd[1480]: time="2025-02-13T15:56:33.677482369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:56:33.677937 containerd[1480]: time="2025-02-13T15:56:33.677026812Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:56:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:56:33.697770 containerd[1480]: time="2025-02-13T15:56:33.697664846Z" level=info msg="StopContainer for \"adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3\" returns successfully" Feb 13 15:56:33.702977 containerd[1480]: time="2025-02-13T15:56:33.702896305Z" level=info msg="StopPodSandbox for \"b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045\"" Feb 13 15:56:33.719624 containerd[1480]: time="2025-02-13T15:56:33.719320456Z" level=info msg="Container to stop \"adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:56:33.731718 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045-shm.mount: Deactivated successfully. Feb 13 15:56:33.757397 systemd[1]: cri-containerd-b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045.scope: Deactivated successfully. 
Feb 13 15:56:33.775138 containerd[1480]: time="2025-02-13T15:56:33.774482398Z" level=info msg="StopContainer for \"5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4\" returns successfully" Feb 13 15:56:33.779533 containerd[1480]: time="2025-02-13T15:56:33.778360978Z" level=info msg="StopPodSandbox for \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\"" Feb 13 15:56:33.783128 containerd[1480]: time="2025-02-13T15:56:33.780900576Z" level=info msg="Container to stop \"041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:56:33.783128 containerd[1480]: time="2025-02-13T15:56:33.782635484Z" level=info msg="Container to stop \"16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:56:33.787124 containerd[1480]: time="2025-02-13T15:56:33.782675383Z" level=info msg="Container to stop \"beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:56:33.787124 containerd[1480]: time="2025-02-13T15:56:33.786709310Z" level=info msg="Container to stop \"5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:56:33.787124 containerd[1480]: time="2025-02-13T15:56:33.786743890Z" level=info msg="Container to stop \"816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:56:33.798227 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115-shm.mount: Deactivated successfully. Feb 13 15:56:33.826362 systemd[1]: cri-containerd-6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115.scope: Deactivated successfully. 
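At the containerd API level, the "Stop container ... with signal terminated" flow above is a SIGTERM delivered to the task, escalated to SIGKILL if the grace period lapses. A rough sketch against the containerd Go client (github.com/containerd/containerd); the container ID and the 2-second timeout come from the log, everything else is assumption.

// Stop a CRI-managed container the way the log's StopContainer does:
// SIGTERM first, SIGKILL after the timeout.
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	c, err := client.LoadContainer(ctx, "5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case <-exitCh:
		// exited within the grace period
	case <-time.After(2 * time.Second): // the "timeout 2 (s)" from the log
		_ = task.Kill(ctx, syscall.SIGKILL)
		<-exitCh
	}
}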
Feb 13 15:56:33.898980 containerd[1480]: time="2025-02-13T15:56:33.898691887Z" level=info msg="shim disconnected" id=b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045 namespace=k8s.io Feb 13 15:56:33.898980 containerd[1480]: time="2025-02-13T15:56:33.898775261Z" level=warning msg="cleaning up after shim disconnected" id=b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045 namespace=k8s.io Feb 13 15:56:33.898980 containerd[1480]: time="2025-02-13T15:56:33.898791279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:56:33.905085 containerd[1480]: time="2025-02-13T15:56:33.903694296Z" level=info msg="shim disconnected" id=6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115 namespace=k8s.io Feb 13 15:56:33.905085 containerd[1480]: time="2025-02-13T15:56:33.904549773Z" level=warning msg="cleaning up after shim disconnected" id=6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115 namespace=k8s.io Feb 13 15:56:33.905085 containerd[1480]: time="2025-02-13T15:56:33.904573982Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:56:33.935789 containerd[1480]: time="2025-02-13T15:56:33.935700170Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:56:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:56:33.939277 containerd[1480]: time="2025-02-13T15:56:33.939199031Z" level=info msg="TearDown network for sandbox \"b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045\" successfully" Feb 13 15:56:33.939277 containerd[1480]: time="2025-02-13T15:56:33.939247230Z" level=info msg="StopPodSandbox for \"b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045\" returns successfully" Feb 13 15:56:33.957327 kubelet[2606]: E0213 15:56:33.956815 2606 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:56:33.979858 containerd[1480]: time="2025-02-13T15:56:33.978899152Z" level=info msg="TearDown network for sandbox \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" successfully" Feb 13 15:56:33.979858 containerd[1480]: time="2025-02-13T15:56:33.978963405Z" level=info msg="StopPodSandbox for \"6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115\" returns successfully" Feb 13 15:56:33.988607 kubelet[2606]: I0213 15:56:33.987734 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c03bd812-9ac0-4a93-bc43-57dae27d8c03-cilium-config-path\") pod \"c03bd812-9ac0-4a93-bc43-57dae27d8c03\" (UID: \"c03bd812-9ac0-4a93-bc43-57dae27d8c03\") " Feb 13 15:56:33.988607 kubelet[2606]: I0213 15:56:33.987823 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpbqt\" (UniqueName: \"kubernetes.io/projected/c03bd812-9ac0-4a93-bc43-57dae27d8c03-kube-api-access-tpbqt\") pod \"c03bd812-9ac0-4a93-bc43-57dae27d8c03\" (UID: \"c03bd812-9ac0-4a93-bc43-57dae27d8c03\") " Feb 13 15:56:34.005086 kubelet[2606]: I0213 15:56:34.004997 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03bd812-9ac0-4a93-bc43-57dae27d8c03-kube-api-access-tpbqt" (OuterVolumeSpecName: "kube-api-access-tpbqt") pod "c03bd812-9ac0-4a93-bc43-57dae27d8c03" (UID: 
"c03bd812-9ac0-4a93-bc43-57dae27d8c03"). InnerVolumeSpecName "kube-api-access-tpbqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:56:34.014228 kubelet[2606]: I0213 15:56:34.014147 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03bd812-9ac0-4a93-bc43-57dae27d8c03-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c03bd812-9ac0-4a93-bc43-57dae27d8c03" (UID: "c03bd812-9ac0-4a93-bc43-57dae27d8c03"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:56:34.092123 kubelet[2606]: I0213 15:56:34.089018 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-bpf-maps\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.092123 kubelet[2606]: I0213 15:56:34.092003 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-config-path\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094373 kubelet[2606]: I0213 15:56:34.092514 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-xtables-lock\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094373 kubelet[2606]: I0213 15:56:34.092553 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-run\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094373 kubelet[2606]: I0213 15:56:34.092611 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-lib-modules\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094373 kubelet[2606]: I0213 15:56:34.092643 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-cgroup\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094373 kubelet[2606]: I0213 15:56:34.092678 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abc75907-2c0a-45ef-bc33-5628b5f5ec61-clustermesh-secrets\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094373 kubelet[2606]: I0213 15:56:34.092706 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-host-proc-sys-kernel\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094932 kubelet[2606]: I0213 15:56:34.092737 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hostproc\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094932 kubelet[2606]: I0213 15:56:34.092771 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hubble-tls\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094932 kubelet[2606]: I0213 15:56:34.092800 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c8qz\" (UniqueName: \"kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-kube-api-access-8c8qz\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094932 kubelet[2606]: I0213 15:56:34.092834 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cni-path\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094932 kubelet[2606]: I0213 15:56:34.092859 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-host-proc-sys-net\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.094932 kubelet[2606]: I0213 15:56:34.092888 2606 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-etc-cni-netd\") pod \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\" (UID: \"abc75907-2c0a-45ef-bc33-5628b5f5ec61\") " Feb 13 15:56:34.095950 kubelet[2606]: I0213 15:56:34.092960 2606 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c03bd812-9ac0-4a93-bc43-57dae27d8c03-cilium-config-path\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.095950 kubelet[2606]: I0213 15:56:34.092977 2606 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tpbqt\" (UniqueName: \"kubernetes.io/projected/c03bd812-9ac0-4a93-bc43-57dae27d8c03-kube-api-access-tpbqt\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.095950 kubelet[2606]: I0213 15:56:34.093051 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.095950 kubelet[2606]: I0213 15:56:34.093106 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.095950 kubelet[2606]: I0213 15:56:34.094635 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.096511 kubelet[2606]: I0213 15:56:34.094687 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.096511 kubelet[2606]: I0213 15:56:34.094714 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.096511 kubelet[2606]: I0213 15:56:34.094739 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.096511 kubelet[2606]: I0213 15:56:34.094761 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.096511 kubelet[2606]: I0213 15:56:34.095324 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hostproc" (OuterVolumeSpecName: "hostproc") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.097540 kubelet[2606]: I0213 15:56:34.096962 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cni-path" (OuterVolumeSpecName: "cni-path") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.098612 kubelet[2606]: I0213 15:56:34.097958 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:56:34.101865 kubelet[2606]: I0213 15:56:34.101796 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc75907-2c0a-45ef-bc33-5628b5f5ec61-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:56:34.102764 kubelet[2606]: I0213 15:56:34.102705 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:56:34.106211 kubelet[2606]: I0213 15:56:34.105986 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-kube-api-access-8c8qz" (OuterVolumeSpecName: "kube-api-access-8c8qz") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "kube-api-access-8c8qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:56:34.106662 kubelet[2606]: I0213 15:56:34.106546 2606 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "abc75907-2c0a-45ef-bc33-5628b5f5ec61" (UID: "abc75907-2c0a-45ef-bc33-5628b5f5ec61"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:56:34.193934 kubelet[2606]: I0213 15:56:34.193577 2606 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-run\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.193934 kubelet[2606]: I0213 15:56:34.193696 2606 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-config-path\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.193934 kubelet[2606]: I0213 15:56:34.193714 2606 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-xtables-lock\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.193934 kubelet[2606]: I0213 15:56:34.193730 2606 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-lib-modules\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.193934 kubelet[2606]: I0213 15:56:34.193746 2606 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cilium-cgroup\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.193934 kubelet[2606]: I0213 15:56:34.193764 2606 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/abc75907-2c0a-45ef-bc33-5628b5f5ec61-clustermesh-secrets\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.193934 kubelet[2606]: I0213 15:56:34.193779 2606 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-host-proc-sys-kernel\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.193934 kubelet[2606]: I0213 15:56:34.193798 2606 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8c8qz\" (UniqueName: \"kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-kube-api-access-8c8qz\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.194464 kubelet[2606]: I0213 15:56:34.193813 2606 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hostproc\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.194464 kubelet[2606]: I0213 15:56:34.193827 2606 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/abc75907-2c0a-45ef-bc33-5628b5f5ec61-hubble-tls\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.194464 kubelet[2606]: I0213 15:56:34.193845 2606 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-cni-path\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.194464 kubelet[2606]: I0213 15:56:34.193857 2606 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-host-proc-sys-net\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.194464 kubelet[2606]: I0213 15:56:34.193870 2606 
reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-etc-cni-netd\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.194464 kubelet[2606]: I0213 15:56:34.193889 2606 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/abc75907-2c0a-45ef-bc33-5628b5f5ec61-bpf-maps\") on node \"ci-4186.1.1-d-137a032ec7\" DevicePath \"\"" Feb 13 15:56:34.393915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6009f8fc1f4f20431c332d0fe04c504b6b9d4129d93c4e99e7d99b6faddc5115-rootfs.mount: Deactivated successfully. Feb 13 15:56:34.394771 systemd[1]: var-lib-kubelet-pods-abc75907\x2d2c0a\x2d45ef\x2dbc33\x2d5628b5f5ec61-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8c8qz.mount: Deactivated successfully. Feb 13 15:56:34.394983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b14398c5ff7cc0667566afe5fd364444cfdeb446b3b40af04385f1497c932045-rootfs.mount: Deactivated successfully. Feb 13 15:56:34.395621 systemd[1]: var-lib-kubelet-pods-abc75907\x2d2c0a\x2d45ef\x2dbc33\x2d5628b5f5ec61-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:56:34.395887 systemd[1]: var-lib-kubelet-pods-c03bd812\x2d9ac0\x2d4a93\x2dbc43\x2d57dae27d8c03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtpbqt.mount: Deactivated successfully. Feb 13 15:56:34.396087 systemd[1]: var-lib-kubelet-pods-abc75907\x2d2c0a\x2d45ef\x2dbc33\x2d5628b5f5ec61-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:56:34.744708 systemd[1]: Removed slice kubepods-burstable-podabc75907_2c0a_45ef_bc33_5628b5f5ec61.slice - libcontainer container kubepods-burstable-podabc75907_2c0a_45ef_bc33_5628b5f5ec61.slice. Feb 13 15:56:34.745269 systemd[1]: kubepods-burstable-podabc75907_2c0a_45ef_bc33_5628b5f5ec61.slice: Consumed 13.025s CPU time. Feb 13 15:56:34.763720 systemd[1]: Removed slice kubepods-besteffort-podc03bd812_9ac0_4a93_bc43_57dae27d8c03.slice - libcontainer container kubepods-besteffort-podc03bd812_9ac0_4a93_bc43_57dae27d8c03.slice. 
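The var-lib-kubelet-... mount unit names above are systemd's escaped form of the kubelet volume paths: "/" maps to "-", and reserved bytes such as "-" and "~" become \x2d and \x7e. A minimal Go sketch of that escaping, under a simplified rule set (the authoritative behavior is systemd-escape --path / unit_name_path_escape, which handles more edge cases such as a leading dot):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates systemd's path escaping as seen in the mount-unit
// names above: trim slashes at both ends, map '/' to '-', keep
// [a-zA-Z0-9:_.], and hex-escape everything else (so '-' -> \x2d, '~' -> \x7e).
// This is a simplified sketch, not a drop-in replacement for systemd-escape.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Should print a unit name close to the hubble-tls mount logged above.
	fmt.Println(escapePath("/var/lib/kubelet/pods/abc75907-2c0a-45ef-bc33-5628b5f5ec61/volumes/kubernetes.io~projected/hubble-tls") + ".mount")
}
```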
Feb 13 15:56:34.873057 kubelet[2606]: I0213 15:56:34.872173 2606 scope.go:117] "RemoveContainer" containerID="adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3"
Feb 13 15:56:34.903984 containerd[1480]: time="2025-02-13T15:56:34.903217868Z" level=info msg="RemoveContainer for \"adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3\""
Feb 13 15:56:34.916502 containerd[1480]: time="2025-02-13T15:56:34.916418752Z" level=info msg="RemoveContainer for \"adab2ac83dcb871b20374c05961f14c185bf27e29b081e7b815e5a86a20d82b3\" returns successfully"
Feb 13 15:56:34.918114 kubelet[2606]: I0213 15:56:34.916951 2606 scope.go:117] "RemoveContainer" containerID="5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4"
Feb 13 15:56:34.921235 containerd[1480]: time="2025-02-13T15:56:34.921137365Z" level=info msg="RemoveContainer for \"5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4\""
Feb 13 15:56:34.932477 containerd[1480]: time="2025-02-13T15:56:34.932401091Z" level=info msg="RemoveContainer for \"5de79c107c1a9f8549256e4afbc7b8afdbadf744d3f19d2822d15446f87cb8d4\" returns successfully"
Feb 13 15:56:34.933280 kubelet[2606]: I0213 15:56:34.932844 2606 scope.go:117] "RemoveContainer" containerID="beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f"
Feb 13 15:56:34.935989 containerd[1480]: time="2025-02-13T15:56:34.935901095Z" level=info msg="RemoveContainer for \"beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f\""
Feb 13 15:56:34.942632 containerd[1480]: time="2025-02-13T15:56:34.942530180Z" level=info msg="RemoveContainer for \"beda87396b1ada3c4b89cd6016993b85e61e349dcc93b450823f7881ec19a17f\" returns successfully"
Feb 13 15:56:34.944360 kubelet[2606]: I0213 15:56:34.943500 2606 scope.go:117] "RemoveContainer" containerID="16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004"
Feb 13 15:56:34.951930 containerd[1480]: time="2025-02-13T15:56:34.951871167Z" level=info msg="RemoveContainer for \"16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004\""
Feb 13 15:56:34.959052 containerd[1480]: time="2025-02-13T15:56:34.958975587Z" level=info msg="RemoveContainer for \"16ca2f05c6ed1ef544a4897387c27e65d605d28d7870326abfb3c59cbe123004\" returns successfully"
Feb 13 15:56:34.959575 kubelet[2606]: I0213 15:56:34.959479 2606 scope.go:117] "RemoveContainer" containerID="041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232"
Feb 13 15:56:34.961867 containerd[1480]: time="2025-02-13T15:56:34.961817179Z" level=info msg="RemoveContainer for \"041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232\""
Feb 13 15:56:34.972191 containerd[1480]: time="2025-02-13T15:56:34.969332806Z" level=info msg="RemoveContainer for \"041ff95f57ecb56d5332d34961ae9b0f046f57fbc5b132ef63b9f6ccd018b232\" returns successfully"
Feb 13 15:56:34.972449 kubelet[2606]: I0213 15:56:34.970012 2606 scope.go:117] "RemoveContainer" containerID="816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7"
Feb 13 15:56:34.976624 containerd[1480]: time="2025-02-13T15:56:34.976541754Z" level=info msg="RemoveContainer for \"816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7\""
Feb 13 15:56:34.985190 containerd[1480]: time="2025-02-13T15:56:34.985004211Z" level=info msg="RemoveContainer for \"816a4d57a1256fd13b921cecc73151a842603bcad3fe14819156105f4704caa7\" returns successfully"
Feb 13 15:56:35.216502 sshd[4239]: Connection closed by 139.178.89.65 port 47534
Feb 13 15:56:35.218698 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Feb 13 15:56:35.245638 systemd[1]: sshd@28-143.198.102.37:22-139.178.89.65:47534.service: Deactivated successfully.
Feb 13 15:56:35.249197 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:56:35.249602 systemd[1]: session-25.scope: Consumed 1.310s CPU time.
Feb 13 15:56:35.251973 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:56:35.278563 systemd[1]: Started sshd@29-143.198.102.37:22-139.178.89.65:41962.service - OpenSSH per-connection server daemon (139.178.89.65:41962).
Feb 13 15:56:35.282775 systemd-logind[1451]: Removed session 25.
Feb 13 15:56:35.418755 sshd[4400]: Accepted publickey for core from 139.178.89.65 port 41962 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:56:35.420056 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:56:35.446789 systemd-logind[1451]: New session 26 of user core.
Feb 13 15:56:35.458339 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:56:36.729184 kubelet[2606]: I0213 15:56:36.729124 2606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc75907-2c0a-45ef-bc33-5628b5f5ec61" path="/var/lib/kubelet/pods/abc75907-2c0a-45ef-bc33-5628b5f5ec61/volumes"
Feb 13 15:56:36.732391 kubelet[2606]: I0213 15:56:36.732306 2606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03bd812-9ac0-4a93-bc43-57dae27d8c03" path="/var/lib/kubelet/pods/c03bd812-9ac0-4a93-bc43-57dae27d8c03/volumes"
Feb 13 15:56:37.105784 sshd[4402]: Connection closed by 139.178.89.65 port 41962
Feb 13 15:56:37.105432 sshd-session[4400]: pam_unix(sshd:session): session closed for user core
Feb 13 15:56:37.134372 systemd[1]: sshd@29-143.198.102.37:22-139.178.89.65:41962.service: Deactivated successfully.
Feb 13 15:56:37.142283 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:56:37.149746 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:56:37.154368 systemd-logind[1451]: Removed session 26.
Feb 13 15:56:37.176842 systemd[1]: Started sshd@30-143.198.102.37:22-139.178.89.65:41964.service - OpenSSH per-connection server daemon (139.178.89.65:41964).
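Every entry in this journal follows a fixed "timestamp unit[pid]: message" shape. A hedged Go sketch for splitting such lines into fields; the regular expression is an assumption fitted to this log's format (no hostname column, microsecond timestamps), not a general journal parser:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// entryRE matches lines like
//   "Feb 13 15:56:35.245638 systemd[1]: sshd@28-...: Deactivated successfully."
// capturing timestamp, unit name, optional pid, and message.
var entryRE = regexp.MustCompile(
	`^([A-Z][a-z]{2} +\d+ \d{2}:\d{2}:\d{2}\.\d+) (\S+?)(?:\[(\d+)\])?: (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be very long
	for sc.Scan() {
		if m := entryRE.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("ts=%s unit=%s pid=%s msg=%.60s\n", m[1], m[2], m[3], m[4])
		}
	}
}
```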
Feb 13 15:56:37.215660 kubelet[2606]: E0213 15:56:37.213865 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abc75907-2c0a-45ef-bc33-5628b5f5ec61" containerName="mount-cgroup"
Feb 13 15:56:37.215660 kubelet[2606]: E0213 15:56:37.213924 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abc75907-2c0a-45ef-bc33-5628b5f5ec61" containerName="mount-bpf-fs"
Feb 13 15:56:37.215660 kubelet[2606]: E0213 15:56:37.213934 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abc75907-2c0a-45ef-bc33-5628b5f5ec61" containerName="cilium-agent"
Feb 13 15:56:37.215660 kubelet[2606]: E0213 15:56:37.213944 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c03bd812-9ac0-4a93-bc43-57dae27d8c03" containerName="cilium-operator"
Feb 13 15:56:37.215660 kubelet[2606]: E0213 15:56:37.213953 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abc75907-2c0a-45ef-bc33-5628b5f5ec61" containerName="apply-sysctl-overwrites"
Feb 13 15:56:37.215660 kubelet[2606]: E0213 15:56:37.213962 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="abc75907-2c0a-45ef-bc33-5628b5f5ec61" containerName="clean-cilium-state"
Feb 13 15:56:37.215660 kubelet[2606]: I0213 15:56:37.214006 2606 memory_manager.go:354] "RemoveStaleState removing state" podUID="c03bd812-9ac0-4a93-bc43-57dae27d8c03" containerName="cilium-operator"
Feb 13 15:56:37.215660 kubelet[2606]: I0213 15:56:37.214018 2606 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc75907-2c0a-45ef-bc33-5628b5f5ec61" containerName="cilium-agent"
Feb 13 15:56:37.303458 sshd[4413]: Accepted publickey for core from 139.178.89.65 port 41964 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:56:37.306573 systemd[1]: Created slice kubepods-burstable-pod4c691f1c_8e30_4b2a_b86a_d097c19de354.slice - libcontainer container kubepods-burstable-pod4c691f1c_8e30_4b2a_b86a_d097c19de354.slice.
Feb 13 15:56:37.310676 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:56:37.330890 systemd-logind[1451]: New session 27 of user core.
Feb 13 15:56:37.337966 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:56:37.363121 kubelet[2606]: I0213 15:56:37.362931 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcgkc\" (UniqueName: \"kubernetes.io/projected/4c691f1c-8e30-4b2a-b86a-d097c19de354-kube-api-access-qcgkc\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.364631 kubelet[2606]: I0213 15:56:37.363366 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-hostproc\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.364631 kubelet[2606]: I0213 15:56:37.363405 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c691f1c-8e30-4b2a-b86a-d097c19de354-clustermesh-secrets\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.364631 kubelet[2606]: I0213 15:56:37.363439 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-host-proc-sys-net\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.364631 kubelet[2606]: I0213 15:56:37.363479 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-etc-cni-netd\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.364631 kubelet[2606]: I0213 15:56:37.363511 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-lib-modules\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.364631 kubelet[2606]: I0213 15:56:37.363627 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-xtables-lock\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.365045 kubelet[2606]: I0213 15:56:37.363659 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4c691f1c-8e30-4b2a-b86a-d097c19de354-cilium-ipsec-secrets\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.365045 kubelet[2606]: I0213 15:56:37.363731 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-host-proc-sys-kernel\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.365045 kubelet[2606]: I0213 15:56:37.363774 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-cilium-run\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.365045 kubelet[2606]: I0213 15:56:37.363811 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-cilium-cgroup\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.365045 kubelet[2606]: I0213 15:56:37.363840 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-cni-path\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.365045 kubelet[2606]: I0213 15:56:37.363873 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c691f1c-8e30-4b2a-b86a-d097c19de354-hubble-tls\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.365262 kubelet[2606]: I0213 15:56:37.363904 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c691f1c-8e30-4b2a-b86a-d097c19de354-bpf-maps\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.365262 kubelet[2606]: I0213 15:56:37.363942 2606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c691f1c-8e30-4b2a-b86a-d097c19de354-cilium-config-path\") pod \"cilium-bwk2p\" (UID: \"4c691f1c-8e30-4b2a-b86a-d097c19de354\") " pod="kube-system/cilium-bwk2p"
Feb 13 15:56:37.423888 sshd[4415]: Connection closed by 139.178.89.65 port 41964
Feb 13 15:56:37.426089 sshd-session[4413]: pam_unix(sshd:session): session closed for user core
Feb 13 15:56:37.442208 systemd[1]: sshd@30-143.198.102.37:22-139.178.89.65:41964.service: Deactivated successfully.
Feb 13 15:56:37.445677 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:56:37.449623 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:56:37.453298 systemd-logind[1451]: Removed session 27.
Feb 13 15:56:37.466193 systemd[1]: Started sshd@31-143.198.102.37:22-139.178.89.65:41968.service - OpenSSH per-connection server daemon (139.178.89.65:41968).
Feb 13 15:56:37.614647 kubelet[2606]: E0213 15:56:37.614429 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:37.618079 containerd[1480]: time="2025-02-13T15:56:37.616824140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bwk2p,Uid:4c691f1c-8e30-4b2a-b86a-d097c19de354,Namespace:kube-system,Attempt:0,}"
Feb 13 15:56:37.621377 sshd[4421]: Accepted publickey for core from 139.178.89.65 port 41968 ssh2: RSA SHA256:xbQMFxKGhsFroWszVX4n07fPkTy8VMnJgGT8GFjL/e4
Feb 13 15:56:37.627098 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:56:37.663885 systemd-logind[1451]: New session 28 of user core.
Feb 13 15:56:37.706516 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:56:37.737851 containerd[1480]: time="2025-02-13T15:56:37.737429007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:56:37.737851 containerd[1480]: time="2025-02-13T15:56:37.737548958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:56:37.737851 containerd[1480]: time="2025-02-13T15:56:37.737572650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:56:37.737851 containerd[1480]: time="2025-02-13T15:56:37.737768505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:56:37.783490 systemd[1]: Started cri-containerd-2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb.scope - libcontainer container 2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb.
Feb 13 15:56:37.924565 containerd[1480]: time="2025-02-13T15:56:37.922253035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bwk2p,Uid:4c691f1c-8e30-4b2a-b86a-d097c19de354,Namespace:kube-system,Attempt:0,} returns sandbox id \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\""
Feb 13 15:56:37.925731 kubelet[2606]: E0213 15:56:37.925682 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:37.938726 containerd[1480]: time="2025-02-13T15:56:37.938629599Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:56:37.998860 containerd[1480]: time="2025-02-13T15:56:37.998000764Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a7fbd584e394facaabe7a184e26197711b3dd5061b9b7785793ff520cfbc5b17\""
Feb 13 15:56:38.002376 containerd[1480]: time="2025-02-13T15:56:38.001494599Z" level=info msg="StartContainer for \"a7fbd584e394facaabe7a184e26197711b3dd5061b9b7785793ff520cfbc5b17\""
Feb 13 15:56:38.165641 systemd[1]: Started cri-containerd-a7fbd584e394facaabe7a184e26197711b3dd5061b9b7785793ff520cfbc5b17.scope - libcontainer container a7fbd584e394facaabe7a184e26197711b3dd5061b9b7785793ff520cfbc5b17.
Feb 13 15:56:38.289821 containerd[1480]: time="2025-02-13T15:56:38.282993727Z" level=info msg="StartContainer for \"a7fbd584e394facaabe7a184e26197711b3dd5061b9b7785793ff520cfbc5b17\" returns successfully"
Feb 13 15:56:38.313882 systemd[1]: cri-containerd-a7fbd584e394facaabe7a184e26197711b3dd5061b9b7785793ff520cfbc5b17.scope: Deactivated successfully.
Feb 13 15:56:38.418889 containerd[1480]: time="2025-02-13T15:56:38.418075666Z" level=info msg="shim disconnected" id=a7fbd584e394facaabe7a184e26197711b3dd5061b9b7785793ff520cfbc5b17 namespace=k8s.io
Feb 13 15:56:38.418889 containerd[1480]: time="2025-02-13T15:56:38.418155158Z" level=warning msg="cleaning up after shim disconnected" id=a7fbd584e394facaabe7a184e26197711b3dd5061b9b7785793ff520cfbc5b17 namespace=k8s.io
Feb 13 15:56:38.418889 containerd[1480]: time="2025-02-13T15:56:38.418169442Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:38.922540 kubelet[2606]: E0213 15:56:38.922404 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:38.927695 containerd[1480]: time="2025-02-13T15:56:38.927156129Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:56:38.956409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673559186.mount: Deactivated successfully.
Feb 13 15:56:38.966155 kubelet[2606]: E0213 15:56:38.966104 2606 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:56:38.980918 containerd[1480]: time="2025-02-13T15:56:38.980703591Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23\""
Feb 13 15:56:38.984811 containerd[1480]: time="2025-02-13T15:56:38.984628604Z" level=info msg="StartContainer for \"052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23\""
Feb 13 15:56:39.067133 systemd[1]: Started cri-containerd-052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23.scope - libcontainer container 052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23.
Feb 13 15:56:39.130434 containerd[1480]: time="2025-02-13T15:56:39.130259684Z" level=info msg="StartContainer for \"052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23\" returns successfully"
Feb 13 15:56:39.143161 systemd[1]: cri-containerd-052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23.scope: Deactivated successfully.
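The Started/Deactivated cri-containerd-*.scope pairs and "shim disconnected" messages above trace one container lifecycle under containerd. A sketch of the same create/start/wait sequence against containerd's Go client in the k8s.io namespace seen in the log; the image reference and container/snapshot IDs are illustrative assumptions, not values from this log:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Hypothetical image reference; assumes it is already pulled
	// (use client.Pull otherwise).
	image, err := client.GetImage(ctx, "quay.io/cilium/cilium:v1.12.5")
	if err != nil {
		log.Fatal(err)
	}
	container, err := client.NewContainer(ctx, "example-mount-cgroup",
		containerd.WithNewSnapshot("example-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask starts the runc shim whose rootfs mounts appear in the
	// systemd lines above; Wait is set up before Start to avoid racing
	// a short-lived init container's exit.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	status, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	<-status // block until the container exits ("shim disconnected" in the log)
}
```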
Feb 13 15:56:39.198770 containerd[1480]: time="2025-02-13T15:56:39.197866427Z" level=info msg="shim disconnected" id=052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23 namespace=k8s.io
Feb 13 15:56:39.198770 containerd[1480]: time="2025-02-13T15:56:39.197953248Z" level=warning msg="cleaning up after shim disconnected" id=052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23 namespace=k8s.io
Feb 13 15:56:39.198770 containerd[1480]: time="2025-02-13T15:56:39.197963985Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:39.488448 systemd[1]: run-containerd-runc-k8s.io-052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23-runc.vYI1yr.mount: Deactivated successfully.
Feb 13 15:56:39.490869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-052c77bdddf9e07c431bf1e2012aff2203263ac0fe515a00bbb1c368bb3d3e23-rootfs.mount: Deactivated successfully.
Feb 13 15:56:39.929314 kubelet[2606]: E0213 15:56:39.928896 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:39.937214 containerd[1480]: time="2025-02-13T15:56:39.936524642Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:56:39.976386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040406374.mount: Deactivated successfully.
Feb 13 15:56:40.002711 containerd[1480]: time="2025-02-13T15:56:40.002533067Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d295a25811015b1f46fbf1258f4005ed849a081a0df00ee99f432be52b6fddc\""
Feb 13 15:56:40.006727 containerd[1480]: time="2025-02-13T15:56:40.004520780Z" level=info msg="StartContainer for \"5d295a25811015b1f46fbf1258f4005ed849a081a0df00ee99f432be52b6fddc\""
Feb 13 15:56:40.101564 systemd[1]: Started cri-containerd-5d295a25811015b1f46fbf1258f4005ed849a081a0df00ee99f432be52b6fddc.scope - libcontainer container 5d295a25811015b1f46fbf1258f4005ed849a081a0df00ee99f432be52b6fddc.
Feb 13 15:56:40.196767 containerd[1480]: time="2025-02-13T15:56:40.196508805Z" level=info msg="StartContainer for \"5d295a25811015b1f46fbf1258f4005ed849a081a0df00ee99f432be52b6fddc\" returns successfully"
Feb 13 15:56:40.226888 systemd[1]: cri-containerd-5d295a25811015b1f46fbf1258f4005ed849a081a0df00ee99f432be52b6fddc.scope: Deactivated successfully.
Feb 13 15:56:40.330909 containerd[1480]: time="2025-02-13T15:56:40.327935719Z" level=info msg="shim disconnected" id=5d295a25811015b1f46fbf1258f4005ed849a081a0df00ee99f432be52b6fddc namespace=k8s.io
Feb 13 15:56:40.330909 containerd[1480]: time="2025-02-13T15:56:40.328025389Z" level=warning msg="cleaning up after shim disconnected" id=5d295a25811015b1f46fbf1258f4005ed849a081a0df00ee99f432be52b6fddc namespace=k8s.io
Feb 13 15:56:40.330909 containerd[1480]: time="2025-02-13T15:56:40.328037827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:40.493055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d295a25811015b1f46fbf1258f4005ed849a081a0df00ee99f432be52b6fddc-rootfs.mount: Deactivated successfully.
Feb 13 15:56:40.940776 kubelet[2606]: E0213 15:56:40.937426 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:40.958003 containerd[1480]: time="2025-02-13T15:56:40.957454350Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:56:41.081453 containerd[1480]: time="2025-02-13T15:56:41.081221356Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0\""
Feb 13 15:56:41.084447 containerd[1480]: time="2025-02-13T15:56:41.084250787Z" level=info msg="StartContainer for \"9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0\""
Feb 13 15:56:41.195788 systemd[1]: Started cri-containerd-9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0.scope - libcontainer container 9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0.
Feb 13 15:56:41.311084 containerd[1480]: time="2025-02-13T15:56:41.310718362Z" level=info msg="StartContainer for \"9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0\" returns successfully"
Feb 13 15:56:41.315223 systemd[1]: cri-containerd-9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0.scope: Deactivated successfully.
Feb 13 15:56:41.407714 containerd[1480]: time="2025-02-13T15:56:41.407443359Z" level=info msg="shim disconnected" id=9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0 namespace=k8s.io
Feb 13 15:56:41.407714 containerd[1480]: time="2025-02-13T15:56:41.407665125Z" level=warning msg="cleaning up after shim disconnected" id=9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0 namespace=k8s.io
Feb 13 15:56:41.407714 containerd[1480]: time="2025-02-13T15:56:41.407681467Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:56:41.492725 systemd[1]: run-containerd-runc-k8s.io-9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0-runc.KNVVee.mount: Deactivated successfully.
Feb 13 15:56:41.493084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b2acbe8349e817a2d60f6af654ff3218fee2c19aa261968923a8f709a574ff0-rootfs.mount: Deactivated successfully.
Feb 13 15:56:41.947059 kubelet[2606]: E0213 15:56:41.944217 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:41.949676 containerd[1480]: time="2025-02-13T15:56:41.948068008Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:56:41.994120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236276617.mount: Deactivated successfully.
Feb 13 15:56:42.011730 containerd[1480]: time="2025-02-13T15:56:42.010531080Z" level=info msg="CreateContainer within sandbox \"2af0ab2cbdf6c7d921d5673f1bb68efe3c66fd3465ff4cc2cc7b381373dadddb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"913039ce86b2a46669c6dac41ea92c513e33f791493cae1ab03e625e78f97b84\""
Feb 13 15:56:42.015739 containerd[1480]: time="2025-02-13T15:56:42.014046812Z" level=info msg="StartContainer for \"913039ce86b2a46669c6dac41ea92c513e33f791493cae1ab03e625e78f97b84\""
Feb 13 15:56:42.018722 kubelet[2606]: I0213 15:56:42.018657 2606 setters.go:600] "Node became not ready" node="ci-4186.1.1-d-137a032ec7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:56:42Z","lastTransitionTime":"2025-02-13T15:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:56:42.145501 systemd[1]: Started cri-containerd-913039ce86b2a46669c6dac41ea92c513e33f791493cae1ab03e625e78f97b84.scope - libcontainer container 913039ce86b2a46669c6dac41ea92c513e33f791493cae1ab03e625e78f97b84.
Feb 13 15:56:42.236867 containerd[1480]: time="2025-02-13T15:56:42.236781324Z" level=info msg="StartContainer for \"913039ce86b2a46669c6dac41ea92c513e33f791493cae1ab03e625e78f97b84\" returns successfully"
Feb 13 15:56:42.968767 kubelet[2606]: E0213 15:56:42.968273 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:43.338716 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:56:43.967492 kubelet[2606]: E0213 15:56:43.967419 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:47.432347 systemd[1]: run-containerd-runc-k8s.io-913039ce86b2a46669c6dac41ea92c513e33f791493cae1ab03e625e78f97b84-runc.ieiZIq.mount: Deactivated successfully.
Feb 13 15:56:48.729158 kubelet[2606]: E0213 15:56:48.728541 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:50.070459 systemd-networkd[1369]: lxc_health: Link UP
Feb 13 15:56:50.082002 systemd-networkd[1369]: lxc_health: Gained carrier
Feb 13 15:56:51.142777 systemd-networkd[1369]: lxc_health: Gained IPv6LL
Feb 13 15:56:51.622719 kubelet[2606]: E0213 15:56:51.622653 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:51.671321 kubelet[2606]: I0213 15:56:51.670946 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bwk2p" podStartSLOduration=14.670909697 podStartE2EDuration="14.670909697s" podCreationTimestamp="2025-02-13 15:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:56:43.015850715 +0000 UTC m=+134.778043762" watchObservedRunningTime="2025-02-13 15:56:51.670909697 +0000 UTC m=+143.433102734"
Feb 13 15:56:52.061036 kubelet[2606]: E0213 15:56:52.060969 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:53.066481 kubelet[2606]: E0213 15:56:53.065155 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:53.729530 kubelet[2606]: E0213 15:56:53.727180 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:54.733340 systemd[1]: run-containerd-runc-k8s.io-913039ce86b2a46669c6dac41ea92c513e33f791493cae1ab03e625e78f97b84-runc.3n0Nts.mount: Deactivated successfully.
Feb 13 15:56:55.726405 kubelet[2606]: E0213 15:56:55.726350 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 15:56:57.130198 systemd[1]: run-containerd-runc-k8s.io-913039ce86b2a46669c6dac41ea92c513e33f791493cae1ab03e625e78f97b84-runc.PHDQib.mount: Deactivated successfully.
Feb 13 15:56:57.263832 sshd[4435]: Connection closed by 139.178.89.65 port 41968
Feb 13 15:56:57.269242 sshd-session[4421]: pam_unix(sshd:session): session closed for user core
Feb 13 15:56:57.278797 systemd[1]: sshd@31-143.198.102.37:22-139.178.89.65:41968.service: Deactivated successfully.
Feb 13 15:56:57.290795 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:56:57.300041 systemd-logind[1451]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:56:57.307008 systemd-logind[1451]: Removed session 28.