Dec 13 09:10:43.012920 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 09:10:43.012948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:10:43.012962 kernel: BIOS-provided physical RAM map:
Dec 13 09:10:43.012970 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 09:10:43.012977 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 09:10:43.012984 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 09:10:43.012993 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Dec 13 09:10:43.013000 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Dec 13 09:10:43.013006 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 09:10:43.013015 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 09:10:43.013022 kernel: NX (Execute Disable) protection: active
Dec 13 09:10:43.013028 kernel: APIC: Static calls initialized
Dec 13 09:10:43.013035 kernel: SMBIOS 2.8 present.
Dec 13 09:10:43.013041 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Dec 13 09:10:43.013049 kernel: Hypervisor detected: KVM
Dec 13 09:10:43.013059 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 09:10:43.013066 kernel: kvm-clock: using sched offset of 4228852961 cycles
Dec 13 09:10:43.013074 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 09:10:43.013081 kernel: tsc: Detected 1995.312 MHz processor
Dec 13 09:10:43.013088 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 09:10:43.013096 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 09:10:43.013103 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Dec 13 09:10:43.013110 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 09:10:43.013117 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 09:10:43.013126 kernel: ACPI: Early table checksum verification disabled
Dec 13 09:10:43.013133 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Dec 13 09:10:43.013141 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013148 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013155 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013161 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 13 09:10:43.013168 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013175 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013199 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013214 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:10:43.013225 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Dec 13 09:10:43.013237 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Dec 13 09:10:43.013248 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 13 09:10:43.013259 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Dec 13 09:10:43.013271 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Dec 13 09:10:43.013283 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Dec 13 09:10:43.013304 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Dec 13 09:10:43.013317 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 09:10:43.013329 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 09:10:43.013340 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 09:10:43.013350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 09:10:43.013362 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Dec 13 09:10:43.013370 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Dec 13 09:10:43.013381 kernel: Zone ranges:
Dec 13 09:10:43.013389 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 09:10:43.013396 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Dec 13 09:10:43.013404 kernel: Normal empty
Dec 13 09:10:43.013411 kernel: Movable zone start for each node
Dec 13 09:10:43.013418 kernel: Early memory node ranges
Dec 13 09:10:43.013427 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 09:10:43.013439 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Dec 13 09:10:43.013450 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Dec 13 09:10:43.013461 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 09:10:43.013469 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 09:10:43.013476 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Dec 13 09:10:43.013484 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 09:10:43.013491 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 09:10:43.013498 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 09:10:43.013506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 09:10:43.013513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 09:10:43.013525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 09:10:43.013541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 09:10:43.013555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 09:10:43.013568 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 09:10:43.013581 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 09:10:43.013595 kernel: TSC deadline timer available
Dec 13 09:10:43.013607 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 09:10:43.013619 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 09:10:43.013633 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Dec 13 09:10:43.013646 kernel: Booting paravirtualized kernel on KVM
Dec 13 09:10:43.013658 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 09:10:43.013669 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 09:10:43.013676 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 09:10:43.013684 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 09:10:43.013691 kernel: pcpu-alloc: [0] 0 1
Dec 13 09:10:43.013698 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 09:10:43.013707 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:10:43.013715 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 09:10:43.013725 kernel: random: crng init done
Dec 13 09:10:43.013733 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 09:10:43.013740 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 09:10:43.013751 kernel: Fallback order for Node 0: 0
Dec 13 09:10:43.013759 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Dec 13 09:10:43.013766 kernel: Policy zone: DMA32
Dec 13 09:10:43.013776 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 09:10:43.013785 kernel: Memory: 1971188K/2096600K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Dec 13 09:10:43.013792 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 09:10:43.013802 kernel: Kernel/User page tables isolation: enabled
Dec 13 09:10:43.013809 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 09:10:43.013817 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 09:10:43.013824 kernel: Dynamic Preempt: voluntary
Dec 13 09:10:43.013832 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 09:10:43.013840 kernel: rcu: RCU event tracing is enabled.
Dec 13 09:10:43.013848 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 09:10:43.013855 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 09:10:43.013863 kernel: Rude variant of Tasks RCU enabled.
Dec 13 09:10:43.013873 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 09:10:43.013881 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 09:10:43.013889 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 09:10:43.013896 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 09:10:43.013918 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 09:10:43.013926 kernel: Console: colour VGA+ 80x25
Dec 13 09:10:43.013933 kernel: printk: console [tty0] enabled
Dec 13 09:10:43.013941 kernel: printk: console [ttyS0] enabled
Dec 13 09:10:43.013948 kernel: ACPI: Core revision 20230628
Dec 13 09:10:43.013959 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 09:10:43.013966 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 09:10:43.013974 kernel: x2apic enabled
Dec 13 09:10:43.013981 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 09:10:43.013989 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 09:10:43.013997 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Dec 13 09:10:43.014005 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Dec 13 09:10:43.014012 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 09:10:43.014020 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 09:10:43.014038 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 09:10:43.014046 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 09:10:43.014054 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 09:10:43.014064 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 09:10:43.014072 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 09:10:43.014080 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 09:10:43.014089 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 09:10:43.014097 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 09:10:43.014105 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 09:10:43.014116 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 09:10:43.014124 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 09:10:43.014132 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 09:10:43.014140 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 09:10:43.014149 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 09:10:43.014157 kernel: Freeing SMP alternatives memory: 32K
Dec 13 09:10:43.014165 kernel: pid_max: default: 32768 minimum: 301
Dec 13 09:10:43.014176 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 09:10:43.014184 kernel: landlock: Up and running.
Dec 13 09:10:43.014192 kernel: SELinux: Initializing.
Dec 13 09:10:43.014200 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.014208 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.014217 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Dec 13 09:10:43.014225 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:10:43.014233 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:10:43.014241 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:10:43.014252 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Dec 13 09:10:43.014260 kernel: signal: max sigframe size: 1776
Dec 13 09:10:43.014268 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 09:10:43.014277 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 09:10:43.014285 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 09:10:43.014293 kernel: smp: Bringing up secondary CPUs ...
Dec 13 09:10:43.014301 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 09:10:43.014309 kernel: .... node #0, CPUs: #1
Dec 13 09:10:43.014318 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 09:10:43.014328 kernel: smpboot: Max logical packages: 1
Dec 13 09:10:43.014336 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Dec 13 09:10:43.014345 kernel: devtmpfs: initialized
Dec 13 09:10:43.014353 kernel: x86/mm: Memory block size: 128MB
Dec 13 09:10:43.014361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 09:10:43.014369 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.014381 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 09:10:43.014389 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 09:10:43.014397 kernel: audit: initializing netlink subsys (disabled)
Dec 13 09:10:43.014408 kernel: audit: type=2000 audit(1734081042.132:1): state=initialized audit_enabled=0 res=1
Dec 13 09:10:43.014416 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 09:10:43.014424 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 09:10:43.014432 kernel: cpuidle: using governor menu
Dec 13 09:10:43.014440 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 09:10:43.014448 kernel: dca service started, version 1.12.1
Dec 13 09:10:43.014456 kernel: PCI: Using configuration type 1 for base access
Dec 13 09:10:43.014464 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 09:10:43.014472 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 09:10:43.014483 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 09:10:43.014491 kernel: ACPI: Added _OSI(Module Device)
Dec 13 09:10:43.014499 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 09:10:43.014507 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 09:10:43.014515 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 09:10:43.014523 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 09:10:43.014531 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 09:10:43.014539 kernel: ACPI: Interpreter enabled
Dec 13 09:10:43.014547 kernel: ACPI: PM: (supports S0 S5)
Dec 13 09:10:43.014555 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 09:10:43.014565 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 09:10:43.014573 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 09:10:43.014581 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 09:10:43.014589 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 09:10:43.014831 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 09:10:43.014963 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 09:10:43.015062 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 09:10:43.015077 kernel: acpiphp: Slot [3] registered
Dec 13 09:10:43.015085 kernel: acpiphp: Slot [4] registered
Dec 13 09:10:43.015094 kernel: acpiphp: Slot [5] registered
Dec 13 09:10:43.015103 kernel: acpiphp: Slot [6] registered
Dec 13 09:10:43.015111 kernel: acpiphp: Slot [7] registered
Dec 13 09:10:43.015119 kernel: acpiphp: Slot [8] registered
Dec 13 09:10:43.015127 kernel: acpiphp: Slot [9] registered
Dec 13 09:10:43.015135 kernel: acpiphp: Slot [10] registered
Dec 13 09:10:43.015143 kernel: acpiphp: Slot [11] registered
Dec 13 09:10:43.015153 kernel: acpiphp: Slot [12] registered
Dec 13 09:10:43.015162 kernel: acpiphp: Slot [13] registered
Dec 13 09:10:43.015170 kernel: acpiphp: Slot [14] registered
Dec 13 09:10:43.015178 kernel: acpiphp: Slot [15] registered
Dec 13 09:10:43.015185 kernel: acpiphp: Slot [16] registered
Dec 13 09:10:43.015193 kernel: acpiphp: Slot [17] registered
Dec 13 09:10:43.015201 kernel: acpiphp: Slot [18] registered
Dec 13 09:10:43.015209 kernel: acpiphp: Slot [19] registered
Dec 13 09:10:43.015217 kernel: acpiphp: Slot [20] registered
Dec 13 09:10:43.015228 kernel: acpiphp: Slot [21] registered
Dec 13 09:10:43.015236 kernel: acpiphp: Slot [22] registered
Dec 13 09:10:43.015244 kernel: acpiphp: Slot [23] registered
Dec 13 09:10:43.015251 kernel: acpiphp: Slot [24] registered
Dec 13 09:10:43.015260 kernel: acpiphp: Slot [25] registered
Dec 13 09:10:43.015268 kernel: acpiphp: Slot [26] registered
Dec 13 09:10:43.015276 kernel: acpiphp: Slot [27] registered
Dec 13 09:10:43.015284 kernel: acpiphp: Slot [28] registered
Dec 13 09:10:43.015291 kernel: acpiphp: Slot [29] registered
Dec 13 09:10:43.015299 kernel: acpiphp: Slot [30] registered
Dec 13 09:10:43.015310 kernel: acpiphp: Slot [31] registered
Dec 13 09:10:43.015318 kernel: PCI host bridge to bus 0000:00
Dec 13 09:10:43.015429 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 09:10:43.015518 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 09:10:43.015605 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 09:10:43.015689 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 09:10:43.015773 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Dec 13 09:10:43.015863 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 09:10:43.016036 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 09:10:43.016185 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 09:10:43.016297 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 13 09:10:43.016417 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Dec 13 09:10:43.016533 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 13 09:10:43.016681 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 13 09:10:43.016781 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 13 09:10:43.016875 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 13 09:10:43.017014 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 13 09:10:43.017143 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Dec 13 09:10:43.017319 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 09:10:43.017468 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 09:10:43.017584 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 09:10:43.017694 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 13 09:10:43.017799 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 13 09:10:43.017946 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 09:10:43.018072 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Dec 13 09:10:43.018179 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 09:10:43.018305 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 09:10:43.018444 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 09:10:43.018612 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Dec 13 09:10:43.018757 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Dec 13 09:10:43.018874 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 09:10:43.019014 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 09:10:43.019114 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Dec 13 09:10:43.019221 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Dec 13 09:10:43.019345 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 09:10:43.019469 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Dec 13 09:10:43.019574 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Dec 13 09:10:43.019673 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Dec 13 09:10:43.019766 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 09:10:43.019870 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Dec 13 09:10:43.020033 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 09:10:43.020126 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Dec 13 09:10:43.020219 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 09:10:43.020334 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Dec 13 09:10:43.020430 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Dec 13 09:10:43.020539 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Dec 13 09:10:43.020640 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Dec 13 09:10:43.020750 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Dec 13 09:10:43.020846 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Dec 13 09:10:43.021096 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Dec 13 09:10:43.021113 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 09:10:43.021126 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 09:10:43.021137 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 09:10:43.021149 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 09:10:43.021166 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 09:10:43.021197 kernel: iommu: Default domain type: Translated
Dec 13 09:10:43.021210 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 09:10:43.021224 kernel: PCI: Using ACPI for IRQ routing
Dec 13 09:10:43.021238 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 09:10:43.021254 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 09:10:43.021269 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Dec 13 09:10:43.021430 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 09:10:43.021567 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 09:10:43.021715 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 09:10:43.021735 kernel: vgaarb: loaded
Dec 13 09:10:43.021751 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 09:10:43.021765 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 09:10:43.021781 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 09:10:43.021796 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 09:10:43.021815 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 09:10:43.021829 kernel: pnp: PnP ACPI init
Dec 13 09:10:43.021843 kernel: pnp: PnP ACPI: found 4 devices
Dec 13 09:10:43.021864 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 09:10:43.021879 kernel: NET: Registered PF_INET protocol family
Dec 13 09:10:43.021895 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 09:10:43.021928 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 09:10:43.021943 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 09:10:43.021958 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 09:10:43.021974 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 09:10:43.021988 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 09:10:43.022007 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.022022 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 09:10:43.022038 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 09:10:43.022053 kernel: NET: Registered PF_XDP protocol family
Dec 13 09:10:43.022200 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 09:10:43.022331 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 09:10:43.022442 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 09:10:43.022529 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 09:10:43.022613 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Dec 13 09:10:43.022720 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 09:10:43.022818 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 09:10:43.022830 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 09:10:43.022975 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 44596 usecs
Dec 13 09:10:43.022988 kernel: PCI: CLS 0 bytes, default 64
Dec 13 09:10:43.022997 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 09:10:43.023005 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Dec 13 09:10:43.023014 kernel: Initialise system trusted keyrings
Dec 13 09:10:43.023028 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 09:10:43.023036 kernel: Key type asymmetric registered
Dec 13 09:10:43.023046 kernel: Asymmetric key parser 'x509' registered
Dec 13 09:10:43.023054 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 09:10:43.023062 kernel: io scheduler mq-deadline registered
Dec 13 09:10:43.023071 kernel: io scheduler kyber registered
Dec 13 09:10:43.023079 kernel: io scheduler bfq registered
Dec 13 09:10:43.023087 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 09:10:43.023096 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 09:10:43.023108 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 09:10:43.023121 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 09:10:43.023135 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 09:10:43.023149 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 09:10:43.023161 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 09:10:43.023169 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 09:10:43.023177 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 09:10:43.023319 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 13 09:10:43.023340 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 09:10:43.023459 kernel: rtc_cmos 00:03: registered as rtc0
Dec 13 09:10:43.023591 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T09:10:42 UTC (1734081042)
Dec 13 09:10:43.023708 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Dec 13 09:10:43.023723 kernel: intel_pstate: CPU model not supported
Dec 13 09:10:43.023735 kernel: NET: Registered PF_INET6 protocol family
Dec 13 09:10:43.023746 kernel: Segment Routing with IPv6
Dec 13 09:10:43.023758 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 09:10:43.023770 kernel: NET: Registered PF_PACKET protocol family
Dec 13 09:10:43.023791 kernel: Key type dns_resolver registered
Dec 13 09:10:43.023802 kernel: IPI shorthand broadcast: enabled
Dec 13 09:10:43.023815 kernel: sched_clock: Marking stable (1233005031, 184767063)->(1466311468, -48539374)
Dec 13 09:10:43.023828 kernel: registered taskstats version 1
Dec 13 09:10:43.023841 kernel: Loading compiled-in X.509 certificates
Dec 13 09:10:43.023850 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 09:10:43.023863 kernel: Key type .fscrypt registered
Dec 13 09:10:43.023874 kernel: Key type fscrypt-provisioning registered
Dec 13 09:10:43.023883 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 09:10:43.023894 kernel: ima: Allocated hash algorithm: sha1
Dec 13 09:10:43.023969 kernel: ima: No architecture policies found
Dec 13 09:10:43.023977 kernel: clk: Disabling unused clocks
Dec 13 09:10:43.023985 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 09:10:43.023995 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 09:10:43.024035 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 09:10:43.024052 kernel: Run /init as init process
Dec 13 09:10:43.024066 kernel: with arguments:
Dec 13 09:10:43.024086 kernel: /init
Dec 13 09:10:43.024101 kernel: with environment:
Dec 13 09:10:43.024115 kernel: HOME=/
Dec 13 09:10:43.024130 kernel: TERM=linux
Dec 13 09:10:43.024146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 09:10:43.024167 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 09:10:43.024187 systemd[1]: Detected virtualization kvm.
Dec 13 09:10:43.024202 systemd[1]: Detected architecture x86-64.
Dec 13 09:10:43.024214 systemd[1]: Running in initrd.
Dec 13 09:10:43.024223 systemd[1]: No hostname configured, using default hostname.
Dec 13 09:10:43.024232 systemd[1]: Hostname set to .
Dec 13 09:10:43.024241 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 09:10:43.024253 systemd[1]: Queued start job for default target initrd.target.
Dec 13 09:10:43.024263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:10:43.024272 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:10:43.024282 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 09:10:43.024294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 09:10:43.024303 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 09:10:43.024312 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 09:10:43.024323 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 09:10:43.024332 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 09:10:43.024342 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:10:43.024351 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:10:43.024363 systemd[1]: Reached target paths.target - Path Units.
Dec 13 09:10:43.024377 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 09:10:43.024394 systemd[1]: Reached target swap.target - Swaps.
Dec 13 09:10:43.024413 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 09:10:43.024425 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 09:10:43.024437 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 09:10:43.024446 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 09:10:43.024455 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 09:10:43.024467 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:10:43.024476 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 09:10:43.024485 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 09:10:43.024494 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 09:10:43.024503 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 09:10:43.024512 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 09:10:43.024524 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 09:10:43.024533 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 09:10:43.024542 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 09:10:43.024552 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 09:10:43.024560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:43.024570 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 09:10:43.024616 systemd-journald[183]: Collecting audit messages is disabled.
Dec 13 09:10:43.024642 systemd-journald[183]: Journal started
Dec 13 09:10:43.024664 systemd-journald[183]: Runtime Journal (/run/log/journal/a9b1cca7bff645f5a8062844c07c46ec) is 4.9M, max 39.3M, 34.4M free.
Dec 13 09:10:43.029236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 09:10:43.029328 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 09:10:43.032712 systemd-modules-load[184]: Inserted module 'overlay'
Dec 13 09:10:43.036029 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 09:10:43.054136 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 09:10:43.112166 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 09:10:43.112198 kernel: Bridge firewalling registered
Dec 13 09:10:43.072134 systemd-modules-load[184]: Inserted module 'br_netfilter'
Dec 13 09:10:43.115188 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 09:10:43.116197 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 09:10:43.120071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:43.123386 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 09:10:43.135262 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 09:10:43.138215 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 09:10:43.149130 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 09:10:43.151352 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 09:10:43.164181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:10:43.166189 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:10:43.172272 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 09:10:43.181152 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 09:10:43.182327 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 09:10:43.206375 dracut-cmdline[216]: dracut-dracut-053
Dec 13 09:10:43.215304 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 09:10:43.224633 systemd-resolved[218]: Positive Trust Anchors:
Dec 13 09:10:43.224653 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 09:10:43.224688 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 09:10:43.228994 systemd-resolved[218]: Defaulting to hostname 'linux'.
Dec 13 09:10:43.230849 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 09:10:43.234881 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 09:10:43.338002 kernel: SCSI subsystem initialized
Dec 13 09:10:43.349993 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 09:10:43.365053 kernel: iscsi: registered transport (tcp)
Dec 13 09:10:43.394118 kernel: iscsi: registered transport (qla4xxx)
Dec 13 09:10:43.394205 kernel: QLogic iSCSI HBA Driver
Dec 13 09:10:43.459754 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 09:10:43.468287 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 09:10:43.508166 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 09:10:43.508266 kernel: device-mapper: uevent: version 1.0.3
Dec 13 09:10:43.509430 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 09:10:43.564533 kernel: raid6: avx2x4 gen() 25422 MB/s
Dec 13 09:10:43.579097 kernel: raid6: avx2x2 gen() 23992 MB/s
Dec 13 09:10:43.596302 kernel: raid6: avx2x1 gen() 21837 MB/s
Dec 13 09:10:43.596399 kernel: raid6: using algorithm avx2x4 gen() 25422 MB/s
Dec 13 09:10:43.615955 kernel: raid6: .... xor() 7562 MB/s, rmw enabled
Dec 13 09:10:43.616043 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 09:10:43.641972 kernel: xor: automatically using best checksumming function avx
Dec 13 09:10:43.831969 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 09:10:43.849602 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 09:10:43.864319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 09:10:43.881764 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Dec 13 09:10:43.887023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 09:10:43.895145 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 09:10:43.923384 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation
Dec 13 09:10:43.970564 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 09:10:43.978246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 09:10:44.044748 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 09:10:44.052300 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 09:10:44.083493 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 09:10:44.086614 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 09:10:44.087613 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 09:10:44.090260 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 09:10:44.098263 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 09:10:44.127827 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 09:10:44.151935 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Dec 13 09:10:44.204691 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 09:10:44.204869 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 09:10:44.204882 kernel: GPT:9289727 != 125829119
Dec 13 09:10:44.204943 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 09:10:44.204961 kernel: GPT:9289727 != 125829119
Dec 13 09:10:44.204975 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 09:10:44.204992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 09:10:44.205009 kernel: scsi host0: Virtio SCSI HBA
Dec 13 09:10:44.205162 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 09:10:44.205368 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Dec 13 09:10:44.247924 kernel: ACPI: bus type USB registered
Dec 13 09:10:44.247971 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Dec 13 09:10:44.248168 kernel: usbcore: registered new interface driver usbfs
Dec 13 09:10:44.248198 kernel: usbcore: registered new interface driver hub
Dec 13 09:10:44.248220 kernel: usbcore: registered new device driver usb
Dec 13 09:10:44.214581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 09:10:44.214745 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:10:44.216208 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 09:10:44.217012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:10:44.217601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:44.218578 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:44.228119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:10:44.324060 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 09:10:44.324141 kernel: AES CTR mode by8 optimization enabled
Dec 13 09:10:44.335946 kernel: libata version 3.00 loaded.
Dec 13 09:10:44.381960 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (456)
Dec 13 09:10:44.382041 kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 09:10:44.429919 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458)
Dec 13 09:10:44.429949 kernel: scsi host1: ata_piix
Dec 13 09:10:44.430157 kernel: scsi host2: ata_piix
Dec 13 09:10:44.430336 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Dec 13 09:10:44.430372 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Dec 13 09:10:44.383895 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:10:44.394145 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 09:10:44.401566 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 09:10:44.408349 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 09:10:44.420760 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 09:10:44.422170 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 09:10:44.439953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 09:10:44.454983 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 13 09:10:44.469936 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 13 09:10:44.470136 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 13 09:10:44.470260 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Dec 13 09:10:44.470422 kernel: hub 1-0:1.0: USB hub found
Dec 13 09:10:44.470616 kernel: hub 1-0:1.0: 2 ports detected
Dec 13 09:10:44.456137 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 09:10:44.465787 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:10:44.475987 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 09:10:44.476553 disk-uuid[549]: Primary Header is updated.
Dec 13 09:10:44.476553 disk-uuid[549]: Secondary Entries is updated.
Dec 13 09:10:44.476553 disk-uuid[549]: Secondary Header is updated.
Dec 13 09:10:45.495047 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 09:10:45.496120 disk-uuid[551]: The operation has completed successfully.
Dec 13 09:10:45.551216 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 09:10:45.551339 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 09:10:45.560265 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 09:10:45.564306 sh[562]: Success
Dec 13 09:10:45.583001 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 09:10:45.671177 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 09:10:45.674867 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 09:10:45.676480 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 09:10:45.703077 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 09:10:45.703164 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 09:10:45.703179 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 09:10:45.704775 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 09:10:45.707011 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 09:10:45.715897 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 09:10:45.717708 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 09:10:45.724209 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 09:10:45.727951 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 09:10:45.742708 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:45.742794 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 09:10:45.742812 kernel: BTRFS info (device vda6): using free space tree
Dec 13 09:10:45.751979 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 09:10:45.767049 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 09:10:45.770227 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:45.778676 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 09:10:45.788614 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 09:10:45.933872 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 09:10:45.942411 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 09:10:45.956384 ignition[652]: Ignition 2.19.0
Dec 13 09:10:45.956399 ignition[652]: Stage: fetch-offline
Dec 13 09:10:45.956464 ignition[652]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:45.956475 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:45.960126 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 09:10:45.956595 ignition[652]: parsed url from cmdline: ""
Dec 13 09:10:45.956599 ignition[652]: no config URL provided
Dec 13 09:10:45.956611 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 09:10:45.956621 ignition[652]: no config at "/usr/lib/ignition/user.ign"
Dec 13 09:10:45.956627 ignition[652]: failed to fetch config: resource requires networking
Dec 13 09:10:45.956987 ignition[652]: Ignition finished successfully
Dec 13 09:10:45.978783 systemd-networkd[752]: lo: Link UP
Dec 13 09:10:45.978799 systemd-networkd[752]: lo: Gained carrier
Dec 13 09:10:45.981556 systemd-networkd[752]: Enumeration completed
Dec 13 09:10:45.981721 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 09:10:45.982154 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 09:10:45.982158 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Dec 13 09:10:45.983787 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 09:10:45.983793 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 09:10:45.984590 systemd-networkd[752]: eth0: Link UP
Dec 13 09:10:45.984594 systemd-networkd[752]: eth0: Gained carrier
Dec 13 09:10:45.984602 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Dec 13 09:10:45.985026 systemd[1]: Reached target network.target - Network.
Dec 13 09:10:45.986352 systemd-networkd[752]: eth1: Link UP
Dec 13 09:10:45.986356 systemd-networkd[752]: eth1: Gained carrier
Dec 13 09:10:45.986366 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 09:10:45.994578 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 09:10:45.999049 systemd-networkd[752]: eth0: DHCPv4 address 146.190.159.183/20, gateway 146.190.144.1 acquired from 169.254.169.253
Dec 13 09:10:46.011614 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.9/20, gateway 10.124.0.1 acquired from 169.254.169.253
Dec 13 09:10:46.042740 ignition[755]: Ignition 2.19.0
Dec 13 09:10:46.042755 ignition[755]: Stage: fetch
Dec 13 09:10:46.043039 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:46.043051 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:46.043209 ignition[755]: parsed url from cmdline: ""
Dec 13 09:10:46.043213 ignition[755]: no config URL provided
Dec 13 09:10:46.043218 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 09:10:46.043228 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Dec 13 09:10:46.043247 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Dec 13 09:10:46.060107 ignition[755]: GET result: OK
Dec 13 09:10:46.060291 ignition[755]: parsing config with SHA512: b32b12bd684bbe90e45879bb231ec58fa3b599b735a918fa69a83a904a4de028debbdb821e4543dd47aad4a2c88cc79ae60c19b4e7778a3091ef81d723d73715
Dec 13 09:10:46.069797 unknown[755]: fetched base config from "system"
Dec 13 09:10:46.069815 unknown[755]: fetched base config from "system"
Dec 13 09:10:46.069825 unknown[755]: fetched user config from "digitalocean"
Dec 13 09:10:46.071539 ignition[755]: fetch: fetch complete
Dec 13 09:10:46.074048 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 09:10:46.071546 ignition[755]: fetch: fetch passed
Dec 13 09:10:46.071627 ignition[755]: Ignition finished successfully
Dec 13 09:10:46.083233 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 09:10:46.123094 ignition[762]: Ignition 2.19.0
Dec 13 09:10:46.123117 ignition[762]: Stage: kargs
Dec 13 09:10:46.123568 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:46.123589 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:46.128102 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 09:10:46.125744 ignition[762]: kargs: kargs passed
Dec 13 09:10:46.125875 ignition[762]: Ignition finished successfully
Dec 13 09:10:46.140347 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 09:10:46.172821 ignition[769]: Ignition 2.19.0
Dec 13 09:10:46.172839 ignition[769]: Stage: disks
Dec 13 09:10:46.174034 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:46.174053 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:46.175807 ignition[769]: disks: disks passed
Dec 13 09:10:46.175887 ignition[769]: Ignition finished successfully
Dec 13 09:10:46.180485 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 09:10:46.187157 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 09:10:46.188129 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 09:10:46.188869 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 09:10:46.191004 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 09:10:46.192697 systemd[1]: Reached target basic.target - Basic System.
Dec 13 09:10:46.201427 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 09:10:46.239076 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 09:10:46.244167 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 09:10:46.258359 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 09:10:46.391930 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 09:10:46.392618 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 09:10:46.394383 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 09:10:46.405138 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 09:10:46.408692 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 09:10:46.412229 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Dec 13 09:10:46.421948 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785)
Dec 13 09:10:46.426941 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:46.430119 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 09:10:46.436832 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 09:10:46.436878 kernel: BTRFS info (device vda6): using free space tree
Dec 13 09:10:46.430859 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 09:10:46.430928 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 09:10:46.441596 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 09:10:46.451128 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 09:10:46.453372 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 09:10:46.457716 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 09:10:46.547963 coreos-metadata[787]: Dec 13 09:10:46.547 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 09:10:46.554694 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 09:10:46.556348 coreos-metadata[788]: Dec 13 09:10:46.555 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Dec 13 09:10:46.564076 coreos-metadata[787]: Dec 13 09:10:46.562 INFO Fetch successful
Dec 13 09:10:46.565616 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Dec 13 09:10:46.568464 coreos-metadata[788]: Dec 13 09:10:46.566 INFO Fetch successful
Dec 13 09:10:46.573897 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Dec 13 09:10:46.575074 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Dec 13 09:10:46.578704 coreos-metadata[788]: Dec 13 09:10:46.578 INFO wrote hostname ci-4081.2.1-b-8823ebc6cf to /sysroot/etc/hostname
Dec 13 09:10:46.582093 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 09:10:46.581394 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 09:10:46.590379 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 09:10:46.733403 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 09:10:46.741459 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 09:10:46.757338 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 09:10:46.768292 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 09:10:46.770377 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:46.793873 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 09:10:46.819962 ignition[906]: INFO : Ignition 2.19.0
Dec 13 09:10:46.819962 ignition[906]: INFO : Stage: mount
Dec 13 09:10:46.819962 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 09:10:46.819962 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Dec 13 09:10:46.825855 ignition[906]: INFO : mount: mount passed
Dec 13 09:10:46.825855 ignition[906]: INFO : Ignition finished successfully
Dec 13 09:10:46.825611 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 09:10:46.836129 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 09:10:46.859335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 09:10:46.884039 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917)
Dec 13 09:10:46.887253 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 09:10:46.887361 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 09:10:46.889742 kernel: BTRFS info (device vda6): using free space tree
Dec 13 09:10:46.898975 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 09:10:46.903218 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 09:10:46.947952 ignition[934]: INFO : Ignition 2.19.0 Dec 13 09:10:46.947952 ignition[934]: INFO : Stage: files Dec 13 09:10:46.947952 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:46.947952 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:46.951845 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Dec 13 09:10:46.952872 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 09:10:46.952872 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 09:10:46.959748 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 09:10:46.961007 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 09:10:46.961007 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 09:10:46.960436 unknown[934]: wrote ssh authorized keys file for user: core Dec 13 09:10:46.964652 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 09:10:46.964652 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 09:10:47.012584 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 09:10:47.098558 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 09:10:47.098558 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 09:10:47.098558 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 09:10:47.603440 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 09:10:47.619407 systemd-networkd[752]: eth0: Gained IPv6LL Dec 13 09:10:47.715670 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 09:10:47.715670 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:10:47.719109 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 09:10:47.811363 systemd-networkd[752]: eth1: Gained IPv6LL Dec 13 09:10:48.161038 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 09:10:48.675741 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 09:10:48.675741 ignition[934]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 09:10:48.680349 ignition[934]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:10:48.680349 ignition[934]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 09:10:48.680349 ignition[934]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 09:10:48.680349 ignition[934]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 09:10:48.691835 ignition[934]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 09:10:48.691835 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:10:48.691835 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 09:10:48.691835 ignition[934]: INFO : files: files passed Dec 13 09:10:48.691835 ignition[934]: INFO : Ignition finished successfully Dec 13 09:10:48.683587 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 09:10:48.712490 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 09:10:48.716224 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 09:10:48.750707 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 09:10:48.751117 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 09:10:48.776728 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:10:48.776728 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:10:48.782722 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 09:10:48.793136 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:10:48.798746 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 09:10:48.821763 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 09:10:48.884304 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 09:10:48.884597 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 09:10:48.887824 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 09:10:48.888784 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 09:10:48.889834 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 09:10:48.918499 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 09:10:48.950769 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:10:48.961570 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 09:10:49.014692 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:10:49.015922 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:10:49.020204 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 09:10:49.029578 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 09:10:49.033570 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 09:10:49.038810 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 09:10:49.039815 systemd[1]: Stopped target basic.target - Basic System. Dec 13 09:10:49.040836 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 09:10:49.042003 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 09:10:49.043148 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 09:10:49.044341 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 09:10:49.045442 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:10:49.053581 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 09:10:49.055612 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 09:10:49.057575 systemd[1]: Stopped target swap.target - Swaps. Dec 13 09:10:49.061585 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 09:10:49.062026 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:10:49.064616 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:10:49.065980 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:10:49.069592 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 09:10:49.070480 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 13 09:10:49.081796 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 09:10:49.082744 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 09:10:49.085937 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 09:10:49.086346 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 09:10:49.092988 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 09:10:49.093652 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 09:10:49.094738 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 09:10:49.094942 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 09:10:49.113341 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 09:10:49.116745 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 09:10:49.117942 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:10:49.139234 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 09:10:49.141853 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 09:10:49.143968 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:10:49.147147 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 09:10:49.147342 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 09:10:49.166327 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 09:10:49.167664 ignition[987]: INFO : Ignition 2.19.0 Dec 13 09:10:49.167664 ignition[987]: INFO : Stage: umount Dec 13 09:10:49.167664 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 09:10:49.172563 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 13 09:10:49.172563 ignition[987]: INFO : umount: umount passed Dec 13 09:10:49.168025 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 09:10:49.187110 ignition[987]: INFO : Ignition finished successfully Dec 13 09:10:49.178620 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 09:10:49.181012 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 09:10:49.184879 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 09:10:49.185397 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 09:10:49.186280 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 09:10:49.186367 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 09:10:49.187167 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 09:10:49.187245 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 09:10:49.188545 systemd[1]: Stopped target network.target - Network. Dec 13 09:10:49.190844 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 09:10:49.190969 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:10:49.192799 systemd[1]: Stopped target paths.target - Path Units. Dec 13 09:10:49.194393 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 09:10:49.199223 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:10:49.202228 systemd[1]: Stopped target slices.target - Slice Units. 
Dec 13 09:10:49.203330 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 09:10:49.207597 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 09:10:49.210061 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 09:10:49.211922 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 09:10:49.212128 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 09:10:49.227888 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 09:10:49.228064 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 09:10:49.229394 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 09:10:49.229504 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 09:10:49.230716 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 09:10:49.232363 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 09:10:49.235833 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 09:10:49.241010 systemd-networkd[752]: eth1: DHCPv6 lease lost Dec 13 09:10:49.279704 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 09:10:49.280570 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 09:10:49.282405 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 09:10:49.282593 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 09:10:49.285170 systemd-networkd[752]: eth0: DHCPv6 lease lost Dec 13 09:10:49.290060 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 09:10:49.292037 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 09:10:49.294754 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 09:10:49.295037 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 09:10:49.301229 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 09:10:49.301335 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:10:49.335360 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 09:10:49.336232 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 09:10:49.336345 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:10:49.337268 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 09:10:49.337344 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:10:49.338392 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 09:10:49.338470 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 09:10:49.339300 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 09:10:49.339425 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:10:49.340494 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:10:49.372857 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 09:10:49.373657 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 09:10:49.380261 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 09:10:49.380722 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:10:49.383436 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 13 09:10:49.383571 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 09:10:49.385514 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 09:10:49.385593 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:10:49.386544 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 09:10:49.386638 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:10:49.394241 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 09:10:49.394364 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 09:10:49.396389 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 09:10:49.396607 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:10:49.405507 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 09:10:49.414722 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 09:10:49.414859 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:10:49.415841 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:49.415961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:49.433223 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 09:10:49.434512 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 09:10:49.437799 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 09:10:49.450540 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 09:10:49.472499 systemd[1]: Switching root. Dec 13 09:10:49.514886 systemd-journald[183]: Journal stopped Dec 13 09:10:51.928341 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 09:10:51.928513 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 09:10:51.928547 kernel: SELinux: policy capability open_perms=1 Dec 13 09:10:51.928572 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 09:10:51.928588 kernel: SELinux: policy capability always_check_network=0 Dec 13 09:10:51.928602 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 09:10:51.928619 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 09:10:51.928636 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 09:10:51.928655 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 09:10:51.928683 kernel: audit: type=1403 audit(1734081049.970:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 09:10:51.928709 systemd[1]: Successfully loaded SELinux policy in 70.845ms. Dec 13 09:10:51.928739 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.057ms. Dec 13 09:10:51.928761 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 09:10:51.928781 systemd[1]: Detected virtualization kvm. Dec 13 09:10:51.928801 systemd[1]: Detected architecture x86-64. Dec 13 09:10:51.928819 systemd[1]: Detected first boot. Dec 13 09:10:51.928838 systemd[1]: Hostname set to <ci-4081.2.1-b-8823ebc6cf>. Dec 13 09:10:51.928859 systemd[1]: Initializing machine ID from VM UUID. 
Dec 13 09:10:51.931594 zram_generator::config[1031]: No configuration found. Dec 13 09:10:51.931640 systemd[1]: Populated /etc with preset unit settings. Dec 13 09:10:51.931666 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 09:10:51.931686 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 09:10:51.931735 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 09:10:51.931758 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 09:10:51.931786 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 09:10:51.931814 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 09:10:51.931834 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 09:10:51.931851 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 09:10:51.931877 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 09:10:51.931899 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 09:10:51.931995 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 09:10:51.932015 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:10:51.932037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:10:51.932090 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 09:10:51.932112 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 09:10:51.932132 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 09:10:51.932173 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 09:10:51.932195 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 09:10:51.932217 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:10:51.932238 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 09:10:51.932261 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 09:10:51.932281 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 09:10:51.932306 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 09:10:51.932327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:10:51.932347 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:10:51.932367 systemd[1]: Reached target slices.target - Slice Units. Dec 13 09:10:51.932430 systemd[1]: Reached target swap.target - Swaps. Dec 13 09:10:51.932452 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 09:10:51.932474 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 09:10:51.932497 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:10:51.932519 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:10:51.932546 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:10:51.932572 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Dec 13 09:10:51.932590 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 09:10:51.932610 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 09:10:51.932630 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 09:10:51.932652 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:51.932672 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 09:10:51.932694 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 09:10:51.932722 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 09:10:51.932749 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 09:10:51.932768 systemd[1]: Reached target machines.target - Containers. Dec 13 09:10:51.932789 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 09:10:51.932809 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:51.932830 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:10:51.932880 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 09:10:51.932923 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:10:51.932944 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:10:51.932965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:10:51.936071 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 09:10:51.936107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:10:51.936130 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 09:10:51.936148 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 09:10:51.936171 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 09:10:51.936193 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 09:10:51.936215 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 09:10:51.936237 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:10:51.936264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:10:51.936284 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 09:10:51.936306 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 09:10:51.936327 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:10:51.936350 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 09:10:51.936373 kernel: fuse: init (API version 7.39) Dec 13 09:10:51.936396 systemd[1]: Stopped verity-setup.service. Dec 13 09:10:51.936416 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:51.936436 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Dec 13 09:10:51.936461 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 09:10:51.936480 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 09:10:51.936501 kernel: loop: module loaded Dec 13 09:10:51.936522 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 09:10:51.936543 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 09:10:51.936629 systemd-journald[1104]: Collecting audit messages is disabled. Dec 13 09:10:51.936675 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 09:10:51.936698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:10:51.936719 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 09:10:51.936738 systemd-journald[1104]: Journal started Dec 13 09:10:51.936799 systemd-journald[1104]: Runtime Journal (/run/log/journal/a9b1cca7bff645f5a8062844c07c46ec) is 4.9M, max 39.3M, 34.4M free. Dec 13 09:10:51.939538 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 09:10:51.317037 systemd[1]: Queued start job for default target multi-user.target. Dec 13 09:10:51.364937 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 09:10:51.365744 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 09:10:51.948011 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 09:10:51.957502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:10:51.958153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:10:51.961979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:10:51.962344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:10:51.963871 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 09:10:51.965206 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 09:10:51.972685 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:10:51.973111 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:10:51.986443 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:10:51.996330 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 09:10:52.010209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 09:10:52.012470 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 09:10:52.056000 kernel: ACPI: bus type drm_connector registered Dec 13 09:10:52.056166 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 09:10:52.074122 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 09:10:52.075397 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 09:10:52.075467 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 09:10:52.081335 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 09:10:52.105530 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 09:10:52.120961 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 13 09:10:52.123274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:52.132251 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 09:10:52.140319 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 09:10:52.143087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:10:52.154269 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 09:10:52.155178 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:10:52.159621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:10:52.163465 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 09:10:52.172838 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 09:10:52.174470 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 09:10:52.174666 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:10:52.176086 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 09:10:52.177603 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 09:10:52.179254 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 09:10:52.214488 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 09:10:52.218018 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 09:10:52.230320 systemd-journald[1104]: Time spent on flushing to /var/log/journal/a9b1cca7bff645f5a8062844c07c46ec is 182.490ms for 990 entries. Dec 13 09:10:52.230320 systemd-journald[1104]: System Journal (/var/log/journal/a9b1cca7bff645f5a8062844c07c46ec) is 8.0M, max 195.6M, 187.6M free. Dec 13 09:10:52.482530 systemd-journald[1104]: Received client request to flush runtime journal. Dec 13 09:10:52.482669 kernel: loop0: detected capacity change from 0 to 140768 Dec 13 09:10:52.482714 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 09:10:52.482762 kernel: loop1: detected capacity change from 0 to 142488 Dec 13 09:10:52.223402 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 09:10:52.237621 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 09:10:52.257176 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:10:52.314128 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:10:52.323577 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 09:10:52.351128 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 09:10:52.354973 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 09:10:52.375714 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 09:10:52.452109 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Dec 13 09:10:52.471440 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 09:10:52.491872 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 09:10:52.552943 kernel: loop2: detected capacity change from 0 to 210664 Dec 13 09:10:52.588776 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Dec 13 09:10:52.588806 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Dec 13 09:10:52.606959 kernel: loop3: detected capacity change from 0 to 8 Dec 13 09:10:52.606061 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:10:52.641956 kernel: loop4: detected capacity change from 0 to 140768 Dec 13 09:10:52.756152 kernel: loop5: detected capacity change from 0 to 142488 Dec 13 09:10:52.795103 kernel: loop6: detected capacity change from 0 to 210664 Dec 13 09:10:52.869288 kernel: loop7: detected capacity change from 0 to 8 Dec 13 09:10:52.870218 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Dec 13 09:10:52.871194 (sd-merge)[1176]: Merged extensions into '/usr'. Dec 13 09:10:52.882112 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 09:10:52.882728 systemd[1]: Reloading... Dec 13 09:10:53.208871 zram_generator::config[1206]: No configuration found. Dec 13 09:10:53.626371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:10:53.758952 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 09:10:53.784237 systemd[1]: Reloading finished in 900 ms. Dec 13 09:10:53.847961 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 09:10:53.851573 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 09:10:53.871294 systemd[1]: Starting ensure-sysext.service... Dec 13 09:10:53.881507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:10:53.915766 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Dec 13 09:10:53.915795 systemd[1]: Reloading... Dec 13 09:10:53.997427 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 09:10:54.000727 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 09:10:54.007646 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 09:10:54.012669 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Dec 13 09:10:54.016205 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Dec 13 09:10:54.037527 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 09:10:54.039104 systemd-tmpfiles[1247]: Skipping /boot Dec 13 09:10:54.068998 zram_generator::config[1274]: No configuration found. Dec 13 09:10:54.106055 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 13 09:10:54.106077 systemd-tmpfiles[1247]: Skipping /boot Dec 13 09:10:54.352278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:10:54.442407 systemd[1]: Reloading finished in 526 ms. Dec 13 09:10:54.466371 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 09:10:54.467968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:10:54.504633 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:10:54.512266 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 09:10:54.526529 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 09:10:54.538318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 09:10:54.545891 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:10:54.550212 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 09:10:54.566622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.567150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:54.580246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:10:54.583792 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:10:54.591460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:10:54.592500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:54.592732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.601083 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.601420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:54.601678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:54.614500 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 09:10:54.616418 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.631636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.632130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:54.643143 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 09:10:54.645675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
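Editor's note: the (sd-merge) lines above show systemd-sysext overlaying four extension images onto /usr, which is what triggers the daemon reload that follows. A rough sketch of the discovery step, using the search directories documented in systemd-sysext(8); the log itself only names the merged extensions:

```python
from pathlib import Path

# Extension images (*.raw files, symlinks, or plain directories) found in
# these directories are overlaid onto /usr. The kubernetes.raw symlink
# written during the Ignition files stage lands in /etc/extensions.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

found = sorted(
    entry.name
    for d in SEARCH_DIRS
    for entry in Path(d).glob("*")
    if entry.suffix == ".raw" or entry.is_dir()
)
print("Using extensions:", ", ".join(found) or "(none)")
```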
Dec 13 09:10:54.648186 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:54.654505 systemd[1]: Finished ensure-sysext.service. Dec 13 09:10:54.669778 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 09:10:54.681335 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 09:10:54.689301 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 09:10:54.705961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 09:10:54.706223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:10:54.731848 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 09:10:54.734566 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Dec 13 09:10:54.749032 augenrules[1351]: No rules Dec 13 09:10:54.753145 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 09:10:54.767452 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 09:10:54.768439 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 09:10:54.772218 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:10:54.772488 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:10:54.779835 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:10:54.790119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:10:54.790824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:10:54.792863 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:10:54.800534 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 09:10:54.804253 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 09:10:54.821377 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:10:54.829023 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 09:10:54.831442 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:10:54.850147 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 09:10:55.048398 systemd-resolved[1329]: Positive Trust Anchors: Dec 13 09:10:55.049043 systemd-networkd[1365]: lo: Link UP Dec 13 09:10:55.049056 systemd-networkd[1365]: lo: Gained carrier Dec 13 09:10:55.050359 systemd-networkd[1365]: Enumeration completed Dec 13 09:10:55.050552 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:10:55.053930 systemd-resolved[1329]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:10:55.053987 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:10:55.061486 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 09:10:55.065878 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 09:10:55.066707 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 09:10:55.071559 systemd-resolved[1329]: Using system hostname 'ci-4081.2.1-b-8823ebc6cf'. Dec 13 09:10:55.075686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:10:55.079241 systemd[1]: Reached target network.target - Network. Dec 13 09:10:55.079967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:10:55.083379 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 09:10:55.095120 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1375) Dec 13 09:10:55.120412 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1376) Dec 13 09:10:55.123136 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 13 09:10:55.123949 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:55.124169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 09:10:55.132303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 09:10:55.135226 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 09:10:55.138351 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 09:10:55.139481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 09:10:55.139554 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 09:10:55.139581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 09:10:55.147964 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1376) Dec 13 09:10:55.174979 kernel: ISO 9660 Extensions: RRIP_1991A Dec 13 09:10:55.178011 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 13 09:10:55.180433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 09:10:55.180978 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 09:10:55.195248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
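Editor's note: the positive trust anchor logged above is the DNS root zone's DS record for the 2017 key-signing key; systemd-resolved compiles it in as the DNSSEC root of trust. Decoded per RFC 4034:

```python
# The root trust anchor from the log, split into the four DS-record fields.
key_tag = 20326          # identifies the root KSK introduced in 2017 (KSK-2017)
algorithm = 8            # RSA/SHA-256
digest_type = 2          # SHA-256 digest of the DNSKEY
digest = "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

# Reassembles exactly the record systemd-resolved printed.
print(f". IN DS {key_tag} {algorithm} {digest_type} {digest}")
```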
Dec 13 09:10:55.197061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 09:10:55.226535 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 09:10:55.227781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 09:10:55.230548 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 09:10:55.230696 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 09:10:55.255620 systemd-networkd[1365]: eth1: Configuring with /run/systemd/network/10-4e:b1:a2:a0:f5:c4.network. Dec 13 09:10:55.258949 systemd-networkd[1365]: eth1: Link UP Dec 13 09:10:55.258962 systemd-networkd[1365]: eth1: Gained carrier Dec 13 09:10:55.264249 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Dec 13 09:10:55.278798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 09:10:55.285186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 09:10:55.293805 systemd-networkd[1365]: eth0: Configuring with /run/systemd/network/10-a6:15:c8:87:79:6e.network. Dec 13 09:10:55.297626 systemd-networkd[1365]: eth0: Link UP Dec 13 09:10:55.297638 systemd-networkd[1365]: eth0: Gained carrier Dec 13 09:10:55.308965 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 09:10:55.326969 kernel: ACPI: button: Power Button [PWRF] Dec 13 09:10:55.332447 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 09:10:55.358171 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 09:10:55.432637 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 09:10:55.435942 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 13 09:10:55.441279 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 13 09:10:55.446728 kernel: Console: switching to colour dummy device 80x25 Dec 13 09:10:55.446944 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 09:10:55.446968 kernel: [drm] features: -context_init Dec 13 09:10:55.448946 kernel: [drm] number of scanouts: 1 Dec 13 09:10:55.449036 kernel: [drm] number of cap sets: 0 Dec 13 09:10:55.450989 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Dec 13 09:10:55.470636 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 13 09:10:55.470736 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 09:10:55.482964 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 09:10:55.498047 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 09:10:55.504492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:55.514404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:55.514811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:55.533413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:55.554222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 09:10:55.554539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
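Editor's note: the unit names /run/systemd/network/10-a6:15:c8:87:79:6e.network and 10-4e:b1:a2:a0:f5:c4.network above encode the MAC address each file matches; the log does not show their contents. As an illustration only, a MAC-matched unit of that kind has the following shape (keys per systemd.network(5); the DHCP setting is an assumption):

```python
# Generate the minimal shape of a MAC-matched systemd-networkd unit.
mac = "4e:b1:a2:a0:f5:c4"  # eth1's MAC, taken from the unit name in the log
unit = f"""\
[Match]
MACAddress={mac}

[Network]
DHCP=ipv4
"""
print(unit)
```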
Dec 13 09:10:55.568291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:10:55.734702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:10:55.771938 kernel: EDAC MC: Ver: 3.0.0 Dec 13 09:10:55.803057 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 09:10:55.811246 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 09:10:55.834963 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 09:10:55.864667 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 09:10:55.867322 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:10:55.867462 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 09:10:55.867688 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 09:10:55.867796 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 09:10:55.868741 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 09:10:55.870158 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 09:10:55.870318 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 09:10:55.870411 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 09:10:55.870467 systemd[1]: Reached target paths.target - Path Units. Dec 13 09:10:55.870568 systemd[1]: Reached target timers.target - Timer Units. Dec 13 09:10:55.873086 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 09:10:55.878415 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 09:10:55.886818 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 09:10:55.898401 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 09:10:55.902802 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 09:10:55.905545 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:10:55.908437 systemd[1]: Reached target basic.target - Basic System. Dec 13 09:10:55.909617 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:10:55.909660 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 09:10:55.911371 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 09:10:55.919301 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 09:10:55.933280 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 09:10:55.971243 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 09:10:55.984495 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 09:10:55.991192 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 09:10:55.991803 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Dec 13 09:10:55.997296 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 09:10:56.010130 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 09:10:56.018256 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 09:10:56.027194 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 09:10:56.036963 jq[1437]: false Dec 13 09:10:56.033218 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 09:10:56.035328 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 09:10:56.036165 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 09:10:56.039184 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 09:10:56.044047 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 09:10:56.047092 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 09:10:56.068669 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 09:10:56.069081 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 09:10:56.078126 coreos-metadata[1433]: Dec 13 09:10:56.069 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:56.101200 coreos-metadata[1433]: Dec 13 09:10:56.087 INFO Fetch successful Dec 13 09:10:56.126383 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 09:10:56.127034 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 09:10:56.142606 jq[1445]: true Dec 13 09:10:56.145951 dbus-daemon[1434]: [system] SELinux support is enabled Dec 13 09:10:56.153262 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 09:10:56.157793 update_engine[1444]: I20241213 09:10:56.157686 1444 main.cc:92] Flatcar Update Engine starting Dec 13 09:10:56.175948 update_engine[1444]: I20241213 09:10:56.175657 1444 update_check_scheduler.cc:74] Next update check in 7m25s Dec 13 09:10:56.176809 extend-filesystems[1438]: Found loop4 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found loop5 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found loop6 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found loop7 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found vda Dec 13 09:10:56.208493 extend-filesystems[1438]: Found vda1 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found vda2 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found vda3 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found usr Dec 13 09:10:56.208493 extend-filesystems[1438]: Found vda4 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found vda6 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found vda7 Dec 13 09:10:56.208493 extend-filesystems[1438]: Found vda9 Dec 13 09:10:56.208493 extend-filesystems[1438]: Checking size of /dev/vda9 Dec 13 09:10:56.179858 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 13 09:10:56.310028 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 13 09:10:56.310108 extend-filesystems[1438]: Resized partition /dev/vda9 Dec 13 09:10:56.180997 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 09:10:56.327582 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024) Dec 13 09:10:56.336035 tar[1455]: linux-amd64/helm Dec 13 09:10:56.183880 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 09:10:56.187144 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 13 09:10:56.336870 jq[1463]: true Dec 13 09:10:56.187200 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 09:10:56.190348 systemd[1]: Started update-engine.service - Update Engine. Dec 13 09:10:56.200408 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 09:10:56.201240 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 09:10:56.203684 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 09:10:56.206286 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 09:10:56.273517 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 09:10:56.274564 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 09:10:56.395948 systemd-networkd[1365]: eth1: Gained IPv6LL Dec 13 09:10:56.409678 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 09:10:56.411396 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 09:10:56.428286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:10:56.437247 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 09:10:56.494077 systemd-logind[1443]: New seat seat0. Dec 13 09:10:56.506570 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 09:10:56.506607 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 09:10:56.507075 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 09:10:56.532421 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1367) Dec 13 09:10:56.534269 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:10:56.537863 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 09:10:56.558476 systemd[1]: Starting sshkeys.service... Dec 13 09:10:56.611864 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 09:10:56.644414 systemd-networkd[1365]: eth0: Gained IPv6LL Dec 13 09:10:56.706958 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 09:10:56.720064 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 09:10:56.730439 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
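extend-filesystems has found the root partition and kicked off an online ext4 grow (553472 to 15121403 blocks); the resize2fs output further down confirms completion. The manual equivalent would be roughly (a sketch, assuming /dev/vda9 is the mounted root filesystem as in this log):

    lsblk /dev/vda         # confirm the partition layout first
    resize2fs /dev/vda9    # grow the mounted ext4 fs to fill the partition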
Dec 13 09:10:56.768463 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 09:10:56.768463 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 09:10:56.768463 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 09:10:56.758711 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 09:10:56.786290 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Dec 13 09:10:56.786290 extend-filesystems[1438]: Found vdb Dec 13 09:10:56.760195 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 09:10:56.892496 coreos-metadata[1513]: Dec 13 09:10:56.890 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 13 09:10:56.899199 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 09:10:56.911738 coreos-metadata[1513]: Dec 13 09:10:56.909 INFO Fetch successful Dec 13 09:10:56.951148 unknown[1513]: wrote ssh authorized keys file for user: core Dec 13 09:10:57.020072 update-ssh-keys[1524]: Updated "/home/core/.ssh/authorized_keys" Dec 13 09:10:57.031742 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 09:10:57.041826 systemd[1]: Finished sshkeys.service. Dec 13 09:10:57.121311 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 09:10:57.177797 containerd[1457]: time="2024-12-13T09:10:57.175691733Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 09:10:57.213558 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 09:10:57.226575 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 09:10:57.265173 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 09:10:57.265502 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 09:10:57.280534 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 09:10:57.294933 containerd[1457]: time="2024-12-13T09:10:57.294832395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.303558 containerd[1457]: time="2024-12-13T09:10:57.303469461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.305884 containerd[1457]: time="2024-12-13T09:10:57.303946905Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 09:10:57.307958 containerd[1457]: time="2024-12-13T09:10:57.306421219Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 09:10:57.308629 containerd[1457]: time="2024-12-13T09:10:57.308443091Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 09:10:57.308629 containerd[1457]: time="2024-12-13T09:10:57.308500753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.308629 containerd[1457]: time="2024-12-13T09:10:57.308589578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.308811 containerd[1457]: time="2024-12-13T09:10:57.308794239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.309722 containerd[1457]: time="2024-12-13T09:10:57.309684628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.309985 containerd[1457]: time="2024-12-13T09:10:57.309955778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.310218 containerd[1457]: time="2024-12-13T09:10:57.310177711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.310314 containerd[1457]: time="2024-12-13T09:10:57.310296465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.312375 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 09:10:57.314276 containerd[1457]: time="2024-12-13T09:10:57.313630238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.314276 containerd[1457]: time="2024-12-13T09:10:57.314034346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 09:10:57.314276 containerd[1457]: time="2024-12-13T09:10:57.314242428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 09:10:57.314561 containerd[1457]: time="2024-12-13T09:10:57.314488211Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 09:10:57.314746 containerd[1457]: time="2024-12-13T09:10:57.314723833Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 09:10:57.314899 containerd[1457]: time="2024-12-13T09:10:57.314884184Z" level=info msg="metadata content store policy set" policy=shared Dec 13 09:10:57.324107 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 09:10:57.328364 containerd[1457]: time="2024-12-13T09:10:57.328306044Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 09:10:57.329011 containerd[1457]: time="2024-12-13T09:10:57.328609551Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 09:10:57.332959 containerd[1457]: time="2024-12-13T09:10:57.330119106Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 09:10:57.332959 containerd[1457]: time="2024-12-13T09:10:57.332130734Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 09:10:57.332959 containerd[1457]: time="2024-12-13T09:10:57.332167025Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 09:10:57.332959 containerd[1457]: time="2024-12-13T09:10:57.332495790Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 09:10:57.334951 containerd[1457]: time="2024-12-13T09:10:57.334446440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 09:10:57.335379 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 09:10:57.336109 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337063447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337096891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337146071Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337171679Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337194211Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337234182Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337258696Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337286430Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337335297Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337353260Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337383846Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337406770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337424523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.337464 containerd[1457]: time="2024-12-13T09:10:57.337437839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.337891680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339432515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339532788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339547908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339563153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339577119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339609376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339626146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339650784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339667482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339700683Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339728114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339790156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.340149 containerd[1457]: time="2024-12-13T09:10:57.339809909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 09:10:57.340484 containerd[1457]: time="2024-12-13T09:10:57.339934226Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 09:10:57.340484 containerd[1457]: time="2024-12-13T09:10:57.339959142Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 09:10:57.340484 containerd[1457]: time="2024-12-13T09:10:57.340083494Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 09:10:57.340484 containerd[1457]: time="2024-12-13T09:10:57.340098387Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 09:10:57.340484 containerd[1457]: time="2024-12-13T09:10:57.340108165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.344934 containerd[1457]: time="2024-12-13T09:10:57.340967796Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 09:10:57.344934 containerd[1457]: time="2024-12-13T09:10:57.340997696Z" level=info msg="NRI interface is disabled by configuration." 
Dec 13 09:10:57.344934 containerd[1457]: time="2024-12-13T09:10:57.341026982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 09:10:57.345071 containerd[1457]: time="2024-12-13T09:10:57.344349469Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 09:10:57.345071 containerd[1457]: time="2024-12-13T09:10:57.344529075Z" level=info msg="Connect containerd service" Dec 13 09:10:57.345071 containerd[1457]: time="2024-12-13T09:10:57.344630574Z" level=info msg="using legacy CRI server" Dec 13 09:10:57.345071 containerd[1457]: time="2024-12-13T09:10:57.344664724Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 09:10:57.345071 containerd[1457]: time="2024-12-13T09:10:57.344828213Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 09:10:57.347674 containerd[1457]: time="2024-12-13T09:10:57.346853265Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 09:10:57.347674 containerd[1457]: time="2024-12-13T09:10:57.347400582Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 09:10:57.347674 containerd[1457]: time="2024-12-13T09:10:57.347484138Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 09:10:57.347674 containerd[1457]: time="2024-12-13T09:10:57.347575322Z" level=info msg="Start subscribing containerd event" Dec 13 09:10:57.347674 containerd[1457]: time="2024-12-13T09:10:57.347629812Z" level=info msg="Start recovering state" Dec 13 09:10:57.347917 containerd[1457]: time="2024-12-13T09:10:57.347701568Z" level=info msg="Start event monitor" Dec 13 09:10:57.347917 containerd[1457]: time="2024-12-13T09:10:57.347716284Z" level=info msg="Start snapshots syncer" Dec 13 09:10:57.347917 containerd[1457]: time="2024-12-13T09:10:57.347728099Z" level=info msg="Start cni network conf syncer for default" Dec 13 09:10:57.347917 containerd[1457]: time="2024-12-13T09:10:57.347735126Z" level=info msg="Start streaming server" Dec 13 09:10:57.347894 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 09:10:57.353285 containerd[1457]: time="2024-12-13T09:10:57.352462600Z" level=info msg="containerd successfully booted in 0.183132s" Dec 13 09:10:57.482920 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 09:10:57.497070 systemd[1]: Started sshd@0-146.190.159.183:22-147.75.109.163:43412.service - OpenSSH per-connection server daemon (147.75.109.163:43412). Dec 13 09:10:57.643002 sshd[1549]: Accepted publickey for core from 147.75.109.163 port 43412 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:57.648366 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:57.670989 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 09:10:57.686718 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 09:10:57.695532 systemd-logind[1443]: New session 1 of user core. Dec 13 09:10:57.700265 tar[1455]: linux-amd64/LICENSE Dec 13 09:10:57.700265 tar[1455]: linux-amd64/README.md Dec 13 09:10:57.733846 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 09:10:57.737046 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 09:10:57.752342 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 09:10:57.769462 (systemd)[1556]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 09:10:57.956056 systemd[1556]: Queued start job for default target default.target. Dec 13 09:10:57.963401 systemd[1556]: Created slice app.slice - User Application Slice. Dec 13 09:10:57.963459 systemd[1556]: Reached target paths.target - Paths. Dec 13 09:10:57.963482 systemd[1556]: Reached target timers.target - Timers. Dec 13 09:10:57.966121 systemd[1556]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 09:10:57.990314 systemd[1556]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 09:10:57.990539 systemd[1556]: Reached target sockets.target - Sockets. Dec 13 09:10:57.990564 systemd[1556]: Reached target basic.target - Basic System. Dec 13 09:10:57.990633 systemd[1556]: Reached target default.target - Main User Target. 
Dec 13 09:10:57.990678 systemd[1556]: Startup finished in 208ms. Dec 13 09:10:57.990820 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 09:10:58.002634 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 09:10:58.093595 systemd[1]: Started sshd@1-146.190.159.183:22-147.75.109.163:43420.service - OpenSSH per-connection server daemon (147.75.109.163:43420). Dec 13 09:10:58.178031 sshd[1567]: Accepted publickey for core from 147.75.109.163 port 43420 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:58.180179 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:58.187968 systemd-logind[1443]: New session 2 of user core. Dec 13 09:10:58.196250 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 09:10:58.277839 sshd[1567]: pam_unix(sshd:session): session closed for user core Dec 13 09:10:58.289735 systemd[1]: sshd@1-146.190.159.183:22-147.75.109.163:43420.service: Deactivated successfully. Dec 13 09:10:58.293850 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 09:10:58.297764 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Dec 13 09:10:58.306300 systemd[1]: Started sshd@2-146.190.159.183:22-147.75.109.163:43426.service - OpenSSH per-connection server daemon (147.75.109.163:43426). Dec 13 09:10:58.314817 systemd-logind[1443]: Removed session 2. Dec 13 09:10:58.371430 sshd[1574]: Accepted publickey for core from 147.75.109.163 port 43426 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:10:58.372146 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:10:58.379759 systemd-logind[1443]: New session 3 of user core. Dec 13 09:10:58.386544 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 09:10:58.428190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:10:58.432536 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 09:10:58.435560 systemd[1]: Startup finished in 1.411s (kernel) + 7.189s (initrd) + 8.534s (userspace) = 17.135s. Dec 13 09:10:58.436762 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:10:58.476883 sshd[1574]: pam_unix(sshd:session): session closed for user core Dec 13 09:10:58.482455 systemd[1]: sshd@2-146.190.159.183:22-147.75.109.163:43426.service: Deactivated successfully. Dec 13 09:10:58.486800 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 09:10:58.489929 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Dec 13 09:10:58.492318 systemd-logind[1443]: Removed session 3. Dec 13 09:10:59.355607 kubelet[1582]: E1213 09:10:59.355509 1582 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:10:59.360134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:10:59.360352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:10:59.361293 systemd[1]: kubelet.service: Consumed 1.582s CPU time. Dec 13 09:11:02.548505 systemd-timesyncd[1346]: Contacted time server 23.150.40.242:123 (1.flatcar.pool.ntp.org). 
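The kubelet failure above ("/var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet joined a cluster: that file is written by kubeadm during init/join, so until then every restart dies the same way, as the repeats further down show. A quick check (sketch):

    ls -l /var/lib/kubelet/config.yaml
    journalctl -u kubelet --no-pager -n 20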
Dec 13 09:11:02.548608 systemd-timesyncd[1346]: Initial clock synchronization to Fri 2024-12-13 09:11:02.548166 UTC. Dec 13 09:11:02.548809 systemd-resolved[1329]: Clock change detected. Flushing caches. Dec 13 09:11:09.398406 systemd[1]: Started sshd@3-146.190.159.183:22-147.75.109.163:43332.service - OpenSSH per-connection server daemon (147.75.109.163:43332). Dec 13 09:11:09.437834 sshd[1598]: Accepted publickey for core from 147.75.109.163 port 43332 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:09.440108 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:09.447958 systemd-logind[1443]: New session 4 of user core. Dec 13 09:11:09.457359 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 09:11:09.522271 sshd[1598]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:09.533144 systemd[1]: sshd@3-146.190.159.183:22-147.75.109.163:43332.service: Deactivated successfully. Dec 13 09:11:09.535266 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 09:11:09.536994 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Dec 13 09:11:09.553824 systemd[1]: Started sshd@4-146.190.159.183:22-147.75.109.163:43338.service - OpenSSH per-connection server daemon (147.75.109.163:43338). Dec 13 09:11:09.555921 systemd-logind[1443]: Removed session 4. Dec 13 09:11:09.593097 sshd[1605]: Accepted publickey for core from 147.75.109.163 port 43338 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:09.595916 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:09.602004 systemd-logind[1443]: New session 5 of user core. Dec 13 09:11:09.612235 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 09:11:09.669805 sshd[1605]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:09.682139 systemd[1]: sshd@4-146.190.159.183:22-147.75.109.163:43338.service: Deactivated successfully. Dec 13 09:11:09.685734 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 09:11:09.687656 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Dec 13 09:11:09.693393 systemd[1]: Started sshd@5-146.190.159.183:22-147.75.109.163:43352.service - OpenSSH per-connection server daemon (147.75.109.163:43352). Dec 13 09:11:09.695513 systemd-logind[1443]: Removed session 5. Dec 13 09:11:09.746590 sshd[1612]: Accepted publickey for core from 147.75.109.163 port 43352 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:09.748667 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:09.756211 systemd-logind[1443]: New session 6 of user core. Dec 13 09:11:09.763582 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 09:11:09.830685 sshd[1612]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:09.848822 systemd[1]: sshd@5-146.190.159.183:22-147.75.109.163:43352.service: Deactivated successfully. Dec 13 09:11:09.851029 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 09:11:09.853040 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Dec 13 09:11:09.862420 systemd[1]: Started sshd@6-146.190.159.183:22-147.75.109.163:43368.service - OpenSSH per-connection server daemon (147.75.109.163:43368). Dec 13 09:11:09.864432 systemd-logind[1443]: Removed session 6. 
Dec 13 09:11:09.922624 sshd[1619]: Accepted publickey for core from 147.75.109.163 port 43368 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:09.926191 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:09.941256 systemd-logind[1443]: New session 7 of user core. Dec 13 09:11:09.953493 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 09:11:10.036099 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 09:11:10.036535 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:11:10.058254 sudo[1622]: pam_unix(sudo:session): session closed for user root Dec 13 09:11:10.064342 sshd[1619]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:10.080205 systemd[1]: sshd@6-146.190.159.183:22-147.75.109.163:43368.service: Deactivated successfully. Dec 13 09:11:10.083252 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 09:11:10.100204 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Dec 13 09:11:10.108581 systemd[1]: Started sshd@7-146.190.159.183:22-147.75.109.163:43378.service - OpenSSH per-connection server daemon (147.75.109.163:43378). Dec 13 09:11:10.110379 systemd-logind[1443]: Removed session 7. Dec 13 09:11:10.176471 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 43378 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:10.179613 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:10.189005 systemd-logind[1443]: New session 8 of user core. Dec 13 09:11:10.200430 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 09:11:10.270201 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 09:11:10.271419 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:11:10.272978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 09:11:10.281389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:10.289999 sudo[1631]: pam_unix(sudo:session): session closed for user root Dec 13 09:11:10.301261 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 09:11:10.301769 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:11:10.326480 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 09:11:10.336329 auditctl[1637]: No rules Dec 13 09:11:10.339445 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 09:11:10.340015 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 09:11:10.352502 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:11:10.420171 augenrules[1655]: No rules Dec 13 09:11:10.423205 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 09:11:10.425408 sudo[1630]: pam_unix(sudo:session): session closed for user root Dec 13 09:11:10.435306 sshd[1627]: pam_unix(sshd:session): session closed for user core Dec 13 09:11:10.456560 systemd[1]: Started sshd@8-146.190.159.183:22-147.75.109.163:43392.service - OpenSSH per-connection server daemon (147.75.109.163:43392). 
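The sudo sequence above (stop audit-rules, auditctl reports "No rules", then augenrules reloads) is equivalent to running, as root (a sketch):

    auditctl -D          # delete all currently loaded audit rules
    augenrules --load    # rebuild and load rules from /etc/audit/rules.d
    auditctl -l          # list what is loaded now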
Dec 13 09:11:10.458830 systemd[1]: sshd@7-146.190.159.183:22-147.75.109.163:43378.service: Deactivated successfully. Dec 13 09:11:10.477025 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 09:11:10.479643 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Dec 13 09:11:10.487508 systemd-logind[1443]: Removed session 8. Dec 13 09:11:10.536270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:10.547210 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:11:10.575377 sshd[1661]: Accepted publickey for core from 147.75.109.163 port 43392 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:11:10.577998 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:11:10.590053 systemd-logind[1443]: New session 9 of user core. Dec 13 09:11:10.594333 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 09:11:10.671841 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 09:11:10.672327 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:11:10.684105 kubelet[1670]: E1213 09:11:10.683974 1670 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:11:10.696706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:11:10.696909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:11:11.610661 (dockerd)[1695]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 09:11:11.611878 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 09:11:12.447779 dockerd[1695]: time="2024-12-13T09:11:12.446322521Z" level=info msg="Starting up" Dec 13 09:11:12.710043 dockerd[1695]: time="2024-12-13T09:11:12.709687911Z" level=info msg="Loading containers: start." Dec 13 09:11:13.018978 kernel: Initializing XFRM netlink socket Dec 13 09:11:13.243195 systemd-networkd[1365]: docker0: Link UP Dec 13 09:11:13.305172 dockerd[1695]: time="2024-12-13T09:11:13.304915997Z" level=info msg="Loading containers: done." Dec 13 09:11:13.364203 dockerd[1695]: time="2024-12-13T09:11:13.364109105Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 09:11:13.364559 dockerd[1695]: time="2024-12-13T09:11:13.364277873Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 09:11:13.364559 dockerd[1695]: time="2024-12-13T09:11:13.364455667Z" level=info msg="Daemon has completed initialization" Dec 13 09:11:13.371995 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck732196817-merged.mount: Deactivated successfully. 
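dockerd comes up on overlay2 below; its "Not using native diff" line is an informational warning tied to the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR setting, not an error. The chosen storage driver and daemon version can be confirmed with (sketch):

    docker info --format '{{.Driver}}'             # expect: overlay2
    docker version --format '{{.Server.Version}}'  # expect: 26.1.0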
Dec 13 09:11:13.562368 dockerd[1695]: time="2024-12-13T09:11:13.562233338Z" level=info msg="API listen on /run/docker.sock" Dec 13 09:11:13.563150 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 09:11:15.429053 containerd[1457]: time="2024-12-13T09:11:15.428891913Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 09:11:15.448646 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 13 09:11:16.327460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947498915.mount: Deactivated successfully. Dec 13 09:11:18.536148 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 13 09:11:20.601249 containerd[1457]: time="2024-12-13T09:11:20.600714761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:20.605224 containerd[1457]: time="2024-12-13T09:11:20.605061834Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Dec 13 09:11:20.613961 containerd[1457]: time="2024-12-13T09:11:20.613859508Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:20.632246 containerd[1457]: time="2024-12-13T09:11:20.630237243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:20.632246 containerd[1457]: time="2024-12-13T09:11:20.631903820Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 5.202911727s" Dec 13 09:11:20.632246 containerd[1457]: time="2024-12-13T09:11:20.632009221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 09:11:20.776182 containerd[1457]: time="2024-12-13T09:11:20.776129940Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 09:11:20.808501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 09:11:20.852902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:21.182516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
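The kube-apiserver pull just logged is a CRI-level pull through containerd (note the containerd[1457] prefix), not a docker pull. The same images can be pulled or listed by hand with crictl, pointed at the socket containerd advertised earlier (sketch):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.30.8
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images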
Dec 13 09:11:21.182704 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:11:21.309967 kubelet[1913]: E1213 09:11:21.309728 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:11:21.315492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:11:21.315701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:11:24.495764 containerd[1457]: time="2024-12-13T09:11:24.493967378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:24.495764 containerd[1457]: time="2024-12-13T09:11:24.495669967Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Dec 13 09:11:24.496709 containerd[1457]: time="2024-12-13T09:11:24.496668443Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:24.505161 containerd[1457]: time="2024-12-13T09:11:24.505091645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:24.506916 containerd[1457]: time="2024-12-13T09:11:24.506863864Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 3.730458508s" Dec 13 09:11:24.507364 containerd[1457]: time="2024-12-13T09:11:24.507326049Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 09:11:24.544096 containerd[1457]: time="2024-12-13T09:11:24.544052490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 09:11:24.546618 systemd-resolved[1329]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
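systemd-resolved stepping down from UDP+EDNS0 to plain UDP, and later to TCP, suggests the upstream resolvers answered its feature probes inconsistently; lookups still work, just with a degraded feature set. The current per-link DNS configuration can be inspected with (sketch):

    resolvectl status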
Dec 13 09:11:25.922864 containerd[1457]: time="2024-12-13T09:11:25.922767782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:25.924551 containerd[1457]: time="2024-12-13T09:11:25.924486047Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Dec 13 09:11:25.926230 containerd[1457]: time="2024-12-13T09:11:25.925587837Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:25.930285 containerd[1457]: time="2024-12-13T09:11:25.930206902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:25.932503 containerd[1457]: time="2024-12-13T09:11:25.932440465Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.388101616s" Dec 13 09:11:25.932710 containerd[1457]: time="2024-12-13T09:11:25.932686593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 09:11:25.966486 containerd[1457]: time="2024-12-13T09:11:25.966435063Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 09:11:27.311704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396970169.mount: Deactivated successfully. 
Dec 13 09:11:27.946009 containerd[1457]: time="2024-12-13T09:11:27.945894765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:27.949305 containerd[1457]: time="2024-12-13T09:11:27.949033224Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Dec 13 09:11:27.950601 containerd[1457]: time="2024-12-13T09:11:27.950154351Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:27.953951 containerd[1457]: time="2024-12-13T09:11:27.953863541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:27.954962 containerd[1457]: time="2024-12-13T09:11:27.954882710Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.988403427s" Dec 13 09:11:27.954962 containerd[1457]: time="2024-12-13T09:11:27.954949311Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 09:11:27.993196 containerd[1457]: time="2024-12-13T09:11:27.993106838Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 09:11:28.606430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782142862.mount: Deactivated successfully. 
Dec 13 09:11:29.766648 containerd[1457]: time="2024-12-13T09:11:29.766588933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:29.768690 containerd[1457]: time="2024-12-13T09:11:29.768612420Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 09:11:29.770069 containerd[1457]: time="2024-12-13T09:11:29.769990818Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:29.773859 containerd[1457]: time="2024-12-13T09:11:29.773789813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:29.775799 containerd[1457]: time="2024-12-13T09:11:29.775702423Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.782507697s" Dec 13 09:11:29.775799 containerd[1457]: time="2024-12-13T09:11:29.775769690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 09:11:29.809206 containerd[1457]: time="2024-12-13T09:11:29.809114801Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 09:11:30.265219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117614878.mount: Deactivated successfully. 
Dec 13 09:11:30.276979 containerd[1457]: time="2024-12-13T09:11:30.275901245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:30.277963 containerd[1457]: time="2024-12-13T09:11:30.277378755Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 09:11:30.277963 containerd[1457]: time="2024-12-13T09:11:30.277479740Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:30.280911 containerd[1457]: time="2024-12-13T09:11:30.280826623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:30.282111 containerd[1457]: time="2024-12-13T09:11:30.282063802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 472.855312ms" Dec 13 09:11:30.282243 containerd[1457]: time="2024-12-13T09:11:30.282118212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 09:11:30.316433 containerd[1457]: time="2024-12-13T09:11:30.316097838Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 09:11:30.851918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3386678651.mount: Deactivated successfully. Dec 13 09:11:31.558389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 09:11:31.570305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:31.786567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:31.786716 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:11:31.886105 kubelet[2044]: E1213 09:11:31.885405 2044 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:11:31.890256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:11:31.890407 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
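Each failed kubelet start ends with systemd scheduling another attempt ("restart counter is at 3"). The unit's restart policy and counter can be read back with (a sketch; the NRestarts property requires a reasonably recent systemd):

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts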
Dec 13 09:11:33.403979 containerd[1457]: time="2024-12-13T09:11:33.402518651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:33.404978 containerd[1457]: time="2024-12-13T09:11:33.404905202Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Dec 13 09:11:33.406483 containerd[1457]: time="2024-12-13T09:11:33.406422000Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:33.411733 containerd[1457]: time="2024-12-13T09:11:33.411657269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:11:33.413677 containerd[1457]: time="2024-12-13T09:11:33.413608489Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.097457795s" Dec 13 09:11:33.414025 containerd[1457]: time="2024-12-13T09:11:33.413979299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 09:11:38.934462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:38.945473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:39.028367 systemd[1]: Reloading requested from client PID 2138 ('systemctl') (unit session-9.scope)... Dec 13 09:11:39.028398 systemd[1]: Reloading... Dec 13 09:11:39.337398 zram_generator::config[2177]: No configuration found. Dec 13 09:11:39.628665 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:11:39.767700 systemd[1]: Reloading finished in 731 ms. Dec 13 09:11:39.877262 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 09:11:39.877505 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 09:11:39.877885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:39.896654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:40.141307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:40.162377 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:11:40.283834 kubelet[2229]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:40.283834 kubelet[2229]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
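After the daemon reload, kubelet v1.30.1 finally starts with a real config file but warns that --container-runtime-endpoint and --pod-infra-container-image are deprecated flags (with --volume-plugin-dir following just below). On this version those settings belong in the KubeletConfiguration file instead; an illustrative fragment, with values mirroring this log (containerd endpoint, systemd cgroup driver to match containerd's SystemdCgroup=true above):

    # fragment of /var/lib/kubelet/config.yaml (illustrative)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock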
Dec 13 09:11:40.283834 kubelet[2229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:40.286224 kubelet[2229]: I1213 09:11:40.285716 2229 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:11:40.837853 kubelet[2229]: I1213 09:11:40.837409 2229 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 09:11:40.837853 kubelet[2229]: I1213 09:11:40.837449 2229 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:11:40.837853 kubelet[2229]: I1213 09:11:40.837689 2229 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 09:11:40.887459 kubelet[2229]: I1213 09:11:40.887401 2229 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:11:40.889678 kubelet[2229]: E1213 09:11:40.889446 2229 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.159.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:40.912020 kubelet[2229]: I1213 09:11:40.911419 2229 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 09:11:40.913241 kubelet[2229]: I1213 09:11:40.913128 2229 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:11:40.914003 kubelet[2229]: I1213 09:11:40.913499 2229 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-b-8823ebc6cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 09:11:40.914789 kubelet[2229]: I1213 09:11:40.914282 2229 topology_manager.go:138] 
"Creating topology manager with none policy" Dec 13 09:11:40.914789 kubelet[2229]: I1213 09:11:40.914312 2229 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 09:11:40.914789 kubelet[2229]: I1213 09:11:40.914552 2229 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:40.916127 kubelet[2229]: I1213 09:11:40.916065 2229 kubelet.go:400] "Attempting to sync node with API server" Dec 13 09:11:40.916288 kubelet[2229]: I1213 09:11:40.916271 2229 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 09:11:40.916405 kubelet[2229]: I1213 09:11:40.916392 2229 kubelet.go:312] "Adding apiserver pod source" Dec 13 09:11:40.916587 kubelet[2229]: I1213 09:11:40.916571 2229 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:11:40.925789 kubelet[2229]: I1213 09:11:40.925744 2229 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:11:40.930906 kubelet[2229]: I1213 09:11:40.928802 2229 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:11:40.930906 kubelet[2229]: W1213 09:11:40.928973 2229 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 09:11:40.930906 kubelet[2229]: I1213 09:11:40.930138 2229 server.go:1264] "Started kubelet" Dec 13 09:11:40.959621 kubelet[2229]: W1213 09:11:40.959045 2229 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.159.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:40.959621 kubelet[2229]: E1213 09:11:40.959289 2229 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.159.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:40.968303 kubelet[2229]: I1213 09:11:40.968209 2229 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:11:40.968953 kubelet[2229]: E1213 09:11:40.968721 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.159.183:6443/api/v1/namespaces/default/events\": dial tcp 146.190.159.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-b-8823ebc6cf.1810b192e0a81d85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-b-8823ebc6cf,UID:ci-4081.2.1-b-8823ebc6cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-b-8823ebc6cf,},FirstTimestamp:2024-12-13 09:11:40.930096517 +0000 UTC m=+0.757561612,LastTimestamp:2024-12-13 09:11:40.930096517 +0000 UTC m=+0.757561612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-b-8823ebc6cf,}" Dec 13 09:11:40.972171 kubelet[2229]: I1213 09:11:40.970170 2229 server.go:455] "Adding debug handlers to kubelet server" Dec 13 09:11:40.975717 kubelet[2229]: W1213 09:11:40.970060 2229 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://146.190.159.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-b-8823ebc6cf&limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:40.975717 kubelet[2229]: E1213 09:11:40.972485 2229 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.159.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-b-8823ebc6cf&limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:40.975717 kubelet[2229]: I1213 09:11:40.973682 2229 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:11:40.976516 kubelet[2229]: I1213 09:11:40.976480 2229 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:11:40.982787 kubelet[2229]: I1213 09:11:40.982336 2229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:11:41.014158 kubelet[2229]: I1213 09:11:41.012434 2229 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 09:11:41.014158 kubelet[2229]: I1213 09:11:41.013167 2229 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 09:11:41.015819 kubelet[2229]: E1213 09:11:41.015627 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.159.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-b-8823ebc6cf?timeout=10s\": dial tcp 146.190.159.183:6443: connect: connection refused" interval="200ms" Dec 13 09:11:41.017832 kubelet[2229]: I1213 09:11:41.017790 2229 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:11:41.018287 kubelet[2229]: I1213 09:11:41.018254 2229 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:11:41.020467 kubelet[2229]: I1213 09:11:41.019284 2229 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:11:41.021704 kubelet[2229]: E1213 09:11:41.021669 2229 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:11:41.022854 kubelet[2229]: I1213 09:11:41.022826 2229 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:11:41.036339 kubelet[2229]: W1213 09:11:41.036251 2229 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.159.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:41.036339 kubelet[2229]: E1213 09:11:41.036333 2229 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.159.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:41.062499 kubelet[2229]: I1213 09:11:41.062080 2229 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:11:41.062499 kubelet[2229]: I1213 09:11:41.062115 2229 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:11:41.062499 kubelet[2229]: I1213 09:11:41.062200 2229 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:41.078019 kubelet[2229]: I1213 09:11:41.077202 2229 policy_none.go:49] "None policy: Start" Dec 13 09:11:41.081115 kubelet[2229]: I1213 09:11:41.081026 2229 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:11:41.082088 kubelet[2229]: I1213 09:11:41.081995 2229 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:11:41.087080 kubelet[2229]: I1213 09:11:41.086630 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:11:41.090230 kubelet[2229]: I1213 09:11:41.089539 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 09:11:41.090230 kubelet[2229]: I1213 09:11:41.089582 2229 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:11:41.090230 kubelet[2229]: I1213 09:11:41.089633 2229 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 09:11:41.090230 kubelet[2229]: E1213 09:11:41.089694 2229 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:11:41.100340 kubelet[2229]: W1213 09:11:41.100024 2229 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.159.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:41.102495 kubelet[2229]: E1213 09:11:41.102021 2229 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.159.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:41.107737 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 09:11:41.114271 kubelet[2229]: I1213 09:11:41.114179 2229 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.116413 kubelet[2229]: E1213 09:11:41.115317 2229 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.159.183:6443/api/v1/nodes\": dial tcp 146.190.159.183:6443: connect: connection refused" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.124849 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 09:11:41.132200 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 09:11:41.145637 kubelet[2229]: I1213 09:11:41.145548 2229 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:11:41.145968 kubelet[2229]: I1213 09:11:41.145891 2229 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:11:41.146114 kubelet[2229]: I1213 09:11:41.146063 2229 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:11:41.152252 kubelet[2229]: E1213 09:11:41.151058 2229 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-b-8823ebc6cf\" not found" Dec 13 09:11:41.197077 kubelet[2229]: I1213 09:11:41.191094 2229 topology_manager.go:215] "Topology Admit Handler" podUID="fe2440bd480d62864f4d96951d271f99" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.210967 kubelet[2229]: I1213 09:11:41.209396 2229 topology_manager.go:215] "Topology Admit Handler" podUID="080e3fba487346fbdc47f2b3245e3fe9" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.212683 kubelet[2229]: I1213 09:11:41.212634 2229 topology_manager.go:215] "Topology Admit Handler" podUID="9870fa8eba18350c1dfb1f82d8c6fc66" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.217746 kubelet[2229]: E1213 09:11:41.217687 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.159.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-b-8823ebc6cf?timeout=10s\": dial tcp 146.190.159.183:6443: connect: connection refused" interval="400ms" Dec 13 09:11:41.232402 systemd[1]: Created slice kubepods-burstable-podfe2440bd480d62864f4d96951d271f99.slice - libcontainer container kubepods-burstable-podfe2440bd480d62864f4d96951d271f99.slice. Dec 13 09:11:41.272513 systemd[1]: Created slice kubepods-burstable-pod080e3fba487346fbdc47f2b3245e3fe9.slice - libcontainer container kubepods-burstable-pod080e3fba487346fbdc47f2b3245e3fe9.slice. Dec 13 09:11:41.288668 systemd[1]: Created slice kubepods-burstable-pod9870fa8eba18350c1dfb1f82d8c6fc66.slice - libcontainer container kubepods-burstable-pod9870fa8eba18350c1dfb1f82d8c6fc66.slice. 
Dec 13 09:11:41.317391 kubelet[2229]: I1213 09:11:41.317331 2229 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.318507 kubelet[2229]: E1213 09:11:41.317917 2229 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.159.183:6443/api/v1/nodes\": dial tcp 146.190.159.183:6443: connect: connection refused" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.324079 kubelet[2229]: I1213 09:11:41.323779 2229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.324079 kubelet[2229]: I1213 09:11:41.323867 2229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.324079 kubelet[2229]: I1213 09:11:41.323910 2229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.324079 kubelet[2229]: I1213 09:11:41.323971 2229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9870fa8eba18350c1dfb1f82d8c6fc66-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-b-8823ebc6cf\" (UID: \"9870fa8eba18350c1dfb1f82d8c6fc66\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.324079 kubelet[2229]: I1213 09:11:41.324006 2229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.324586 kubelet[2229]: I1213 09:11:41.324032 2229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.324586 kubelet[2229]: I1213 09:11:41.324105 2229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/080e3fba487346fbdc47f2b3245e3fe9-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-b-8823ebc6cf\" (UID: \"080e3fba487346fbdc47f2b3245e3fe9\") " pod="kube-system/kube-scheduler-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.324586 kubelet[2229]: I1213 09:11:41.324183 2229 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9870fa8eba18350c1dfb1f82d8c6fc66-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-b-8823ebc6cf\" (UID: \"9870fa8eba18350c1dfb1f82d8c6fc66\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.324586 kubelet[2229]: I1213 09:11:41.324229 2229 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9870fa8eba18350c1dfb1f82d8c6fc66-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-b-8823ebc6cf\" (UID: \"9870fa8eba18350c1dfb1f82d8c6fc66\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.553044 kubelet[2229]: E1213 09:11:41.552443 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:41.553443 containerd[1457]: time="2024-12-13T09:11:41.553382453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-b-8823ebc6cf,Uid:fe2440bd480d62864f4d96951d271f99,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:41.583735 kubelet[2229]: E1213 09:11:41.582955 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:41.586475 containerd[1457]: time="2024-12-13T09:11:41.586395273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-b-8823ebc6cf,Uid:080e3fba487346fbdc47f2b3245e3fe9,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:41.593373 kubelet[2229]: E1213 09:11:41.592798 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:41.593753 containerd[1457]: time="2024-12-13T09:11:41.593689596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-b-8823ebc6cf,Uid:9870fa8eba18350c1dfb1f82d8c6fc66,Namespace:kube-system,Attempt:0,}" Dec 13 09:11:41.618802 kubelet[2229]: E1213 09:11:41.618703 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.159.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-b-8823ebc6cf?timeout=10s\": dial tcp 146.190.159.183:6443: connect: connection refused" interval="800ms" Dec 13 09:11:41.673591 kubelet[2229]: E1213 09:11:41.673408 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.159.183:6443/api/v1/namespaces/default/events\": dial tcp 146.190.159.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-b-8823ebc6cf.1810b192e0a81d85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-b-8823ebc6cf,UID:ci-4081.2.1-b-8823ebc6cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-b-8823ebc6cf,},FirstTimestamp:2024-12-13 09:11:40.930096517 +0000 UTC m=+0.757561612,LastTimestamp:2024-12-13 09:11:40.930096517 +0000 UTC m=+0.757561612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-b-8823ebc6cf,}" Dec 13 09:11:41.720709 kubelet[2229]: I1213 09:11:41.720667 2229 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:41.721695 kubelet[2229]: E1213 09:11:41.721628 2229 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.159.183:6443/api/v1/nodes\": dial tcp 146.190.159.183:6443: connect: connection refused" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:42.101596 kubelet[2229]: W1213 09:11:42.101461 2229 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.159.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:42.101990 kubelet[2229]: E1213 09:11:42.101693 2229 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.159.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:42.160772 kubelet[2229]: W1213 09:11:42.160673 2229 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.159.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:42.161025 kubelet[2229]: E1213 09:11:42.160810 2229 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.159.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:42.200287 update_engine[1444]: I20241213 09:11:42.200114 1444 update_attempter.cc:509] Updating boot flags... Dec 13 09:11:42.244649 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2266) Dec 13 09:11:42.379977 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2270) Dec 13 09:11:42.427614 kubelet[2229]: E1213 09:11:42.421034 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.159.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-b-8823ebc6cf?timeout=10s\": dial tcp 146.190.159.183:6443: connect: connection refused" interval="1.6s" Dec 13 09:11:42.433358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600825127.mount: Deactivated successfully. 
Dec 13 09:11:42.470041 kubelet[2229]: W1213 09:11:42.468852 2229 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.159.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-b-8823ebc6cf&limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:42.470265 kubelet[2229]: E1213 09:11:42.470152 2229 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.159.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-b-8823ebc6cf&limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:42.476017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2270) Dec 13 09:11:42.524100 containerd[1457]: time="2024-12-13T09:11:42.523844942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:42.529967 kubelet[2229]: I1213 09:11:42.529056 2229 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:42.529967 kubelet[2229]: E1213 09:11:42.529870 2229 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.159.183:6443/api/v1/nodes\": dial tcp 146.190.159.183:6443: connect: connection refused" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:42.531656 containerd[1457]: time="2024-12-13T09:11:42.531326615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:11:42.534479 containerd[1457]: time="2024-12-13T09:11:42.534415256Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:42.538088 containerd[1457]: time="2024-12-13T09:11:42.538006003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 09:11:42.538495 containerd[1457]: time="2024-12-13T09:11:42.538341607Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:42.539828 containerd[1457]: time="2024-12-13T09:11:42.539734925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:11:42.545184 containerd[1457]: time="2024-12-13T09:11:42.545017901Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:42.556827 containerd[1457]: time="2024-12-13T09:11:42.556309352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 962.475831ms" Dec 13 09:11:42.558678 containerd[1457]: time="2024-12-13T09:11:42.557691418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:11:42.562671 containerd[1457]: time="2024-12-13T09:11:42.562597456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 976.101923ms" Dec 13 09:11:42.563182 containerd[1457]: time="2024-12-13T09:11:42.563122257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.009602708s" Dec 13 09:11:42.629003 kubelet[2229]: W1213 09:11:42.628876 2229 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.159.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:42.629003 kubelet[2229]: E1213 09:11:42.629006 2229 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.159.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:42.879850 containerd[1457]: time="2024-12-13T09:11:42.878975300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:42.879850 containerd[1457]: time="2024-12-13T09:11:42.879095086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:42.879850 containerd[1457]: time="2024-12-13T09:11:42.879120734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:42.879850 containerd[1457]: time="2024-12-13T09:11:42.879408112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:42.895036 containerd[1457]: time="2024-12-13T09:11:42.894391167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:42.895036 containerd[1457]: time="2024-12-13T09:11:42.894546385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:42.895036 containerd[1457]: time="2024-12-13T09:11:42.894565793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:42.895036 containerd[1457]: time="2024-12-13T09:11:42.894706643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:42.898842 containerd[1457]: time="2024-12-13T09:11:42.897920436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:11:42.900478 containerd[1457]: time="2024-12-13T09:11:42.899979510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:11:42.900478 containerd[1457]: time="2024-12-13T09:11:42.900029150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:42.900478 containerd[1457]: time="2024-12-13T09:11:42.900178637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:11:42.925317 systemd[1]: Started cri-containerd-12e83e26fdd52dcd981f69a9434803b75d95a618e4180c7e98aa94bd91b752dd.scope - libcontainer container 12e83e26fdd52dcd981f69a9434803b75d95a618e4180c7e98aa94bd91b752dd. Dec 13 09:11:42.948689 systemd[1]: Started cri-containerd-7da3707a8126b2629ddf16774a850459197300c612b797efb01b4ed28539d652.scope - libcontainer container 7da3707a8126b2629ddf16774a850459197300c612b797efb01b4ed28539d652. Dec 13 09:11:42.980416 systemd[1]: Started cri-containerd-5a3da2b98eab047edb7aebe883bf5efbb6879d576668624ab50e4ae1df1e8a57.scope - libcontainer container 5a3da2b98eab047edb7aebe883bf5efbb6879d576668624ab50e4ae1df1e8a57. Dec 13 09:11:42.982866 kubelet[2229]: E1213 09:11:42.982603 2229 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.159.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.159.183:6443: connect: connection refused Dec 13 09:11:43.052367 containerd[1457]: time="2024-12-13T09:11:43.052053128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-b-8823ebc6cf,Uid:080e3fba487346fbdc47f2b3245e3fe9,Namespace:kube-system,Attempt:0,} returns sandbox id \"12e83e26fdd52dcd981f69a9434803b75d95a618e4180c7e98aa94bd91b752dd\"" Dec 13 09:11:43.058622 kubelet[2229]: E1213 09:11:43.058573 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:43.069582 containerd[1457]: time="2024-12-13T09:11:43.067469728Z" level=info msg="CreateContainer within sandbox \"12e83e26fdd52dcd981f69a9434803b75d95a618e4180c7e98aa94bd91b752dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 09:11:43.103024 containerd[1457]: time="2024-12-13T09:11:43.102550022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-b-8823ebc6cf,Uid:9870fa8eba18350c1dfb1f82d8c6fc66,Namespace:kube-system,Attempt:0,} returns sandbox id \"7da3707a8126b2629ddf16774a850459197300c612b797efb01b4ed28539d652\"" Dec 13 09:11:43.105245 kubelet[2229]: E1213 09:11:43.105201 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:43.111681 containerd[1457]: time="2024-12-13T09:11:43.111133561Z" level=info msg="CreateContainer within sandbox \"7da3707a8126b2629ddf16774a850459197300c612b797efb01b4ed28539d652\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 09:11:43.113829 containerd[1457]: time="2024-12-13T09:11:43.113726072Z" level=info msg="CreateContainer 
within sandbox \"12e83e26fdd52dcd981f69a9434803b75d95a618e4180c7e98aa94bd91b752dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"20b809fb4a1a63d116e976e051722d9a8cbb91f044a09c2599e6ad4fce1a299a\"" Dec 13 09:11:43.116195 containerd[1457]: time="2024-12-13T09:11:43.116135543Z" level=info msg="StartContainer for \"20b809fb4a1a63d116e976e051722d9a8cbb91f044a09c2599e6ad4fce1a299a\"" Dec 13 09:11:43.140541 containerd[1457]: time="2024-12-13T09:11:43.140242325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-b-8823ebc6cf,Uid:fe2440bd480d62864f4d96951d271f99,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a3da2b98eab047edb7aebe883bf5efbb6879d576668624ab50e4ae1df1e8a57\"" Dec 13 09:11:43.144275 kubelet[2229]: E1213 09:11:43.143625 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:43.147636 containerd[1457]: time="2024-12-13T09:11:43.147407677Z" level=info msg="CreateContainer within sandbox \"5a3da2b98eab047edb7aebe883bf5efbb6879d576668624ab50e4ae1df1e8a57\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 09:11:43.151966 containerd[1457]: time="2024-12-13T09:11:43.151329276Z" level=info msg="CreateContainer within sandbox \"7da3707a8126b2629ddf16774a850459197300c612b797efb01b4ed28539d652\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"244142054b0f5c9f0c789e2c32c9538f97b9468983e8f91d1f7c745f69a32d5f\"" Dec 13 09:11:43.153689 containerd[1457]: time="2024-12-13T09:11:43.152575329Z" level=info msg="StartContainer for \"244142054b0f5c9f0c789e2c32c9538f97b9468983e8f91d1f7c745f69a32d5f\"" Dec 13 09:11:43.190711 systemd[1]: Started cri-containerd-20b809fb4a1a63d116e976e051722d9a8cbb91f044a09c2599e6ad4fce1a299a.scope - libcontainer container 20b809fb4a1a63d116e976e051722d9a8cbb91f044a09c2599e6ad4fce1a299a. Dec 13 09:11:43.194621 containerd[1457]: time="2024-12-13T09:11:43.194468859Z" level=info msg="CreateContainer within sandbox \"5a3da2b98eab047edb7aebe883bf5efbb6879d576668624ab50e4ae1df1e8a57\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"329233e4fb21591c45789c574efd78336cab5806f6fbc7788c9558eb8a2e2972\"" Dec 13 09:11:43.196688 containerd[1457]: time="2024-12-13T09:11:43.196477759Z" level=info msg="StartContainer for \"329233e4fb21591c45789c574efd78336cab5806f6fbc7788c9558eb8a2e2972\"" Dec 13 09:11:43.228369 systemd[1]: Started cri-containerd-244142054b0f5c9f0c789e2c32c9538f97b9468983e8f91d1f7c745f69a32d5f.scope - libcontainer container 244142054b0f5c9f0c789e2c32c9538f97b9468983e8f91d1f7c745f69a32d5f. Dec 13 09:11:43.264297 systemd[1]: Started cri-containerd-329233e4fb21591c45789c574efd78336cab5806f6fbc7788c9558eb8a2e2972.scope - libcontainer container 329233e4fb21591c45789c574efd78336cab5806f6fbc7788c9558eb8a2e2972. 
Dec 13 09:11:43.327806 containerd[1457]: time="2024-12-13T09:11:43.327490605Z" level=info msg="StartContainer for \"20b809fb4a1a63d116e976e051722d9a8cbb91f044a09c2599e6ad4fce1a299a\" returns successfully" Dec 13 09:11:43.350984 containerd[1457]: time="2024-12-13T09:11:43.350805921Z" level=info msg="StartContainer for \"244142054b0f5c9f0c789e2c32c9538f97b9468983e8f91d1f7c745f69a32d5f\" returns successfully" Dec 13 09:11:43.410403 containerd[1457]: time="2024-12-13T09:11:43.409652860Z" level=info msg="StartContainer for \"329233e4fb21591c45789c574efd78336cab5806f6fbc7788c9558eb8a2e2972\" returns successfully" Dec 13 09:11:44.123983 kubelet[2229]: E1213 09:11:44.123897 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:44.128973 kubelet[2229]: E1213 09:11:44.128884 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:44.133661 kubelet[2229]: I1213 09:11:44.133386 2229 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:44.137639 kubelet[2229]: E1213 09:11:44.137525 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:45.139084 kubelet[2229]: E1213 09:11:45.136445 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:45.139084 kubelet[2229]: E1213 09:11:45.137156 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:46.091979 kubelet[2229]: E1213 09:11:46.091852 2229 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-b-8823ebc6cf\" not found" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:46.105407 kubelet[2229]: I1213 09:11:46.105071 2229 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:46.163016 kubelet[2229]: E1213 09:11:46.162125 2229 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-b-8823ebc6cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:46.163016 kubelet[2229]: E1213 09:11:46.162800 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:11:46.929772 kubelet[2229]: I1213 09:11:46.929671 2229 apiserver.go:52] "Watching apiserver" Dec 13 09:11:46.934526 kubelet[2229]: I1213 09:11:46.934328 2229 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 09:11:48.619867 systemd[1]: Reloading requested from client PID 2516 ('systemctl') (unit session-9.scope)... Dec 13 09:11:48.619895 systemd[1]: Reloading... Dec 13 09:11:48.728294 zram_generator::config[2558]: No configuration found. 
Dec 13 09:11:48.893393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:11:49.034730 systemd[1]: Reloading finished in 413 ms. Dec 13 09:11:49.096690 kubelet[2229]: E1213 09:11:49.096334 2229 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.2.1-b-8823ebc6cf.1810b192e0a81d85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-b-8823ebc6cf,UID:ci-4081.2.1-b-8823ebc6cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-b-8823ebc6cf,},FirstTimestamp:2024-12-13 09:11:40.930096517 +0000 UTC m=+0.757561612,LastTimestamp:2024-12-13 09:11:40.930096517 +0000 UTC m=+0.757561612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-b-8823ebc6cf,}" Dec 13 09:11:49.096988 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:49.109634 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 09:11:49.110331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:49.110471 systemd[1]: kubelet.service: Consumed 1.299s CPU time, 112.3M memory peak, 0B memory swap peak. Dec 13 09:11:49.122190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:11:49.357250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:11:49.359670 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:11:49.446814 kubelet[2606]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:49.447489 kubelet[2606]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 09:11:49.447489 kubelet[2606]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:11:49.447489 kubelet[2606]: I1213 09:11:49.447062 2606 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:11:49.460748 kubelet[2606]: I1213 09:11:49.460687 2606 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 09:11:49.460975 kubelet[2606]: I1213 09:11:49.460771 2606 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:11:49.461322 kubelet[2606]: I1213 09:11:49.461266 2606 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 09:11:49.464559 kubelet[2606]: I1213 09:11:49.464503 2606 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 09:11:49.469305 kubelet[2606]: I1213 09:11:49.468555 2606 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:11:49.484064 kubelet[2606]: I1213 09:11:49.484009 2606 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 09:11:49.484382 kubelet[2606]: I1213 09:11:49.484341 2606 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:11:49.484673 kubelet[2606]: I1213 09:11:49.484387 2606 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-b-8823ebc6cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 09:11:49.485062 kubelet[2606]: I1213 09:11:49.484700 2606 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 09:11:49.485062 kubelet[2606]: I1213 09:11:49.484719 2606 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 09:11:49.485062 kubelet[2606]: I1213 09:11:49.484780 2606 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:49.487978 kubelet[2606]: I1213 09:11:49.485663 2606 kubelet.go:400] "Attempting to sync node with API server" Dec 13 09:11:49.487978 kubelet[2606]: I1213 09:11:49.485719 2606 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 09:11:49.487978 kubelet[2606]: I1213 09:11:49.485761 2606 kubelet.go:312] "Adding apiserver pod source" Dec 13 09:11:49.487978 kubelet[2606]: I1213 09:11:49.485842 2606 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:11:49.491093 kubelet[2606]: I1213 09:11:49.490753 2606 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:11:49.492270 kubelet[2606]: I1213 09:11:49.492250 2606 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:11:49.494381 kubelet[2606]: I1213 09:11:49.494357 2606 server.go:1264] "Started kubelet" Dec 13 09:11:49.499898 
kubelet[2606]: I1213 09:11:49.498800 2606 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:11:49.515968 kubelet[2606]: I1213 09:11:49.515502 2606 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:11:49.519133 kubelet[2606]: I1213 09:11:49.519080 2606 server.go:455] "Adding debug handlers to kubelet server" Dec 13 09:11:49.520956 kubelet[2606]: I1213 09:11:49.520869 2606 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:11:49.521389 kubelet[2606]: I1213 09:11:49.521359 2606 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:11:49.524186 kubelet[2606]: I1213 09:11:49.524041 2606 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 09:11:49.525174 kubelet[2606]: I1213 09:11:49.525101 2606 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 09:11:49.525745 kubelet[2606]: I1213 09:11:49.525707 2606 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:11:49.535464 kubelet[2606]: I1213 09:11:49.535095 2606 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:11:49.535464 kubelet[2606]: I1213 09:11:49.535254 2606 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:11:49.541830 kubelet[2606]: I1213 09:11:49.541301 2606 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:11:49.542664 kubelet[2606]: I1213 09:11:49.542476 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:11:49.545758 kubelet[2606]: I1213 09:11:49.545214 2606 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 09:11:49.545758 kubelet[2606]: I1213 09:11:49.545270 2606 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:11:49.545758 kubelet[2606]: I1213 09:11:49.545326 2606 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 09:11:49.545758 kubelet[2606]: E1213 09:11:49.545460 2606 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:11:49.561425 kubelet[2606]: E1213 09:11:49.560839 2606 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:11:49.631396 kubelet[2606]: I1213 09:11:49.629041 2606 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:49.633088 kubelet[2606]: I1213 09:11:49.633017 2606 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:11:49.633088 kubelet[2606]: I1213 09:11:49.633044 2606 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:11:49.633379 kubelet[2606]: I1213 09:11:49.633180 2606 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:11:49.633740 kubelet[2606]: I1213 09:11:49.633704 2606 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 09:11:49.633856 kubelet[2606]: I1213 09:11:49.633824 2606 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 09:11:49.633961 kubelet[2606]: I1213 09:11:49.633909 2606 policy_none.go:49] "None policy: Start" Dec 13 09:11:49.641653 kubelet[2606]: I1213 09:11:49.641607 2606 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:11:49.642798 kubelet[2606]: I1213 09:11:49.641746 2606 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:11:49.642798 kubelet[2606]: I1213 09:11:49.641994 2606 state_mem.go:75] "Updated machine memory state" Dec 13 09:11:49.646496 kubelet[2606]: E1213 09:11:49.646457 2606 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 09:11:49.648120 sudo[2636]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 09:11:49.648572 sudo[2636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 09:11:49.655747 kubelet[2606]: I1213 09:11:49.655717 2606 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:11:49.657758 kubelet[2606]: I1213 09:11:49.656914 2606 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:11:49.659091 kubelet[2606]: I1213 09:11:49.658292 2606 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:11:49.666476 kubelet[2606]: I1213 09:11:49.665587 2606 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:49.666476 kubelet[2606]: I1213 09:11:49.665689 2606 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:49.847905 kubelet[2606]: I1213 09:11:49.847806 2606 topology_manager.go:215] "Topology Admit Handler" podUID="9870fa8eba18350c1dfb1f82d8c6fc66" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:49.848146 kubelet[2606]: I1213 09:11:49.848000 2606 topology_manager.go:215] "Topology Admit Handler" podUID="fe2440bd480d62864f4d96951d271f99" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:49.848146 kubelet[2606]: I1213 09:11:49.848116 2606 topology_manager.go:215] "Topology Admit Handler" podUID="080e3fba487346fbdc47f2b3245e3fe9" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-b-8823ebc6cf" Dec 13 09:11:49.859966 kubelet[2606]: W1213 09:11:49.857748 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 09:11:49.863473 kubelet[2606]: W1213 09:11:49.863422 2606 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 09:11:49.864426 kubelet[2606]: W1213 09:11:49.864384 2606 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 09:11:49.929073 kubelet[2606]: I1213 09:11:49.928152 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf"
Dec 13 09:11:49.929073 kubelet[2606]: I1213 09:11:49.928208 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf"
Dec 13 09:11:49.929073 kubelet[2606]: I1213 09:11:49.928234 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/080e3fba487346fbdc47f2b3245e3fe9-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-b-8823ebc6cf\" (UID: \"080e3fba487346fbdc47f2b3245e3fe9\") " pod="kube-system/kube-scheduler-ci-4081.2.1-b-8823ebc6cf"
Dec 13 09:11:49.929073 kubelet[2606]: I1213 09:11:49.928257 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9870fa8eba18350c1dfb1f82d8c6fc66-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-b-8823ebc6cf\" (UID: \"9870fa8eba18350c1dfb1f82d8c6fc66\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-8823ebc6cf"
Dec 13 09:11:49.929073 kubelet[2606]: I1213 09:11:49.928274 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9870fa8eba18350c1dfb1f82d8c6fc66-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-b-8823ebc6cf\" (UID: \"9870fa8eba18350c1dfb1f82d8c6fc66\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-8823ebc6cf"
Dec 13 09:11:49.929337 kubelet[2606]: I1213 09:11:49.928292 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf"
Dec 13 09:11:49.929337 kubelet[2606]: I1213 09:11:49.928309 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf"
Dec 13 09:11:49.929337 kubelet[2606]: I1213 09:11:49.928325 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9870fa8eba18350c1dfb1f82d8c6fc66-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-b-8823ebc6cf\" (UID: \"9870fa8eba18350c1dfb1f82d8c6fc66\") " pod="kube-system/kube-apiserver-ci-4081.2.1-b-8823ebc6cf"
Dec 13 09:11:49.929337 kubelet[2606]: I1213 09:11:49.928341 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe2440bd480d62864f4d96951d271f99-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-b-8823ebc6cf\" (UID: \"fe2440bd480d62864f4d96951d271f99\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf"
Dec 13 09:11:50.162078 kubelet[2606]: E1213 09:11:50.162012 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:50.165457 kubelet[2606]: E1213 09:11:50.165242 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:50.165457 kubelet[2606]: E1213 09:11:50.165408 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:50.487076 kubelet[2606]: I1213 09:11:50.487023 2606 apiserver.go:52] "Watching apiserver"
Dec 13 09:11:50.500689 sudo[2636]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:50.526034 kubelet[2606]: I1213 09:11:50.525859 2606 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 09:11:50.592551 kubelet[2606]: E1213 09:11:50.591833 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:50.592551 kubelet[2606]: E1213 09:11:50.592110 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:50.592977 kubelet[2606]: E1213 09:11:50.592948 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:50.610747 kubelet[2606]: I1213 09:11:50.609863 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-b-8823ebc6cf" podStartSLOduration=1.6098390409999999 podStartE2EDuration="1.609839041s" podCreationTimestamp="2024-12-13 09:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:50.60955269 +0000 UTC m=+1.243028331" watchObservedRunningTime="2024-12-13 09:11:50.609839041 +0000 UTC m=+1.243314682"
Dec 13 09:11:50.642023 kubelet[2606]: I1213 09:11:50.641219 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-b-8823ebc6cf" podStartSLOduration=1.64119861 podStartE2EDuration="1.64119861s" podCreationTimestamp="2024-12-13 09:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:50.629311454 +0000 UTC m=+1.262787099" watchObservedRunningTime="2024-12-13 09:11:50.64119861 +0000 UTC m=+1.274674247"
Dec 13 09:11:50.656946 kubelet[2606]: I1213 09:11:50.656781 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-b-8823ebc6cf" podStartSLOduration=1.656761691 podStartE2EDuration="1.656761691s" podCreationTimestamp="2024-12-13 09:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:11:50.641611553 +0000 UTC m=+1.275087193" watchObservedRunningTime="2024-12-13 09:11:50.656761691 +0000 UTC m=+1.290237332"
Dec 13 09:11:51.595574 kubelet[2606]: E1213 09:11:51.594983 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:52.471627 sudo[1678]: pam_unix(sudo:session): session closed for user root
Dec 13 09:11:52.476195 sshd[1661]: pam_unix(sshd:session): session closed for user core
Dec 13 09:11:52.481659 systemd[1]: sshd@8-146.190.159.183:22-147.75.109.163:43392.service: Deactivated successfully.
Dec 13 09:11:52.486241 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 09:11:52.486638 systemd[1]: session-9.scope: Consumed 8.521s CPU time, 190.9M memory peak, 0B memory swap peak.
Dec 13 09:11:52.488864 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit.
Dec 13 09:11:52.490811 systemd-logind[1443]: Removed session 9.
Dec 13 09:11:53.418118 kubelet[2606]: E1213 09:11:53.415915 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:53.599351 kubelet[2606]: E1213 09:11:53.599205 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:54.260243 kubelet[2606]: E1213 09:11:54.260196 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:11:54.603554 kubelet[2606]: E1213 09:11:54.603427 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:00.161251 kubelet[2606]: E1213 09:12:00.161185 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:00.618722 kubelet[2606]: E1213 09:12:00.618649 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:02.419962 kubelet[2606]: I1213 09:12:02.412590 2606 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 09:12:02.419962 kubelet[2606]: I1213 09:12:02.413737 2606 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 09:12:02.420838 containerd[1457]: time="2024-12-13T09:12:02.413395459Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 09:12:03.285960 kubelet[2606]: I1213 09:12:03.285397 2606 topology_manager.go:215] "Topology Admit Handler" podUID="bfb4cf56-808d-4df2-b49f-3237e5313657" podNamespace="kube-system" podName="cilium-pzv8c"
Dec 13 09:12:03.285960 kubelet[2606]: I1213 09:12:03.285895 2606 topology_manager.go:215] "Topology Admit Handler" podUID="1f623c9d-a638-49a7-b6fb-9af1b0ae8185" podNamespace="kube-system" podName="kube-proxy-8mvhf"
Dec 13 09:12:03.321518 systemd[1]: Created slice kubepods-besteffort-pod1f623c9d_a638_49a7_b6fb_9af1b0ae8185.slice - libcontainer container kubepods-besteffort-pod1f623c9d_a638_49a7_b6fb_9af1b0ae8185.slice.
Dec 13 09:12:03.384614 systemd[1]: Created slice kubepods-burstable-podbfb4cf56_808d_4df2_b49f_3237e5313657.slice - libcontainer container kubepods-burstable-podbfb4cf56_808d_4df2_b49f_3237e5313657.slice.
Dec 13 09:12:03.484305 kubelet[2606]: I1213 09:12:03.482732 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-cgroup\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.484305 kubelet[2606]: I1213 09:12:03.482806 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-config-path\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.484305 kubelet[2606]: I1213 09:12:03.482837 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-host-proc-sys-net\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.484305 kubelet[2606]: I1213 09:12:03.482866 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f623c9d-a638-49a7-b6fb-9af1b0ae8185-xtables-lock\") pod \"kube-proxy-8mvhf\" (UID: \"1f623c9d-a638-49a7-b6fb-9af1b0ae8185\") " pod="kube-system/kube-proxy-8mvhf"
Dec 13 09:12:03.484305 kubelet[2606]: I1213 09:12:03.482892 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f623c9d-a638-49a7-b6fb-9af1b0ae8185-kube-proxy\") pod \"kube-proxy-8mvhf\" (UID: \"1f623c9d-a638-49a7-b6fb-9af1b0ae8185\") " pod="kube-system/kube-proxy-8mvhf"
Dec 13 09:12:03.485182 kubelet[2606]: I1213 09:12:03.482916 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfb4cf56-808d-4df2-b49f-3237e5313657-clustermesh-secrets\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485182 kubelet[2606]: I1213 09:12:03.482963 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-lib-modules\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485182 kubelet[2606]: I1213 09:12:03.482997 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfb4cf56-808d-4df2-b49f-3237e5313657-hubble-tls\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485182 kubelet[2606]: I1213 09:12:03.483033 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnlfb\" (UniqueName: \"kubernetes.io/projected/bfb4cf56-808d-4df2-b49f-3237e5313657-kube-api-access-gnlfb\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485182 kubelet[2606]: I1213 09:12:03.483061 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f623c9d-a638-49a7-b6fb-9af1b0ae8185-lib-modules\") pod \"kube-proxy-8mvhf\" (UID: \"1f623c9d-a638-49a7-b6fb-9af1b0ae8185\") " pod="kube-system/kube-proxy-8mvhf"
Dec 13 09:12:03.485182 kubelet[2606]: I1213 09:12:03.483088 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-bpf-maps\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485473 kubelet[2606]: I1213 09:12:03.483116 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-hostproc\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485473 kubelet[2606]: I1213 09:12:03.483147 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-host-proc-sys-kernel\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485473 kubelet[2606]: I1213 09:12:03.483174 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-run\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485473 kubelet[2606]: I1213 09:12:03.483203 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cni-path\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485473 kubelet[2606]: I1213 09:12:03.483232 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-etc-cni-netd\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.485473 kubelet[2606]: I1213 09:12:03.483274 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhvv\" (UniqueName: \"kubernetes.io/projected/1f623c9d-a638-49a7-b6fb-9af1b0ae8185-kube-api-access-9jhvv\") pod \"kube-proxy-8mvhf\" (UID: \"1f623c9d-a638-49a7-b6fb-9af1b0ae8185\") " pod="kube-system/kube-proxy-8mvhf"
Dec 13 09:12:03.485815 kubelet[2606]: I1213 09:12:03.483310 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-xtables-lock\") pod \"cilium-pzv8c\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " pod="kube-system/cilium-pzv8c"
Dec 13 09:12:03.536389 kubelet[2606]: I1213 09:12:03.536157 2606 topology_manager.go:215] "Topology Admit Handler" podUID="ed6cfa66-5c95-4fea-b198-bb760b7f1c7f" podNamespace="kube-system" podName="cilium-operator-599987898-688zp"
Dec 13 09:12:03.562022 systemd[1]: Created slice kubepods-besteffort-poded6cfa66_5c95_4fea_b198_bb760b7f1c7f.slice - libcontainer container kubepods-besteffort-poded6cfa66_5c95_4fea_b198_bb760b7f1c7f.slice.
Dec 13 09:12:03.716305 kubelet[2606]: I1213 09:12:03.689091 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5v8q\" (UniqueName: \"kubernetes.io/projected/ed6cfa66-5c95-4fea-b198-bb760b7f1c7f-kube-api-access-k5v8q\") pod \"cilium-operator-599987898-688zp\" (UID: \"ed6cfa66-5c95-4fea-b198-bb760b7f1c7f\") " pod="kube-system/cilium-operator-599987898-688zp"
Dec 13 09:12:03.716305 kubelet[2606]: I1213 09:12:03.689163 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed6cfa66-5c95-4fea-b198-bb760b7f1c7f-cilium-config-path\") pod \"cilium-operator-599987898-688zp\" (UID: \"ed6cfa66-5c95-4fea-b198-bb760b7f1c7f\") " pod="kube-system/cilium-operator-599987898-688zp"
Dec 13 09:12:03.885984 kubelet[2606]: E1213 09:12:03.882541 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:03.890976 containerd[1457]: time="2024-12-13T09:12:03.887895052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-688zp,Uid:ed6cfa66-5c95-4fea-b198-bb760b7f1c7f,Namespace:kube-system,Attempt:0,}"
Dec 13 09:12:03.971197 kubelet[2606]: E1213 09:12:03.968058 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:03.971624 containerd[1457]: time="2024-12-13T09:12:03.970548313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mvhf,Uid:1f623c9d-a638-49a7-b6fb-9af1b0ae8185,Namespace:kube-system,Attempt:0,}"
Dec 13 09:12:03.981252 containerd[1457]: time="2024-12-13T09:12:03.978760788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:03.981585 containerd[1457]: time="2024-12-13T09:12:03.981196894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:03.981727 containerd[1457]: time="2024-12-13T09:12:03.981534720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:03.982303 containerd[1457]: time="2024-12-13T09:12:03.982150703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:04.017976 kubelet[2606]: E1213 09:12:04.010349 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:04.042582 containerd[1457]: time="2024-12-13T09:12:04.042485914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzv8c,Uid:bfb4cf56-808d-4df2-b49f-3237e5313657,Namespace:kube-system,Attempt:0,}"
Dec 13 09:12:04.087449 systemd[1]: Started cri-containerd-3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718.scope - libcontainer container 3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718.
Dec 13 09:12:04.090738 containerd[1457]: time="2024-12-13T09:12:04.089201984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:04.090738 containerd[1457]: time="2024-12-13T09:12:04.090324809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:04.090738 containerd[1457]: time="2024-12-13T09:12:04.090356676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:04.095497 containerd[1457]: time="2024-12-13T09:12:04.090949788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:04.201346 systemd[1]: Started cri-containerd-f43078587969e1991904fb87b228e9e8ede9f4d892e349effbd8746f9bcba3fe.scope - libcontainer container f43078587969e1991904fb87b228e9e8ede9f4d892e349effbd8746f9bcba3fe.
Dec 13 09:12:04.205405 containerd[1457]: time="2024-12-13T09:12:04.200257314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:04.205405 containerd[1457]: time="2024-12-13T09:12:04.200407822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:04.205405 containerd[1457]: time="2024-12-13T09:12:04.200436234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:04.205405 containerd[1457]: time="2024-12-13T09:12:04.201892776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:04.267344 systemd[1]: Started cri-containerd-e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c.scope - libcontainer container e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c.
Dec 13 09:12:04.366434 containerd[1457]: time="2024-12-13T09:12:04.366341213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-688zp,Uid:ed6cfa66-5c95-4fea-b198-bb760b7f1c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\""
Dec 13 09:12:04.373188 kubelet[2606]: E1213 09:12:04.373136 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:04.382633 containerd[1457]: time="2024-12-13T09:12:04.382451264Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 09:12:04.391868 containerd[1457]: time="2024-12-13T09:12:04.384078037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mvhf,Uid:1f623c9d-a638-49a7-b6fb-9af1b0ae8185,Namespace:kube-system,Attempt:0,} returns sandbox id \"f43078587969e1991904fb87b228e9e8ede9f4d892e349effbd8746f9bcba3fe\""
Dec 13 09:12:04.392138 kubelet[2606]: E1213 09:12:04.386030 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:04.407168 containerd[1457]: time="2024-12-13T09:12:04.405246530Z" level=info msg="CreateContainer within sandbox \"f43078587969e1991904fb87b228e9e8ede9f4d892e349effbd8746f9bcba3fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 09:12:04.420157 containerd[1457]: time="2024-12-13T09:12:04.419192812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzv8c,Uid:bfb4cf56-808d-4df2-b49f-3237e5313657,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\""
Dec 13 09:12:04.423366 kubelet[2606]: E1213 09:12:04.421140 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:04.481744 containerd[1457]: time="2024-12-13T09:12:04.480689881Z" level=info msg="CreateContainer within sandbox \"f43078587969e1991904fb87b228e9e8ede9f4d892e349effbd8746f9bcba3fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9a6be86d048de61253ffcdc13837b1efc1781ccbc8295bfc890b827ed65bfc88\""
Dec 13 09:12:04.485138 containerd[1457]: time="2024-12-13T09:12:04.484296019Z" level=info msg="StartContainer for \"9a6be86d048de61253ffcdc13837b1efc1781ccbc8295bfc890b827ed65bfc88\""
Dec 13 09:12:04.600600 systemd[1]: Started cri-containerd-9a6be86d048de61253ffcdc13837b1efc1781ccbc8295bfc890b827ed65bfc88.scope - libcontainer container 9a6be86d048de61253ffcdc13837b1efc1781ccbc8295bfc890b827ed65bfc88.
Dec 13 09:12:04.731591 containerd[1457]: time="2024-12-13T09:12:04.730393670Z" level=info msg="StartContainer for \"9a6be86d048de61253ffcdc13837b1efc1781ccbc8295bfc890b827ed65bfc88\" returns successfully"
Dec 13 09:12:05.667403 kubelet[2606]: E1213 09:12:05.667115 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:06.671990 kubelet[2606]: E1213 09:12:06.671635 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:06.915121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417809335.mount: Deactivated successfully.
Dec 13 09:12:08.706263 containerd[1457]: time="2024-12-13T09:12:08.706160475Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:08.708493 containerd[1457]: time="2024-12-13T09:12:08.708180832Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907249"
Dec 13 09:12:08.709706 containerd[1457]: time="2024-12-13T09:12:08.709648985Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:08.713384 containerd[1457]: time="2024-12-13T09:12:08.713072672Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.329991483s"
Dec 13 09:12:08.713384 containerd[1457]: time="2024-12-13T09:12:08.713147907Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 09:12:08.716159 containerd[1457]: time="2024-12-13T09:12:08.715651618Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 09:12:08.718945 containerd[1457]: time="2024-12-13T09:12:08.718831960Z" level=info msg="CreateContainer within sandbox \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 09:12:08.746802 containerd[1457]: time="2024-12-13T09:12:08.746683090Z" level=info msg="CreateContainer within sandbox \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\""
Dec 13 09:12:08.749093 containerd[1457]: time="2024-12-13T09:12:08.747891916Z" level=info msg="StartContainer for \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\""
Dec 13 09:12:08.797266 systemd[1]: Started cri-containerd-5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b.scope - libcontainer container 5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b.
Dec 13 09:12:08.845825 containerd[1457]: time="2024-12-13T09:12:08.845729074Z" level=info msg="StartContainer for \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\" returns successfully"
Dec 13 09:12:09.695059 kubelet[2606]: E1213 09:12:09.694374 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:09.855983 kubelet[2606]: I1213 09:12:09.853699 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8mvhf" podStartSLOduration=6.8536761760000005 podStartE2EDuration="6.853676176s" podCreationTimestamp="2024-12-13 09:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:12:05.722960065 +0000 UTC m=+16.356435707" watchObservedRunningTime="2024-12-13 09:12:09.853676176 +0000 UTC m=+20.487151818"
Dec 13 09:12:10.710974 kubelet[2606]: E1213 09:12:10.710487 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:14.898824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661850329.mount: Deactivated successfully.
Dec 13 09:12:18.184705 containerd[1457]: time="2024-12-13T09:12:18.184426001Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:18.187164 containerd[1457]: time="2024-12-13T09:12:18.186622616Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735399"
Dec 13 09:12:18.187164 containerd[1457]: time="2024-12-13T09:12:18.187086580Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 09:12:18.189686 containerd[1457]: time="2024-12-13T09:12:18.189630830Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.473916763s"
Dec 13 09:12:18.190391 containerd[1457]: time="2024-12-13T09:12:18.189895839Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 09:12:18.193593 containerd[1457]: time="2024-12-13T09:12:18.193476256Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 09:12:18.356827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421831413.mount: Deactivated successfully.
Dec 13 09:12:18.363875 containerd[1457]: time="2024-12-13T09:12:18.363680807Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\""
Dec 13 09:12:18.364676 containerd[1457]: time="2024-12-13T09:12:18.364401001Z" level=info msg="StartContainer for \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\""
Dec 13 09:12:18.609515 systemd[1]: Started cri-containerd-c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5.scope - libcontainer container c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5.
Dec 13 09:12:18.659375 containerd[1457]: time="2024-12-13T09:12:18.659249585Z" level=info msg="StartContainer for \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\" returns successfully"
Dec 13 09:12:18.677535 systemd[1]: cri-containerd-c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5.scope: Deactivated successfully.
Dec 13 09:12:18.747120 kubelet[2606]: E1213 09:12:18.746383 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:18.786722 kubelet[2606]: I1213 09:12:18.785700 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-688zp" podStartSLOduration=11.447013684 podStartE2EDuration="15.785676969s" podCreationTimestamp="2024-12-13 09:12:03 +0000 UTC" firstStartedPulling="2024-12-13 09:12:04.376146213 +0000 UTC m=+15.009621832" lastFinishedPulling="2024-12-13 09:12:08.714809468 +0000 UTC m=+19.348285117" observedRunningTime="2024-12-13 09:12:09.856728708 +0000 UTC m=+20.490204348" watchObservedRunningTime="2024-12-13 09:12:18.785676969 +0000 UTC m=+29.419152627"
Dec 13 09:12:19.033626 containerd[1457]: time="2024-12-13T09:12:19.017164783Z" level=info msg="shim disconnected" id=c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5 namespace=k8s.io
Dec 13 09:12:19.033626 containerd[1457]: time="2024-12-13T09:12:19.033473913Z" level=warning msg="cleaning up after shim disconnected" id=c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5 namespace=k8s.io
Dec 13 09:12:19.033626 containerd[1457]: time="2024-12-13T09:12:19.033493185Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 09:12:19.054234 containerd[1457]: time="2024-12-13T09:12:19.054123916Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:12:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 09:12:19.353519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5-rootfs.mount: Deactivated successfully.
Dec 13 09:12:19.751972 kubelet[2606]: E1213 09:12:19.751671 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:19.758758 containerd[1457]: time="2024-12-13T09:12:19.758131344Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 09:12:19.795030 containerd[1457]: time="2024-12-13T09:12:19.792325185Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\""
Dec 13 09:12:19.793224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634102764.mount: Deactivated successfully.
Dec 13 09:12:19.802010 containerd[1457]: time="2024-12-13T09:12:19.798834973Z" level=info msg="StartContainer for \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\""
Dec 13 09:12:19.864200 systemd[1]: Started cri-containerd-6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809.scope - libcontainer container 6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809.
Dec 13 09:12:19.917489 containerd[1457]: time="2024-12-13T09:12:19.917339614Z" level=info msg="StartContainer for \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\" returns successfully"
Dec 13 09:12:19.932114 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 09:12:19.932673 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:12:19.932790 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 09:12:19.940492 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 09:12:19.940801 systemd[1]: cri-containerd-6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809.scope: Deactivated successfully.
Dec 13 09:12:19.992234 containerd[1457]: time="2024-12-13T09:12:19.992139228Z" level=info msg="shim disconnected" id=6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809 namespace=k8s.io
Dec 13 09:12:19.992234 containerd[1457]: time="2024-12-13T09:12:19.992229974Z" level=warning msg="cleaning up after shim disconnected" id=6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809 namespace=k8s.io
Dec 13 09:12:19.992552 containerd[1457]: time="2024-12-13T09:12:19.992251130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 09:12:20.013009 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:12:20.352481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809-rootfs.mount: Deactivated successfully.
Dec 13 09:12:20.757640 kubelet[2606]: E1213 09:12:20.756768 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:20.762617 containerd[1457]: time="2024-12-13T09:12:20.762231949Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 09:12:20.800464 containerd[1457]: time="2024-12-13T09:12:20.800219762Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\""
Dec 13 09:12:20.802518 containerd[1457]: time="2024-12-13T09:12:20.802368598Z" level=info msg="StartContainer for \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\""
Dec 13 09:12:20.867423 systemd[1]: Started cri-containerd-3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf.scope - libcontainer container 3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf.
Dec 13 09:12:20.911634 containerd[1457]: time="2024-12-13T09:12:20.911560007Z" level=info msg="StartContainer for \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\" returns successfully"
Dec 13 09:12:20.920944 systemd[1]: cri-containerd-3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf.scope: Deactivated successfully.
Dec 13 09:12:20.974074 containerd[1457]: time="2024-12-13T09:12:20.973998539Z" level=info msg="shim disconnected" id=3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf namespace=k8s.io
Dec 13 09:12:20.974074 containerd[1457]: time="2024-12-13T09:12:20.974066536Z" level=warning msg="cleaning up after shim disconnected" id=3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf namespace=k8s.io
Dec 13 09:12:20.974074 containerd[1457]: time="2024-12-13T09:12:20.974075549Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 09:12:21.352488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf-rootfs.mount: Deactivated successfully.
Dec 13 09:12:21.764695 kubelet[2606]: E1213 09:12:21.764354 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:21.771599 containerd[1457]: time="2024-12-13T09:12:21.769532740Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 09:12:21.803580 containerd[1457]: time="2024-12-13T09:12:21.803194029Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\""
Dec 13 09:12:21.805150 containerd[1457]: time="2024-12-13T09:12:21.805094177Z" level=info msg="StartContainer for \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\""
Dec 13 09:12:21.869266 systemd[1]: Started cri-containerd-0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a.scope - libcontainer container 0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a.
Dec 13 09:12:21.914230 systemd[1]: cri-containerd-0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a.scope: Deactivated successfully.
Dec 13 09:12:21.916225 containerd[1457]: time="2024-12-13T09:12:21.916145931Z" level=info msg="StartContainer for \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\" returns successfully"
Dec 13 09:12:21.954692 containerd[1457]: time="2024-12-13T09:12:21.954609634Z" level=info msg="shim disconnected" id=0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a namespace=k8s.io
Dec 13 09:12:21.954692 containerd[1457]: time="2024-12-13T09:12:21.954684216Z" level=warning msg="cleaning up after shim disconnected" id=0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a namespace=k8s.io
Dec 13 09:12:21.954692 containerd[1457]: time="2024-12-13T09:12:21.954697648Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 09:12:22.352192 systemd[1]: run-containerd-runc-k8s.io-0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a-runc.ZyeV5M.mount: Deactivated successfully.
Dec 13 09:12:22.352319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a-rootfs.mount: Deactivated successfully.
Dec 13 09:12:22.782309 kubelet[2606]: E1213 09:12:22.782118 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:22.792601 containerd[1457]: time="2024-12-13T09:12:22.792233713Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 09:12:22.824618 containerd[1457]: time="2024-12-13T09:12:22.824417489Z" level=info msg="CreateContainer within sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\""
Dec 13 09:12:22.826639 containerd[1457]: time="2024-12-13T09:12:22.826482970Z" level=info msg="StartContainer for \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\""
Dec 13 09:12:22.888285 systemd[1]: Started cri-containerd-700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43.scope - libcontainer container 700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43.
Dec 13 09:12:22.930983 containerd[1457]: time="2024-12-13T09:12:22.930879965Z" level=info msg="StartContainer for \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\" returns successfully"
Dec 13 09:12:23.128200 kubelet[2606]: I1213 09:12:23.127609 2606 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 09:12:23.171080 kubelet[2606]: I1213 09:12:23.170858 2606 topology_manager.go:215] "Topology Admit Handler" podUID="965682cd-6916-4dca-a7f0-ba743d657d9b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6h9l5"
Dec 13 09:12:23.174531 kubelet[2606]: I1213 09:12:23.173908 2606 topology_manager.go:215] "Topology Admit Handler" podUID="c8bb2ba6-692d-4721-9e45-9b7ca108b35a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9vhq6"
Dec 13 09:12:23.195484 systemd[1]: Created slice kubepods-burstable-pod965682cd_6916_4dca_a7f0_ba743d657d9b.slice - libcontainer container kubepods-burstable-pod965682cd_6916_4dca_a7f0_ba743d657d9b.slice.
Dec 13 09:12:23.209377 systemd[1]: Created slice kubepods-burstable-podc8bb2ba6_692d_4721_9e45_9b7ca108b35a.slice - libcontainer container kubepods-burstable-podc8bb2ba6_692d_4721_9e45_9b7ca108b35a.slice.
Dec 13 09:12:23.271115 kubelet[2606]: I1213 09:12:23.271006 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6gj\" (UniqueName: \"kubernetes.io/projected/c8bb2ba6-692d-4721-9e45-9b7ca108b35a-kube-api-access-xn6gj\") pod \"coredns-7db6d8ff4d-9vhq6\" (UID: \"c8bb2ba6-692d-4721-9e45-9b7ca108b35a\") " pod="kube-system/coredns-7db6d8ff4d-9vhq6"
Dec 13 09:12:23.271115 kubelet[2606]: I1213 09:12:23.271099 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f65wg\" (UniqueName: \"kubernetes.io/projected/965682cd-6916-4dca-a7f0-ba743d657d9b-kube-api-access-f65wg\") pod \"coredns-7db6d8ff4d-6h9l5\" (UID: \"965682cd-6916-4dca-a7f0-ba743d657d9b\") " pod="kube-system/coredns-7db6d8ff4d-6h9l5"
Dec 13 09:12:23.271115 kubelet[2606]: I1213 09:12:23.271149 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/965682cd-6916-4dca-a7f0-ba743d657d9b-config-volume\") pod \"coredns-7db6d8ff4d-6h9l5\" (UID: \"965682cd-6916-4dca-a7f0-ba743d657d9b\") " pod="kube-system/coredns-7db6d8ff4d-6h9l5"
Dec 13 09:12:23.271793 kubelet[2606]: I1213 09:12:23.271169 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8bb2ba6-692d-4721-9e45-9b7ca108b35a-config-volume\") pod \"coredns-7db6d8ff4d-9vhq6\" (UID: \"c8bb2ba6-692d-4721-9e45-9b7ca108b35a\") " pod="kube-system/coredns-7db6d8ff4d-9vhq6"
Dec 13 09:12:23.506428 kubelet[2606]: E1213 09:12:23.505991 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:23.508891 containerd[1457]: time="2024-12-13T09:12:23.507912450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6h9l5,Uid:965682cd-6916-4dca-a7f0-ba743d657d9b,Namespace:kube-system,Attempt:0,}"
Dec 13 09:12:23.516537 kubelet[2606]: E1213 09:12:23.516314 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:23.517557 containerd[1457]: time="2024-12-13T09:12:23.517174662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9vhq6,Uid:c8bb2ba6-692d-4721-9e45-9b7ca108b35a,Namespace:kube-system,Attempt:0,}"
Dec 13 09:12:23.791744 kubelet[2606]: E1213 09:12:23.791576 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:23.852917 kubelet[2606]: I1213 09:12:23.852790 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pzv8c" podStartSLOduration=7.087125424 podStartE2EDuration="20.852768662s" podCreationTimestamp="2024-12-13 09:12:03 +0000 UTC" firstStartedPulling="2024-12-13 09:12:04.425361084 +0000 UTC m=+15.058836703" lastFinishedPulling="2024-12-13 09:12:18.191004305 +0000 UTC m=+28.824479941" observedRunningTime="2024-12-13 09:12:23.851689426 +0000 UTC m=+34.485165078" watchObservedRunningTime="2024-12-13 09:12:23.852768662 +0000 UTC m=+34.486244303"
Dec 13 09:12:24.792405 kubelet[2606]: E1213 09:12:24.792330 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:25.534038 systemd-networkd[1365]: cilium_host: Link UP
Dec 13 09:12:25.538103 systemd-networkd[1365]: cilium_net: Link UP
Dec 13 09:12:25.538462 systemd-networkd[1365]: cilium_net: Gained carrier
Dec 13 09:12:25.542307 systemd-networkd[1365]: cilium_host: Gained carrier
Dec 13 09:12:25.796989 kubelet[2606]: E1213 09:12:25.796384 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:25.869523 systemd-networkd[1365]: cilium_vxlan: Link UP
Dec 13 09:12:25.869537 systemd-networkd[1365]: cilium_vxlan: Gained carrier
Dec 13 09:12:26.118808 systemd-networkd[1365]: cilium_net: Gained IPv6LL
Dec 13 09:12:26.373613 systemd-networkd[1365]: cilium_host: Gained IPv6LL
Dec 13 09:12:26.689165 kernel: NET: Registered PF_ALG protocol family
Dec 13 09:12:27.592213 systemd-networkd[1365]: cilium_vxlan: Gained IPv6LL
Dec 13 09:12:28.685337 systemd-networkd[1365]: lxc_health: Link UP
Dec 13 09:12:28.690013 systemd-networkd[1365]: lxc_health: Gained carrier
Dec 13 09:12:29.223905 kernel: eth0: renamed from tmp74d5d
Dec 13 09:12:29.219959 systemd-networkd[1365]: lxc71adc5b89900: Link UP
Dec 13 09:12:29.253038 systemd-networkd[1365]: lxc71adc5b89900: Gained carrier
Dec 13 09:12:29.306668 systemd-networkd[1365]: lxc01846b785a63: Link UP
Dec 13 09:12:29.314744 kernel: eth0: renamed from tmp5df12
Dec 13 09:12:29.329541 systemd-networkd[1365]: lxc01846b785a63: Gained carrier
Dec 13 09:12:30.025624 kubelet[2606]: E1213 09:12:30.025560 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:30.150259 systemd-networkd[1365]: lxc_health: Gained IPv6LL
Dec 13 09:12:30.406375 systemd-networkd[1365]: lxc71adc5b89900: Gained IPv6LL
Dec 13 09:12:30.725430 systemd-networkd[1365]: lxc01846b785a63: Gained IPv6LL
Dec 13 09:12:30.842505 kubelet[2606]: E1213 09:12:30.842127 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:36.767097 containerd[1457]: time="2024-12-13T09:12:36.765728776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:36.767097 containerd[1457]: time="2024-12-13T09:12:36.765799184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:36.767097 containerd[1457]: time="2024-12-13T09:12:36.765810599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:36.767097 containerd[1457]: time="2024-12-13T09:12:36.765902443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:36.831329 systemd[1]: Started cri-containerd-74d5de5de2f60291989be2cfca927407f51896d2102dab9d6400ba715812c3ab.scope - libcontainer container 74d5de5de2f60291989be2cfca927407f51896d2102dab9d6400ba715812c3ab.
Dec 13 09:12:36.868448 containerd[1457]: time="2024-12-13T09:12:36.868242752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:12:36.869422 containerd[1457]: time="2024-12-13T09:12:36.868328112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:12:36.870901 containerd[1457]: time="2024-12-13T09:12:36.869652307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:36.870901 containerd[1457]: time="2024-12-13T09:12:36.870476264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:12:36.918221 systemd[1]: Started cri-containerd-5df12b06ba0a9a36ce95d9680d2634be87b85aa69f5acc30ecbaa61844090841.scope - libcontainer container 5df12b06ba0a9a36ce95d9680d2634be87b85aa69f5acc30ecbaa61844090841.
Dec 13 09:12:37.031012 containerd[1457]: time="2024-12-13T09:12:37.030122815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9vhq6,Uid:c8bb2ba6-692d-4721-9e45-9b7ca108b35a,Namespace:kube-system,Attempt:0,} returns sandbox id \"74d5de5de2f60291989be2cfca927407f51896d2102dab9d6400ba715812c3ab\""
Dec 13 09:12:37.038488 kubelet[2606]: E1213 09:12:37.038445 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:37.049477 containerd[1457]: time="2024-12-13T09:12:37.049399256Z" level=info msg="CreateContainer within sandbox \"74d5de5de2f60291989be2cfca927407f51896d2102dab9d6400ba715812c3ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 09:12:37.069298 containerd[1457]: time="2024-12-13T09:12:37.069234854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6h9l5,Uid:965682cd-6916-4dca-a7f0-ba743d657d9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5df12b06ba0a9a36ce95d9680d2634be87b85aa69f5acc30ecbaa61844090841\""
Dec 13 09:12:37.073955 kubelet[2606]: E1213 09:12:37.072180 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:37.085222 containerd[1457]: time="2024-12-13T09:12:37.083444384Z" level=info msg="CreateContainer within sandbox \"5df12b06ba0a9a36ce95d9680d2634be87b85aa69f5acc30ecbaa61844090841\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 09:12:37.090640 containerd[1457]: time="2024-12-13T09:12:37.090463583Z" level=info msg="CreateContainer within sandbox \"74d5de5de2f60291989be2cfca927407f51896d2102dab9d6400ba715812c3ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e30e1bf1bd769b7cfa9c30c35a44f372a519f8096360d7080bb235ba4d57867\""
Dec 13 09:12:37.092199 containerd[1457]: time="2024-12-13T09:12:37.092049514Z" level=info msg="StartContainer for \"4e30e1bf1bd769b7cfa9c30c35a44f372a519f8096360d7080bb235ba4d57867\""
Dec 13 09:12:37.120066 containerd[1457]: time="2024-12-13T09:12:37.119816838Z" level=info msg="CreateContainer within sandbox \"5df12b06ba0a9a36ce95d9680d2634be87b85aa69f5acc30ecbaa61844090841\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"784ed4060dc4b4db4dedccccb1db1614788e83cf99e457e5c60e4e7cbd29785a\""
Dec 13 09:12:37.123349 containerd[1457]: time="2024-12-13T09:12:37.123292596Z" level=info msg="StartContainer for \"784ed4060dc4b4db4dedccccb1db1614788e83cf99e457e5c60e4e7cbd29785a\""
Dec 13 09:12:37.150223 systemd[1]: Started cri-containerd-4e30e1bf1bd769b7cfa9c30c35a44f372a519f8096360d7080bb235ba4d57867.scope - libcontainer container 4e30e1bf1bd769b7cfa9c30c35a44f372a519f8096360d7080bb235ba4d57867.
Dec 13 09:12:37.183440 systemd[1]: Started cri-containerd-784ed4060dc4b4db4dedccccb1db1614788e83cf99e457e5c60e4e7cbd29785a.scope - libcontainer container 784ed4060dc4b4db4dedccccb1db1614788e83cf99e457e5c60e4e7cbd29785a.
Dec 13 09:12:37.233438 containerd[1457]: time="2024-12-13T09:12:37.233373353Z" level=info msg="StartContainer for \"4e30e1bf1bd769b7cfa9c30c35a44f372a519f8096360d7080bb235ba4d57867\" returns successfully"
Dec 13 09:12:37.250519 containerd[1457]: time="2024-12-13T09:12:37.250365159Z" level=info msg="StartContainer for \"784ed4060dc4b4db4dedccccb1db1614788e83cf99e457e5c60e4e7cbd29785a\" returns successfully"
Dec 13 09:12:37.784303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869316950.mount: Deactivated successfully.
Dec 13 09:12:37.866828 kubelet[2606]: E1213 09:12:37.866070 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:37.871080 kubelet[2606]: E1213 09:12:37.871016 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:37.891409 kubelet[2606]: I1213 09:12:37.891320 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9vhq6" podStartSLOduration=34.89129355 podStartE2EDuration="34.89129355s" podCreationTimestamp="2024-12-13 09:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:12:37.889101122 +0000 UTC m=+48.522576770" watchObservedRunningTime="2024-12-13 09:12:37.89129355 +0000 UTC m=+48.524769194"
Dec 13 09:12:37.913540 kubelet[2606]: I1213 09:12:37.912945 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6h9l5" podStartSLOduration=34.912901564 podStartE2EDuration="34.912901564s" podCreationTimestamp="2024-12-13 09:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:12:37.909592599 +0000 UTC m=+48.543068252" watchObservedRunningTime="2024-12-13 09:12:37.912901564 +0000 UTC m=+48.546377277"
Dec 13 09:12:38.874465 kubelet[2606]: E1213 09:12:38.874356 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:38.877477 kubelet[2606]: E1213 09:12:38.875152 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:38.909209 systemd[1]: Started sshd@9-146.190.159.183:22-147.75.109.163:50072.service - OpenSSH per-connection server daemon (147.75.109.163:50072).
Dec 13 09:12:39.042756 sshd[3977]: Accepted publickey for core from 147.75.109.163 port 50072 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:39.046267 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:39.058035 systemd-logind[1443]: New session 10 of user core.
Dec 13 09:12:39.066383 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 09:12:39.879856 kubelet[2606]: E1213 09:12:39.879759 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:39.883053 kubelet[2606]: E1213 09:12:39.881705 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 13 09:12:40.041495 sshd[3977]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:40.048452 systemd[1]: sshd@9-146.190.159.183:22-147.75.109.163:50072.service: Deactivated successfully.
Dec 13 09:12:40.059582 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 09:12:40.067725 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit.
Dec 13 09:12:40.072250 systemd-logind[1443]: Removed session 10.
Dec 13 09:12:45.063591 systemd[1]: Started sshd@10-146.190.159.183:22-147.75.109.163:50074.service - OpenSSH per-connection server daemon (147.75.109.163:50074).
Dec 13 09:12:45.125141 sshd[3998]: Accepted publickey for core from 147.75.109.163 port 50074 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:45.128294 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:45.135556 systemd-logind[1443]: New session 11 of user core.
Dec 13 09:12:45.141257 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 09:12:45.315986 sshd[3998]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:45.323822 systemd[1]: sshd@10-146.190.159.183:22-147.75.109.163:50074.service: Deactivated successfully.
Dec 13 09:12:45.330390 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 09:12:45.332540 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
Dec 13 09:12:45.334703 systemd-logind[1443]: Removed session 11.
Dec 13 09:12:50.349492 systemd[1]: Started sshd@11-146.190.159.183:22-147.75.109.163:52882.service - OpenSSH per-connection server daemon (147.75.109.163:52882).
Dec 13 09:12:50.440050 sshd[4014]: Accepted publickey for core from 147.75.109.163 port 52882 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo
Dec 13 09:12:50.443951 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:12:50.466097 systemd-logind[1443]: New session 12 of user core.
Dec 13 09:12:50.486865 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 09:12:50.727615 sshd[4014]: pam_unix(sshd:session): session closed for user core
Dec 13 09:12:50.737316 systemd[1]: sshd@11-146.190.159.183:22-147.75.109.163:52882.service: Deactivated successfully.
Dec 13 09:12:50.743297 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 09:12:50.751784 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
Dec 13 09:12:50.754546 systemd-logind[1443]: Removed session 12.
Dec 13 09:12:55.750608 systemd[1]: Started sshd@12-146.190.159.183:22-147.75.109.163:52894.service - OpenSSH per-connection server daemon (147.75.109.163:52894). Dec 13 09:12:55.804769 sshd[4028]: Accepted publickey for core from 147.75.109.163 port 52894 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:55.807128 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:55.816062 systemd-logind[1443]: New session 13 of user core. Dec 13 09:12:55.827482 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 09:12:55.996274 sshd[4028]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:56.008911 systemd[1]: sshd@12-146.190.159.183:22-147.75.109.163:52894.service: Deactivated successfully. Dec 13 09:12:56.012224 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 09:12:56.014104 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Dec 13 09:12:56.024126 systemd[1]: Started sshd@13-146.190.159.183:22-147.75.109.163:55448.service - OpenSSH per-connection server daemon (147.75.109.163:55448). Dec 13 09:12:56.028023 systemd-logind[1443]: Removed session 13. Dec 13 09:12:56.080581 sshd[4042]: Accepted publickey for core from 147.75.109.163 port 55448 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:56.082982 sshd[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:56.090894 systemd-logind[1443]: New session 14 of user core. Dec 13 09:12:56.097480 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 09:12:56.340154 sshd[4042]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:56.354280 systemd[1]: sshd@13-146.190.159.183:22-147.75.109.163:55448.service: Deactivated successfully. Dec 13 09:12:56.359116 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 09:12:56.362912 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Dec 13 09:12:56.371525 systemd[1]: Started sshd@14-146.190.159.183:22-147.75.109.163:55452.service - OpenSSH per-connection server daemon (147.75.109.163:55452). Dec 13 09:12:56.376680 systemd-logind[1443]: Removed session 14. Dec 13 09:12:56.436544 sshd[4053]: Accepted publickey for core from 147.75.109.163 port 55452 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:12:56.439893 sshd[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:12:56.448347 systemd-logind[1443]: New session 15 of user core. Dec 13 09:12:56.454396 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 09:12:56.652372 sshd[4053]: pam_unix(sshd:session): session closed for user core Dec 13 09:12:56.661718 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Dec 13 09:12:56.662487 systemd[1]: sshd@14-146.190.159.183:22-147.75.109.163:55452.service: Deactivated successfully. Dec 13 09:12:56.665759 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 09:12:56.667715 systemd-logind[1443]: Removed session 15. Dec 13 09:13:00.546812 kubelet[2606]: E1213 09:13:00.546713 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:01.733669 systemd[1]: Started sshd@15-146.190.159.183:22-147.75.109.163:55462.service - OpenSSH per-connection server daemon (147.75.109.163:55462). 
Dec 13 09:13:01.844024 sshd[4066]: Accepted publickey for core from 147.75.109.163 port 55462 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:01.846692 sshd[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:01.882081 systemd-logind[1443]: New session 16 of user core. Dec 13 09:13:01.890469 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 09:13:02.196390 sshd[4066]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:02.228388 systemd[1]: sshd@15-146.190.159.183:22-147.75.109.163:55462.service: Deactivated successfully. Dec 13 09:13:02.236550 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 09:13:02.240258 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Dec 13 09:13:02.244679 systemd-logind[1443]: Removed session 16. Dec 13 09:13:07.214383 systemd[1]: Started sshd@16-146.190.159.183:22-147.75.109.163:56182.service - OpenSSH per-connection server daemon (147.75.109.163:56182). Dec 13 09:13:07.257969 sshd[4081]: Accepted publickey for core from 147.75.109.163 port 56182 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:07.260375 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:07.267033 systemd-logind[1443]: New session 17 of user core. Dec 13 09:13:07.279507 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 09:13:07.441829 sshd[4081]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:07.447539 systemd[1]: sshd@16-146.190.159.183:22-147.75.109.163:56182.service: Deactivated successfully. Dec 13 09:13:07.452552 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 09:13:07.454112 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Dec 13 09:13:07.455598 systemd-logind[1443]: Removed session 17. Dec 13 09:13:07.548865 kubelet[2606]: E1213 09:13:07.548131 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:12.471832 systemd[1]: Started sshd@17-146.190.159.183:22-147.75.109.163:56186.service - OpenSSH per-connection server daemon (147.75.109.163:56186). Dec 13 09:13:12.526972 sshd[4094]: Accepted publickey for core from 147.75.109.163 port 56186 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:12.530762 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:12.541460 systemd-logind[1443]: New session 18 of user core. Dec 13 09:13:12.547328 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 09:13:12.796778 sshd[4094]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:12.809357 systemd[1]: sshd@17-146.190.159.183:22-147.75.109.163:56186.service: Deactivated successfully. Dec 13 09:13:12.812914 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 09:13:12.819180 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Dec 13 09:13:12.828624 systemd[1]: Started sshd@18-146.190.159.183:22-147.75.109.163:56188.service - OpenSSH per-connection server daemon (147.75.109.163:56188). Dec 13 09:13:12.833872 systemd-logind[1443]: Removed session 18. 
Dec 13 09:13:12.930720 sshd[4107]: Accepted publickey for core from 147.75.109.163 port 56188 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:12.934807 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:12.949215 systemd-logind[1443]: New session 19 of user core. Dec 13 09:13:12.953375 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 09:13:13.892428 sshd[4107]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:13.937739 systemd[1]: Started sshd@19-146.190.159.183:22-147.75.109.163:56202.service - OpenSSH per-connection server daemon (147.75.109.163:56202). Dec 13 09:13:13.938708 systemd[1]: sshd@18-146.190.159.183:22-147.75.109.163:56188.service: Deactivated successfully. Dec 13 09:13:13.949581 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 09:13:13.955958 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Dec 13 09:13:13.962076 systemd-logind[1443]: Removed session 19. Dec 13 09:13:14.073907 sshd[4116]: Accepted publickey for core from 147.75.109.163 port 56202 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:14.078332 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:14.089656 systemd-logind[1443]: New session 20 of user core. Dec 13 09:13:14.099628 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 09:13:14.550112 kubelet[2606]: E1213 09:13:14.549847 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:17.060668 sshd[4116]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:17.075888 systemd[1]: sshd@19-146.190.159.183:22-147.75.109.163:56202.service: Deactivated successfully. Dec 13 09:13:17.085843 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 09:13:17.096360 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Dec 13 09:13:17.105085 systemd[1]: Started sshd@20-146.190.159.183:22-147.75.109.163:40932.service - OpenSSH per-connection server daemon (147.75.109.163:40932). Dec 13 09:13:17.112526 systemd-logind[1443]: Removed session 20. Dec 13 09:13:17.194011 sshd[4134]: Accepted publickey for core from 147.75.109.163 port 40932 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:17.196463 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:17.217331 systemd-logind[1443]: New session 21 of user core. Dec 13 09:13:17.221396 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 09:13:17.712659 sshd[4134]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:17.731667 systemd[1]: sshd@20-146.190.159.183:22-147.75.109.163:40932.service: Deactivated successfully. Dec 13 09:13:17.742068 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 09:13:17.748767 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Dec 13 09:13:17.756609 systemd[1]: Started sshd@21-146.190.159.183:22-147.75.109.163:40946.service - OpenSSH per-connection server daemon (147.75.109.163:40946). Dec 13 09:13:17.763167 systemd-logind[1443]: Removed session 21. 
Dec 13 09:13:17.827063 sshd[4146]: Accepted publickey for core from 147.75.109.163 port 40946 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:17.830041 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:17.848907 systemd-logind[1443]: New session 22 of user core. Dec 13 09:13:17.858309 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 09:13:18.091595 sshd[4146]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:18.102095 systemd[1]: sshd@21-146.190.159.183:22-147.75.109.163:40946.service: Deactivated successfully. Dec 13 09:13:18.105917 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 09:13:18.109226 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Dec 13 09:13:18.113094 systemd-logind[1443]: Removed session 22. Dec 13 09:13:21.547315 kubelet[2606]: E1213 09:13:21.547004 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:23.112378 systemd[1]: Started sshd@22-146.190.159.183:22-147.75.109.163:40958.service - OpenSSH per-connection server daemon (147.75.109.163:40958). Dec 13 09:13:23.153862 sshd[4160]: Accepted publickey for core from 147.75.109.163 port 40958 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:23.155951 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:23.164011 systemd-logind[1443]: New session 23 of user core. Dec 13 09:13:23.169451 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 09:13:23.311218 sshd[4160]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:23.317262 systemd[1]: sshd@22-146.190.159.183:22-147.75.109.163:40958.service: Deactivated successfully. Dec 13 09:13:23.320674 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 09:13:23.321905 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. Dec 13 09:13:23.323250 systemd-logind[1443]: Removed session 23. Dec 13 09:13:28.338525 systemd[1]: Started sshd@23-146.190.159.183:22-147.75.109.163:38278.service - OpenSSH per-connection server daemon (147.75.109.163:38278). Dec 13 09:13:28.405458 sshd[4176]: Accepted publickey for core from 147.75.109.163 port 38278 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:28.406639 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:28.426172 systemd-logind[1443]: New session 24 of user core. Dec 13 09:13:28.431252 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 09:13:28.646749 sshd[4176]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:28.659695 systemd[1]: sshd@23-146.190.159.183:22-147.75.109.163:38278.service: Deactivated successfully. Dec 13 09:13:28.668563 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 09:13:28.674768 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Dec 13 09:13:28.678020 systemd-logind[1443]: Removed session 24. Dec 13 09:13:33.668251 systemd[1]: Started sshd@24-146.190.159.183:22-147.75.109.163:38284.service - OpenSSH per-connection server daemon (147.75.109.163:38284). 
Dec 13 09:13:33.715001 sshd[4189]: Accepted publickey for core from 147.75.109.163 port 38284 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:33.716615 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:33.723236 systemd-logind[1443]: New session 25 of user core. Dec 13 09:13:33.731308 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 09:13:33.871092 sshd[4189]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:33.877338 systemd[1]: sshd@24-146.190.159.183:22-147.75.109.163:38284.service: Deactivated successfully. Dec 13 09:13:33.880262 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 09:13:33.881810 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. Dec 13 09:13:33.886218 systemd-logind[1443]: Removed session 25. Dec 13 09:13:35.549029 kubelet[2606]: E1213 09:13:35.548341 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:38.894513 systemd[1]: Started sshd@25-146.190.159.183:22-147.75.109.163:35454.service - OpenSSH per-connection server daemon (147.75.109.163:35454). Dec 13 09:13:39.004981 sshd[4206]: Accepted publickey for core from 147.75.109.163 port 35454 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:39.009879 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:39.023092 systemd-logind[1443]: New session 26 of user core. Dec 13 09:13:39.030328 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 09:13:39.263222 sshd[4206]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:39.275510 systemd[1]: sshd@25-146.190.159.183:22-147.75.109.163:35454.service: Deactivated successfully. Dec 13 09:13:39.278843 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 09:13:39.288865 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit. Dec 13 09:13:39.300953 systemd[1]: Started sshd@26-146.190.159.183:22-147.75.109.163:35464.service - OpenSSH per-connection server daemon (147.75.109.163:35464). Dec 13 09:13:39.304257 systemd-logind[1443]: Removed session 26. Dec 13 09:13:39.362871 sshd[4219]: Accepted publickey for core from 147.75.109.163 port 35464 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:39.367664 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:39.385089 systemd-logind[1443]: New session 27 of user core. Dec 13 09:13:39.390301 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 13 09:13:41.548686 kubelet[2606]: E1213 09:13:41.548593 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:42.460807 containerd[1457]: time="2024-12-13T09:13:42.460650125Z" level=info msg="StopContainer for \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\" with timeout 30 (s)" Dec 13 09:13:42.465976 containerd[1457]: time="2024-12-13T09:13:42.464953006Z" level=info msg="Stop container \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\" with signal terminated" Dec 13 09:13:42.486907 containerd[1457]: time="2024-12-13T09:13:42.486822499Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 09:13:42.500424 containerd[1457]: time="2024-12-13T09:13:42.500239831Z" level=info msg="StopContainer for \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\" with timeout 2 (s)" Dec 13 09:13:42.501525 containerd[1457]: time="2024-12-13T09:13:42.501361703Z" level=info msg="Stop container \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\" with signal terminated" Dec 13 09:13:42.508853 systemd[1]: cri-containerd-5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b.scope: Deactivated successfully. Dec 13 09:13:42.528775 systemd-networkd[1365]: lxc_health: Link DOWN Dec 13 09:13:42.528790 systemd-networkd[1365]: lxc_health: Lost carrier Dec 13 09:13:42.576810 systemd[1]: cri-containerd-700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43.scope: Deactivated successfully. Dec 13 09:13:42.577603 systemd[1]: cri-containerd-700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43.scope: Consumed 12.276s CPU time. Dec 13 09:13:42.596717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b-rootfs.mount: Deactivated successfully. Dec 13 09:13:42.635288 containerd[1457]: time="2024-12-13T09:13:42.635193382Z" level=info msg="shim disconnected" id=5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b namespace=k8s.io Dec 13 09:13:42.635589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43-rootfs.mount: Deactivated successfully. 
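The StopContainer entries above show containerd's CRI plugin stopping the two cilium containers with 30-second and 2-second timeouts respectively, sending SIGTERM first ("with signal terminated") and escalating only if the task outlives its grace period. A rough equivalent using the containerd Go client; the socket path and the k8s.io namespace are the conventional ones for a kubelet node, assumed here rather than confirmed by this log:

```go
package main

import (
	"context"
	"syscall"
	"time"

	"github.com/containerd/containerd"
)

// stopWithGracePeriod sends SIGTERM to a container's task and, if it has
// not exited when the grace period lapses, follows up with SIGKILL. It
// mirrors the StopContainer behaviour visible in the log, not
// containerd's exact implementation.
func stopWithGracePeriod(ctx context.Context, client *containerd.Client, id string, grace time.Duration) error {
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitC, err := task.Wait(ctx)
	if err != nil {
		return err
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitC: // exited within the grace period
		return nil
	case <-time.After(grace):
		return task.Kill(ctx, syscall.SIGKILL) // escalate on timeout
	}
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock",
		containerd.WithDefaultNamespace("k8s.io"))
	if err != nil {
		panic(err)
	}
	defer client.Close()
	// The 30-second grace period matches the StopContainer entry above.
	_ = stopWithGracePeriod(context.Background(), client,
		"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b", 30*time.Second)
}
```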
Dec 13 09:13:42.636873 containerd[1457]: time="2024-12-13T09:13:42.636311457Z" level=warning msg="cleaning up after shim disconnected" id=5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b namespace=k8s.io Dec 13 09:13:42.636873 containerd[1457]: time="2024-12-13T09:13:42.636344615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:42.636873 containerd[1457]: time="2024-12-13T09:13:42.636690352Z" level=info msg="shim disconnected" id=700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43 namespace=k8s.io Dec 13 09:13:42.636873 containerd[1457]: time="2024-12-13T09:13:42.636740707Z" level=warning msg="cleaning up after shim disconnected" id=700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43 namespace=k8s.io Dec 13 09:13:42.636873 containerd[1457]: time="2024-12-13T09:13:42.636748387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:42.677005 containerd[1457]: time="2024-12-13T09:13:42.676827438Z" level=info msg="StopContainer for \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\" returns successfully" Dec 13 09:13:42.678211 containerd[1457]: time="2024-12-13T09:13:42.678168875Z" level=info msg="StopPodSandbox for \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\"" Dec 13 09:13:42.681475 containerd[1457]: time="2024-12-13T09:13:42.678231537Z" level=info msg="Container to stop \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:42.681475 containerd[1457]: time="2024-12-13T09:13:42.678243882Z" level=info msg="Container to stop \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:42.681475 containerd[1457]: time="2024-12-13T09:13:42.678255665Z" level=info msg="Container to stop \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:42.681475 containerd[1457]: time="2024-12-13T09:13:42.678265414Z" level=info msg="Container to stop \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:42.681475 containerd[1457]: time="2024-12-13T09:13:42.678276840Z" level=info msg="Container to stop \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:42.684021 containerd[1457]: time="2024-12-13T09:13:42.682809638Z" level=info msg="StopContainer for \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\" returns successfully" Dec 13 09:13:42.683637 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c-shm.mount: Deactivated successfully. 
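Once a pod's containers have exited, the sandbox itself is stopped; the repeated "must be in running or unknown state" lines are the CRI plugin verifying each container's state before StopPodSandbox proceeds. The same operation can be issued directly against the CRI v1 API, sketched here over containerd's socket (the endpoint path is an assumption based on typical node layout):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI endpoint exposed by containerd.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandbox ID copied from the StopPodSandbox entry in the log above.
	_, err = rt.StopPodSandbox(context.Background(), &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```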
Dec 13 09:13:42.685785 containerd[1457]: time="2024-12-13T09:13:42.685501690Z" level=info msg="StopPodSandbox for \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\"" Dec 13 09:13:42.685785 containerd[1457]: time="2024-12-13T09:13:42.685581397Z" level=info msg="Container to stop \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:13:42.690409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718-shm.mount: Deactivated successfully. Dec 13 09:13:42.701617 systemd[1]: cri-containerd-e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c.scope: Deactivated successfully. Dec 13 09:13:42.734873 systemd[1]: cri-containerd-3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718.scope: Deactivated successfully. Dec 13 09:13:42.761981 containerd[1457]: time="2024-12-13T09:13:42.761599013Z" level=info msg="shim disconnected" id=e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c namespace=k8s.io Dec 13 09:13:42.761981 containerd[1457]: time="2024-12-13T09:13:42.761661327Z" level=warning msg="cleaning up after shim disconnected" id=e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c namespace=k8s.io Dec 13 09:13:42.761981 containerd[1457]: time="2024-12-13T09:13:42.761670691Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:42.797420 containerd[1457]: time="2024-12-13T09:13:42.797104130Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:13:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:13:42.809595 containerd[1457]: time="2024-12-13T09:13:42.808899286Z" level=info msg="shim disconnected" id=3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718 namespace=k8s.io Dec 13 09:13:42.809595 containerd[1457]: time="2024-12-13T09:13:42.808984431Z" level=warning msg="cleaning up after shim disconnected" id=3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718 namespace=k8s.io Dec 13 09:13:42.809595 containerd[1457]: time="2024-12-13T09:13:42.808997815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:42.831054 containerd[1457]: time="2024-12-13T09:13:42.830013064Z" level=info msg="TearDown network for sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" successfully" Dec 13 09:13:42.831054 containerd[1457]: time="2024-12-13T09:13:42.830904091Z" level=info msg="StopPodSandbox for \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" returns successfully" Dec 13 09:13:42.856859 containerd[1457]: time="2024-12-13T09:13:42.856708046Z" level=info msg="TearDown network for sandbox \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\" successfully" Dec 13 09:13:42.857474 containerd[1457]: time="2024-12-13T09:13:42.856903215Z" level=info msg="StopPodSandbox for \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\" returns successfully" Dec 13 09:13:42.937510 kubelet[2606]: I1213 09:13:42.937464 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfb4cf56-808d-4df2-b49f-3237e5313657-hubble-tls\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.938546 kubelet[2606]: 
I1213 09:13:42.938521 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnlfb\" (UniqueName: \"kubernetes.io/projected/bfb4cf56-808d-4df2-b49f-3237e5313657-kube-api-access-gnlfb\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.938655 kubelet[2606]: I1213 09:13:42.938643 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-xtables-lock\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.938717 kubelet[2606]: I1213 09:13:42.938708 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-config-path\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939594 kubelet[2606]: I1213 09:13:42.938851 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-bpf-maps\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939594 kubelet[2606]: I1213 09:13:42.938882 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-host-proc-sys-kernel\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939594 kubelet[2606]: I1213 09:13:42.938898 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-lib-modules\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939594 kubelet[2606]: I1213 09:13:42.938913 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-host-proc-sys-net\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939594 kubelet[2606]: I1213 09:13:42.938949 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-run\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939594 kubelet[2606]: I1213 09:13:42.938963 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-cgroup\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939813 kubelet[2606]: I1213 09:13:42.938985 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfb4cf56-808d-4df2-b49f-3237e5313657-clustermesh-secrets\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939813 kubelet[2606]: I1213 09:13:42.939005 2606 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5v8q\" (UniqueName: \"kubernetes.io/projected/ed6cfa66-5c95-4fea-b198-bb760b7f1c7f-kube-api-access-k5v8q\") pod \"ed6cfa66-5c95-4fea-b198-bb760b7f1c7f\" (UID: \"ed6cfa66-5c95-4fea-b198-bb760b7f1c7f\") " Dec 13 09:13:42.939813 kubelet[2606]: I1213 09:13:42.939019 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-hostproc\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939813 kubelet[2606]: I1213 09:13:42.939035 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cni-path\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939813 kubelet[2606]: I1213 09:13:42.939051 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-etc-cni-netd\") pod \"bfb4cf56-808d-4df2-b49f-3237e5313657\" (UID: \"bfb4cf56-808d-4df2-b49f-3237e5313657\") " Dec 13 09:13:42.939813 kubelet[2606]: I1213 09:13:42.939070 2606 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed6cfa66-5c95-4fea-b198-bb760b7f1c7f-cilium-config-path\") pod \"ed6cfa66-5c95-4fea-b198-bb760b7f1c7f\" (UID: \"ed6cfa66-5c95-4fea-b198-bb760b7f1c7f\") " Dec 13 09:13:42.944437 kubelet[2606]: I1213 09:13:42.944387 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb4cf56-808d-4df2-b49f-3237e5313657-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:13:42.944745 kubelet[2606]: I1213 09:13:42.944717 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:42.945104 kubelet[2606]: I1213 09:13:42.942601 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6cfa66-5c95-4fea-b198-bb760b7f1c7f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ed6cfa66-5c95-4fea-b198-bb760b7f1c7f" (UID: "ed6cfa66-5c95-4fea-b198-bb760b7f1c7f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 09:13:42.945199 kubelet[2606]: I1213 09:13:42.945135 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:42.945199 kubelet[2606]: I1213 09:13:42.945184 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:42.949912 kubelet[2606]: I1213 09:13:42.949848 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfb4cf56-808d-4df2-b49f-3237e5313657-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 09:13:42.950724 kubelet[2606]: I1213 09:13:42.950315 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb4cf56-808d-4df2-b49f-3237e5313657-kube-api-access-gnlfb" (OuterVolumeSpecName: "kube-api-access-gnlfb") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "kube-api-access-gnlfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:13:42.950724 kubelet[2606]: I1213 09:13:42.950368 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:42.954430 kubelet[2606]: I1213 09:13:42.954365 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 09:13:42.954691 kubelet[2606]: I1213 09:13:42.954658 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:42.954866 kubelet[2606]: I1213 09:13:42.954847 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:42.954990 kubelet[2606]: I1213 09:13:42.954974 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:42.955174 kubelet[2606]: I1213 09:13:42.955154 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cni-path" (OuterVolumeSpecName: "cni-path") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:42.955265 kubelet[2606]: I1213 09:13:42.955172 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6cfa66-5c95-4fea-b198-bb760b7f1c7f-kube-api-access-k5v8q" (OuterVolumeSpecName: "kube-api-access-k5v8q") pod "ed6cfa66-5c95-4fea-b198-bb760b7f1c7f" (UID: "ed6cfa66-5c95-4fea-b198-bb760b7f1c7f"). InnerVolumeSpecName "kube-api-access-k5v8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:13:42.955361 kubelet[2606]: I1213 09:13:42.955204 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:42.955361 kubelet[2606]: I1213 09:13:42.955228 2606 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-hostproc" (OuterVolumeSpecName: "hostproc") pod "bfb4cf56-808d-4df2-b49f-3237e5313657" (UID: "bfb4cf56-808d-4df2-b49f-3237e5313657"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:13:43.045957 kubelet[2606]: I1213 09:13:43.045361 2606 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-host-proc-sys-net\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.045957 kubelet[2606]: I1213 09:13:43.045440 2606 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-lib-modules\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.045957 kubelet[2606]: I1213 09:13:43.045463 2606 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-run\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.045957 kubelet[2606]: I1213 09:13:43.045479 2606 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfb4cf56-808d-4df2-b49f-3237e5313657-clustermesh-secrets\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.045957 kubelet[2606]: I1213 09:13:43.045497 2606 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k5v8q\" (UniqueName: \"kubernetes.io/projected/ed6cfa66-5c95-4fea-b198-bb760b7f1c7f-kube-api-access-k5v8q\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.045957 kubelet[2606]: I1213 09:13:43.045515 2606 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-cgroup\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.045957 
kubelet[2606]: I1213 09:13:43.045529 2606 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-cni-path\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.045957 kubelet[2606]: I1213 09:13:43.045544 2606 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-hostproc\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.046584 kubelet[2606]: I1213 09:13:43.045559 2606 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-etc-cni-netd\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.046584 kubelet[2606]: I1213 09:13:43.045592 2606 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed6cfa66-5c95-4fea-b198-bb760b7f1c7f-cilium-config-path\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.046584 kubelet[2606]: I1213 09:13:43.045611 2606 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-xtables-lock\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.046584 kubelet[2606]: I1213 09:13:43.045628 2606 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfb4cf56-808d-4df2-b49f-3237e5313657-hubble-tls\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.046584 kubelet[2606]: I1213 09:13:43.045643 2606 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gnlfb\" (UniqueName: \"kubernetes.io/projected/bfb4cf56-808d-4df2-b49f-3237e5313657-kube-api-access-gnlfb\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.046584 kubelet[2606]: I1213 09:13:43.045660 2606 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfb4cf56-808d-4df2-b49f-3237e5313657-cilium-config-path\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.046584 kubelet[2606]: I1213 09:13:43.045677 2606 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-bpf-maps\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.046584 kubelet[2606]: I1213 09:13:43.045692 2606 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfb4cf56-808d-4df2-b49f-3237e5313657-host-proc-sys-kernel\") on node \"ci-4081.2.1-b-8823ebc6cf\" DevicePath \"\"" Dec 13 09:13:43.149295 kubelet[2606]: I1213 09:13:43.148972 2606 scope.go:117] "RemoveContainer" containerID="5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b" Dec 13 09:13:43.153347 containerd[1457]: time="2024-12-13T09:13:43.153202962Z" level=info msg="RemoveContainer for \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\"" Dec 13 09:13:43.155803 systemd[1]: Removed slice kubepods-besteffort-poded6cfa66_5c95_4fea_b198_bb760b7f1c7f.slice - libcontainer container kubepods-besteffort-poded6cfa66_5c95_4fea_b198_bb760b7f1c7f.slice. 
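The slice removals above ("Removed slice kubepods-besteffort-pod...") reveal kubelet's systemd cgroup naming: the pod's QoS class selects the parent slice, and the pod UID is embedded with its dashes rewritten to underscores, since '-' delimits slice hierarchy levels in systemd. A reconstruction of that rule as inferred from these entries (kubelet's real helper lives in its cgroup manager):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the systemd slice name kubelet gives a pod,
// e.g. kubepods-besteffort-pod<uid>.slice, replacing '-' in the UID
// with '_' because '-' separates slice hierarchy levels.
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "" {
		// Guaranteed pods sit directly under kubepods.slice; hedged,
		// as no Guaranteed pod appears in this log to confirm it.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// Both names match the slices removed in the log above.
	fmt.Println(podSliceName("besteffort", "ed6cfa66-5c95-4fea-b198-bb760b7f1c7f"))
	fmt.Println(podSliceName("burstable", "bfb4cf56-808d-4df2-b49f-3237e5313657"))
}
```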
Dec 13 09:13:43.189125 containerd[1457]: time="2024-12-13T09:13:43.189036919Z" level=info msg="RemoveContainer for \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\" returns successfully" Dec 13 09:13:43.193783 kubelet[2606]: I1213 09:13:43.193702 2606 scope.go:117] "RemoveContainer" containerID="5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b" Dec 13 09:13:43.210552 containerd[1457]: time="2024-12-13T09:13:43.198697282Z" level=error msg="ContainerStatus for \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\": not found" Dec 13 09:13:43.228129 systemd[1]: Removed slice kubepods-burstable-podbfb4cf56_808d_4df2_b49f_3237e5313657.slice - libcontainer container kubepods-burstable-podbfb4cf56_808d_4df2_b49f_3237e5313657.slice. Dec 13 09:13:43.228303 systemd[1]: kubepods-burstable-podbfb4cf56_808d_4df2_b49f_3237e5313657.slice: Consumed 12.403s CPU time. Dec 13 09:13:43.243618 kubelet[2606]: E1213 09:13:43.243539 2606 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\": not found" containerID="5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b" Dec 13 09:13:43.246114 kubelet[2606]: I1213 09:13:43.245712 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b"} err="failed to get container status \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5197a07348ca33010d7be8012b0708ff83709154bf8c5ff15a7ce34a2aab0f7b\": not found" Dec 13 09:13:43.246114 kubelet[2606]: I1213 09:13:43.245885 2606 scope.go:117] "RemoveContainer" containerID="700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43" Dec 13 09:13:43.250483 containerd[1457]: time="2024-12-13T09:13:43.250424261Z" level=info msg="RemoveContainer for \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\"" Dec 13 09:13:43.258795 containerd[1457]: time="2024-12-13T09:13:43.257763875Z" level=info msg="RemoveContainer for \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\" returns successfully" Dec 13 09:13:43.258984 kubelet[2606]: I1213 09:13:43.258502 2606 scope.go:117] "RemoveContainer" containerID="0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a" Dec 13 09:13:43.263747 containerd[1457]: time="2024-12-13T09:13:43.263605587Z" level=info msg="RemoveContainer for \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\"" Dec 13 09:13:43.269295 containerd[1457]: time="2024-12-13T09:13:43.269219498Z" level=info msg="RemoveContainer for \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\" returns successfully" Dec 13 09:13:43.270091 kubelet[2606]: I1213 09:13:43.269677 2606 scope.go:117] "RemoveContainer" containerID="3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf" Dec 13 09:13:43.272826 containerd[1457]: time="2024-12-13T09:13:43.272339459Z" level=info msg="RemoveContainer for \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\"" Dec 13 09:13:43.277377 containerd[1457]: time="2024-12-13T09:13:43.276836642Z" level=info 
msg="RemoveContainer for \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\" returns successfully" Dec 13 09:13:43.277535 kubelet[2606]: I1213 09:13:43.277194 2606 scope.go:117] "RemoveContainer" containerID="6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809" Dec 13 09:13:43.278864 containerd[1457]: time="2024-12-13T09:13:43.278806714Z" level=info msg="RemoveContainer for \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\"" Dec 13 09:13:43.284468 containerd[1457]: time="2024-12-13T09:13:43.283741375Z" level=info msg="RemoveContainer for \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\" returns successfully" Dec 13 09:13:43.284637 kubelet[2606]: I1213 09:13:43.284277 2606 scope.go:117] "RemoveContainer" containerID="c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5" Dec 13 09:13:43.286583 containerd[1457]: time="2024-12-13T09:13:43.286522241Z" level=info msg="RemoveContainer for \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\"" Dec 13 09:13:43.290759 containerd[1457]: time="2024-12-13T09:13:43.290649956Z" level=info msg="RemoveContainer for \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\" returns successfully" Dec 13 09:13:43.291502 kubelet[2606]: I1213 09:13:43.291038 2606 scope.go:117] "RemoveContainer" containerID="700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43" Dec 13 09:13:43.292042 containerd[1457]: time="2024-12-13T09:13:43.291794579Z" level=error msg="ContainerStatus for \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\": not found" Dec 13 09:13:43.292809 kubelet[2606]: E1213 09:13:43.292340 2606 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\": not found" containerID="700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43" Dec 13 09:13:43.292809 kubelet[2606]: I1213 09:13:43.292389 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43"} err="failed to get container status \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\": rpc error: code = NotFound desc = an error occurred when try to find container \"700ec5505cad5ac65f129495dea853f1d6bee562cd5bf053d5f3af13c83fcf43\": not found" Dec 13 09:13:43.292809 kubelet[2606]: I1213 09:13:43.292432 2606 scope.go:117] "RemoveContainer" containerID="0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a" Dec 13 09:13:43.293026 containerd[1457]: time="2024-12-13T09:13:43.292724008Z" level=error msg="ContainerStatus for \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\": not found" Dec 13 09:13:43.293385 kubelet[2606]: E1213 09:13:43.293181 2606 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\": not found" 
containerID="0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a" Dec 13 09:13:43.293385 kubelet[2606]: I1213 09:13:43.293225 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a"} err="failed to get container status \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a717344e7a3e1f54257298f61426a25b1bfc0830752610d4834dd297f36a96a\": not found" Dec 13 09:13:43.293385 kubelet[2606]: I1213 09:13:43.293272 2606 scope.go:117] "RemoveContainer" containerID="3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf" Dec 13 09:13:43.293706 containerd[1457]: time="2024-12-13T09:13:43.293636631Z" level=error msg="ContainerStatus for \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\": not found" Dec 13 09:13:43.294090 kubelet[2606]: E1213 09:13:43.293882 2606 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\": not found" containerID="3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf" Dec 13 09:13:43.294090 kubelet[2606]: I1213 09:13:43.293921 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf"} err="failed to get container status \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b0b3a6fbedd8b053aeab7029733f99727cb69075a4668f842bd52f14b9fcfbf\": not found" Dec 13 09:13:43.294090 kubelet[2606]: I1213 09:13:43.293989 2606 scope.go:117] "RemoveContainer" containerID="6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809" Dec 13 09:13:43.294420 containerd[1457]: time="2024-12-13T09:13:43.294305258Z" level=error msg="ContainerStatus for \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\": not found" Dec 13 09:13:43.294732 kubelet[2606]: E1213 09:13:43.294583 2606 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\": not found" containerID="6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809" Dec 13 09:13:43.294732 kubelet[2606]: I1213 09:13:43.294613 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809"} err="failed to get container status \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d1b5b28eac5d2d2874291dfe1cf7e9fff8c1dfc7101384b7acb635a223b7809\": not found" Dec 13 09:13:43.294732 kubelet[2606]: I1213 09:13:43.294648 2606 scope.go:117] "RemoveContainer" 
containerID="c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5" Dec 13 09:13:43.295414 containerd[1457]: time="2024-12-13T09:13:43.295067678Z" level=error msg="ContainerStatus for \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\": not found" Dec 13 09:13:43.295562 kubelet[2606]: E1213 09:13:43.295530 2606 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\": not found" containerID="c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5" Dec 13 09:13:43.295695 kubelet[2606]: I1213 09:13:43.295666 2606 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5"} err="failed to get container status \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8a86559c6dae74bf4f60e37bdc702cc9f455df12d446a6f670ea5c94989a5a5\": not found" Dec 13 09:13:43.448601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c-rootfs.mount: Deactivated successfully. Dec 13 09:13:43.448767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718-rootfs.mount: Deactivated successfully. Dec 13 09:13:43.448883 systemd[1]: var-lib-kubelet-pods-ed6cfa66\x2d5c95\x2d4fea\x2db198\x2dbb760b7f1c7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk5v8q.mount: Deactivated successfully. Dec 13 09:13:43.449035 systemd[1]: var-lib-kubelet-pods-bfb4cf56\x2d808d\x2d4df2\x2db49f\x2d3237e5313657-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgnlfb.mount: Deactivated successfully. Dec 13 09:13:43.449315 systemd[1]: var-lib-kubelet-pods-bfb4cf56\x2d808d\x2d4df2\x2db49f\x2d3237e5313657-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 09:13:43.449407 systemd[1]: var-lib-kubelet-pods-bfb4cf56\x2d808d\x2d4df2\x2db49f\x2d3237e5313657-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 09:13:43.551048 kubelet[2606]: I1213 09:13:43.550971 2606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb4cf56-808d-4df2-b49f-3237e5313657" path="/var/lib/kubelet/pods/bfb4cf56-808d-4df2-b49f-3237e5313657/volumes" Dec 13 09:13:43.552773 kubelet[2606]: I1213 09:13:43.552519 2606 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6cfa66-5c95-4fea-b198-bb760b7f1c7f" path="/var/lib/kubelet/pods/ed6cfa66-5c95-4fea-b198-bb760b7f1c7f/volumes" Dec 13 09:13:44.313219 sshd[4219]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:44.328218 systemd[1]: sshd@26-146.190.159.183:22-147.75.109.163:35464.service: Deactivated successfully. Dec 13 09:13:44.332698 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 09:13:44.333141 systemd[1]: session-27.scope: Consumed 2.022s CPU time. Dec 13 09:13:44.337403 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit. 
Dec 13 09:13:44.344550 systemd[1]: Started sshd@27-146.190.159.183:22-147.75.109.163:35466.service - OpenSSH per-connection server daemon (147.75.109.163:35466). Dec 13 09:13:44.347882 systemd-logind[1443]: Removed session 27. Dec 13 09:13:44.397870 sshd[4383]: Accepted publickey for core from 147.75.109.163 port 35466 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:44.400480 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:44.408042 systemd-logind[1443]: New session 28 of user core. Dec 13 09:13:44.416364 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 09:13:44.720659 kubelet[2606]: E1213 09:13:44.720477 2606 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 09:13:45.262086 sshd[4383]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:45.273442 systemd[1]: sshd@27-146.190.159.183:22-147.75.109.163:35466.service: Deactivated successfully. Dec 13 09:13:45.279616 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 09:13:45.283690 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit. Dec 13 09:13:45.293765 systemd[1]: Started sshd@28-146.190.159.183:22-147.75.109.163:35468.service - OpenSSH per-connection server daemon (147.75.109.163:35468). Dec 13 09:13:45.297740 systemd-logind[1443]: Removed session 28. Dec 13 09:13:45.330799 kubelet[2606]: I1213 09:13:45.329790 2606 topology_manager.go:215] "Topology Admit Handler" podUID="2b144795-3f75-42d9-b9ee-14dc8bef1f52" podNamespace="kube-system" podName="cilium-rqx24" Dec 13 09:13:45.330799 kubelet[2606]: E1213 09:13:45.329883 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfb4cf56-808d-4df2-b49f-3237e5313657" containerName="apply-sysctl-overwrites" Dec 13 09:13:45.330799 kubelet[2606]: E1213 09:13:45.329898 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfb4cf56-808d-4df2-b49f-3237e5313657" containerName="mount-bpf-fs" Dec 13 09:13:45.330799 kubelet[2606]: E1213 09:13:45.329909 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfb4cf56-808d-4df2-b49f-3237e5313657" containerName="clean-cilium-state" Dec 13 09:13:45.330799 kubelet[2606]: E1213 09:13:45.329942 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed6cfa66-5c95-4fea-b198-bb760b7f1c7f" containerName="cilium-operator" Dec 13 09:13:45.330799 kubelet[2606]: E1213 09:13:45.329952 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfb4cf56-808d-4df2-b49f-3237e5313657" containerName="mount-cgroup" Dec 13 09:13:45.330799 kubelet[2606]: E1213 09:13:45.329963 2606 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfb4cf56-808d-4df2-b49f-3237e5313657" containerName="cilium-agent" Dec 13 09:13:45.330799 kubelet[2606]: I1213 09:13:45.329998 2606 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed6cfa66-5c95-4fea-b198-bb760b7f1c7f" containerName="cilium-operator" Dec 13 09:13:45.330799 kubelet[2606]: I1213 09:13:45.330011 2606 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb4cf56-808d-4df2-b49f-3237e5313657" containerName="cilium-agent" Dec 13 09:13:45.356036 sshd[4396]: Accepted publickey for core from 147.75.109.163 port 35468 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:45.368990 kubelet[2606]: I1213 09:13:45.368060 
2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2b144795-3f75-42d9-b9ee-14dc8bef1f52-cilium-ipsec-secrets\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.368990 kubelet[2606]: I1213 09:13:45.368138 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-lib-modules\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.368990 kubelet[2606]: I1213 09:13:45.368180 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-host-proc-sys-net\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.368990 kubelet[2606]: I1213 09:13:45.368302 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-cilium-run\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.368990 kubelet[2606]: I1213 09:13:45.368334 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-bpf-maps\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.368990 kubelet[2606]: I1213 09:13:45.368363 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-host-proc-sys-kernel\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369395 kubelet[2606]: I1213 09:13:45.368396 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-cilium-cgroup\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369395 kubelet[2606]: I1213 09:13:45.368425 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b144795-3f75-42d9-b9ee-14dc8bef1f52-clustermesh-secrets\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369395 kubelet[2606]: I1213 09:13:45.368455 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc4hv\" (UniqueName: \"kubernetes.io/projected/2b144795-3f75-42d9-b9ee-14dc8bef1f52-kube-api-access-mc4hv\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369395 kubelet[2606]: I1213 09:13:45.368482 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-etc-cni-netd\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369395 kubelet[2606]: I1213 09:13:45.368512 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-xtables-lock\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369395 kubelet[2606]: I1213 09:13:45.368538 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-cni-path\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369669 kubelet[2606]: I1213 09:13:45.368568 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b144795-3f75-42d9-b9ee-14dc8bef1f52-hostproc\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369669 kubelet[2606]: I1213 09:13:45.368595 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b144795-3f75-42d9-b9ee-14dc8bef1f52-cilium-config-path\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369669 kubelet[2606]: I1213 09:13:45.368620 2606 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b144795-3f75-42d9-b9ee-14dc8bef1f52-hubble-tls\") pod \"cilium-rqx24\" (UID: \"2b144795-3f75-42d9-b9ee-14dc8bef1f52\") " pod="kube-system/cilium-rqx24" Dec 13 09:13:45.369669 kubelet[2606]: W1213 09:13:45.368805 2606 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081.2.1-b-8823ebc6cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-b-8823ebc6cf' and this object Dec 13 09:13:45.369813 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:45.373739 kubelet[2606]: W1213 09:13:45.371226 2606 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081.2.1-b-8823ebc6cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-b-8823ebc6cf' and this object Dec 13 09:13:45.374285 systemd[1]: Created slice kubepods-burstable-pod2b144795_3f75_42d9_b9ee_14dc8bef1f52.slice - libcontainer container kubepods-burstable-pod2b144795_3f75_42d9_b9ee_14dc8bef1f52.slice. 
Dec 13 09:13:45.385969 kubelet[2606]: W1213 09:13:45.385047 2606 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081.2.1-b-8823ebc6cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-b-8823ebc6cf' and this object Dec 13 09:13:45.385969 kubelet[2606]: E1213 09:13:45.385096 2606 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081.2.1-b-8823ebc6cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-b-8823ebc6cf' and this object Dec 13 09:13:45.385969 kubelet[2606]: W1213 09:13:45.385133 2606 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081.2.1-b-8823ebc6cf" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-b-8823ebc6cf' and this object Dec 13 09:13:45.385969 kubelet[2606]: E1213 09:13:45.385143 2606 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081.2.1-b-8823ebc6cf" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-b-8823ebc6cf' and this object Dec 13 09:13:45.385969 kubelet[2606]: E1213 09:13:45.385256 2606 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081.2.1-b-8823ebc6cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-b-8823ebc6cf' and this object Dec 13 09:13:45.386414 kubelet[2606]: E1213 09:13:45.385275 2606 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081.2.1-b-8823ebc6cf" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.2.1-b-8823ebc6cf' and this object Dec 13 09:13:45.395404 systemd-logind[1443]: New session 29 of user core. Dec 13 09:13:45.406226 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 13 09:13:45.483063 sshd[4396]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:45.502592 systemd[1]: sshd@28-146.190.159.183:22-147.75.109.163:35468.service: Deactivated successfully. Dec 13 09:13:45.505735 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 09:13:45.509425 systemd-logind[1443]: Session 29 logged out. Waiting for processes to exit. Dec 13 09:13:45.515711 systemd[1]: Started sshd@29-146.190.159.183:22-147.75.109.163:35480.service - OpenSSH per-connection server daemon (147.75.109.163:35480). Dec 13 09:13:45.518742 systemd-logind[1443]: Removed session 29. 
Dec 13 09:13:45.583721 sshd[4405]: Accepted publickey for core from 147.75.109.163 port 35480 ssh2: RSA SHA256:MBgsXj4XmUdA9aYiHDxHU6W7jfMbGlrjxNqF1h45cRo Dec 13 09:13:45.584732 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:45.596057 systemd-logind[1443]: New session 30 of user core. Dec 13 09:13:45.601422 systemd[1]: Started session-30.scope - Session 30 of User core. Dec 13 09:13:46.472362 kubelet[2606]: E1213 09:13:46.472239 2606 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Dec 13 09:13:46.472362 kubelet[2606]: E1213 09:13:46.472288 2606 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Dec 13 09:13:46.473019 kubelet[2606]: E1213 09:13:46.472248 2606 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Dec 13 09:13:46.486439 kubelet[2606]: E1213 09:13:46.481163 2606 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-rqx24: failed to sync secret cache: timed out waiting for the condition Dec 13 09:13:46.501197 kubelet[2606]: E1213 09:13:46.500871 2606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b144795-3f75-42d9-b9ee-14dc8bef1f52-clustermesh-secrets podName:2b144795-3f75-42d9-b9ee-14dc8bef1f52 nodeName:}" failed. No retries permitted until 2024-12-13 09:13:46.972364513 +0000 UTC m=+117.605840158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/2b144795-3f75-42d9-b9ee-14dc8bef1f52-clustermesh-secrets") pod "cilium-rqx24" (UID: "2b144795-3f75-42d9-b9ee-14dc8bef1f52") : failed to sync secret cache: timed out waiting for the condition Dec 13 09:13:46.501197 kubelet[2606]: E1213 09:13:46.501023 2606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b144795-3f75-42d9-b9ee-14dc8bef1f52-cilium-ipsec-secrets podName:2b144795-3f75-42d9-b9ee-14dc8bef1f52 nodeName:}" failed. No retries permitted until 2024-12-13 09:13:47.000986133 +0000 UTC m=+117.634461757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/2b144795-3f75-42d9-b9ee-14dc8bef1f52-cilium-ipsec-secrets") pod "cilium-rqx24" (UID: "2b144795-3f75-42d9-b9ee-14dc8bef1f52") : failed to sync secret cache: timed out waiting for the condition Dec 13 09:13:46.501197 kubelet[2606]: E1213 09:13:46.501151 2606 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b144795-3f75-42d9-b9ee-14dc8bef1f52-hubble-tls podName:2b144795-3f75-42d9-b9ee-14dc8bef1f52 nodeName:}" failed. No retries permitted until 2024-12-13 09:13:47.00113753 +0000 UTC m=+117.634613170 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/2b144795-3f75-42d9-b9ee-14dc8bef1f52-hubble-tls") pod "cilium-rqx24" (UID: "2b144795-3f75-42d9-b9ee-14dc8bef1f52") : failed to sync secret cache: timed out waiting for the condition Dec 13 09:13:47.201006 kubelet[2606]: E1213 09:13:47.200393 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:47.201258 containerd[1457]: time="2024-12-13T09:13:47.201188995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqx24,Uid:2b144795-3f75-42d9-b9ee-14dc8bef1f52,Namespace:kube-system,Attempt:0,}" Dec 13 09:13:47.239978 containerd[1457]: time="2024-12-13T09:13:47.239738892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:13:47.239978 containerd[1457]: time="2024-12-13T09:13:47.239970244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:13:47.240229 containerd[1457]: time="2024-12-13T09:13:47.240012024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:13:47.240310 containerd[1457]: time="2024-12-13T09:13:47.240263214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:13:47.280430 systemd[1]: Started cri-containerd-f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a.scope - libcontainer container f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a. Dec 13 09:13:47.323140 containerd[1457]: time="2024-12-13T09:13:47.323080381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqx24,Uid:2b144795-3f75-42d9-b9ee-14dc8bef1f52,Namespace:kube-system,Attempt:0,} returns sandbox id \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\"" Dec 13 09:13:47.324193 kubelet[2606]: E1213 09:13:47.324168 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:47.328495 containerd[1457]: time="2024-12-13T09:13:47.328244707Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 09:13:47.345241 containerd[1457]: time="2024-12-13T09:13:47.345152321Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83edce0d6fc58358410f8942d283bb98ff7dbe099c813b11d00241d1fbca1a23\"" Dec 13 09:13:47.347195 containerd[1457]: time="2024-12-13T09:13:47.347142069Z" level=info msg="StartContainer for \"83edce0d6fc58358410f8942d283bb98ff7dbe099c813b11d00241d1fbca1a23\"" Dec 13 09:13:47.383187 systemd[1]: Started cri-containerd-83edce0d6fc58358410f8942d283bb98ff7dbe099c813b11d00241d1fbca1a23.scope - libcontainer container 83edce0d6fc58358410f8942d283bb98ff7dbe099c813b11d00241d1fbca1a23. 
Dec 13 09:13:47.424167 containerd[1457]: time="2024-12-13T09:13:47.424107437Z" level=info msg="StartContainer for \"83edce0d6fc58358410f8942d283bb98ff7dbe099c813b11d00241d1fbca1a23\" returns successfully" Dec 13 09:13:47.441103 systemd[1]: cri-containerd-83edce0d6fc58358410f8942d283bb98ff7dbe099c813b11d00241d1fbca1a23.scope: Deactivated successfully. Dec 13 09:13:47.491015 containerd[1457]: time="2024-12-13T09:13:47.490623039Z" level=info msg="shim disconnected" id=83edce0d6fc58358410f8942d283bb98ff7dbe099c813b11d00241d1fbca1a23 namespace=k8s.io Dec 13 09:13:47.491015 containerd[1457]: time="2024-12-13T09:13:47.490710118Z" level=warning msg="cleaning up after shim disconnected" id=83edce0d6fc58358410f8942d283bb98ff7dbe099c813b11d00241d1fbca1a23 namespace=k8s.io Dec 13 09:13:47.491015 containerd[1457]: time="2024-12-13T09:13:47.490723460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:47.514514 containerd[1457]: time="2024-12-13T09:13:47.514317385Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:13:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:13:48.218271 kubelet[2606]: E1213 09:13:48.218131 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:48.224194 containerd[1457]: time="2024-12-13T09:13:48.221833033Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 09:13:48.254334 containerd[1457]: time="2024-12-13T09:13:48.253963409Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8eb9431886beb0e697054717b119c2018c71593e689a340c569fb5e415f1e56f\"" Dec 13 09:13:48.257285 containerd[1457]: time="2024-12-13T09:13:48.257178412Z" level=info msg="StartContainer for \"8eb9431886beb0e697054717b119c2018c71593e689a340c569fb5e415f1e56f\"" Dec 13 09:13:48.309540 systemd[1]: Started cri-containerd-8eb9431886beb0e697054717b119c2018c71593e689a340c569fb5e415f1e56f.scope - libcontainer container 8eb9431886beb0e697054717b119c2018c71593e689a340c569fb5e415f1e56f. Dec 13 09:13:48.348554 containerd[1457]: time="2024-12-13T09:13:48.348495068Z" level=info msg="StartContainer for \"8eb9431886beb0e697054717b119c2018c71593e689a340c569fb5e415f1e56f\" returns successfully" Dec 13 09:13:48.358853 systemd[1]: cri-containerd-8eb9431886beb0e697054717b119c2018c71593e689a340c569fb5e415f1e56f.scope: Deactivated successfully. 
Dec 13 09:13:48.388998 containerd[1457]: time="2024-12-13T09:13:48.388895497Z" level=info msg="shim disconnected" id=8eb9431886beb0e697054717b119c2018c71593e689a340c569fb5e415f1e56f namespace=k8s.io Dec 13 09:13:48.388998 containerd[1457]: time="2024-12-13T09:13:48.388990615Z" level=warning msg="cleaning up after shim disconnected" id=8eb9431886beb0e697054717b119c2018c71593e689a340c569fb5e415f1e56f namespace=k8s.io Dec 13 09:13:48.388998 containerd[1457]: time="2024-12-13T09:13:48.389003562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:48.405640 containerd[1457]: time="2024-12-13T09:13:48.404288418Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:13:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:13:49.014162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8eb9431886beb0e697054717b119c2018c71593e689a340c569fb5e415f1e56f-rootfs.mount: Deactivated successfully. Dec 13 09:13:49.223950 kubelet[2606]: E1213 09:13:49.223901 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:49.230204 containerd[1457]: time="2024-12-13T09:13:49.229549868Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 09:13:49.275254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197580975.mount: Deactivated successfully. Dec 13 09:13:49.276513 containerd[1457]: time="2024-12-13T09:13:49.275840297Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d\"" Dec 13 09:13:49.278098 containerd[1457]: time="2024-12-13T09:13:49.277366279Z" level=info msg="StartContainer for \"cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d\"" Dec 13 09:13:49.335245 systemd[1]: Started cri-containerd-cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d.scope - libcontainer container cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d. Dec 13 09:13:49.393006 containerd[1457]: time="2024-12-13T09:13:49.392921492Z" level=info msg="StartContainer for \"cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d\" returns successfully" Dec 13 09:13:49.416814 systemd[1]: cri-containerd-cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d.scope: Deactivated successfully. 
Dec 13 09:13:49.460476 containerd[1457]: time="2024-12-13T09:13:49.460388210Z" level=info msg="shim disconnected" id=cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d namespace=k8s.io Dec 13 09:13:49.460476 containerd[1457]: time="2024-12-13T09:13:49.460474506Z" level=warning msg="cleaning up after shim disconnected" id=cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d namespace=k8s.io Dec 13 09:13:49.460476 containerd[1457]: time="2024-12-13T09:13:49.460485028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:49.611305 containerd[1457]: time="2024-12-13T09:13:49.611063641Z" level=info msg="StopPodSandbox for \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\"" Dec 13 09:13:49.611305 containerd[1457]: time="2024-12-13T09:13:49.611200320Z" level=info msg="TearDown network for sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" successfully" Dec 13 09:13:49.611305 containerd[1457]: time="2024-12-13T09:13:49.611216749Z" level=info msg="StopPodSandbox for \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" returns successfully" Dec 13 09:13:49.611878 containerd[1457]: time="2024-12-13T09:13:49.611841546Z" level=info msg="RemovePodSandbox for \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\"" Dec 13 09:13:49.611878 containerd[1457]: time="2024-12-13T09:13:49.611877790Z" level=info msg="Forcibly stopping sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\"" Dec 13 09:13:49.611997 containerd[1457]: time="2024-12-13T09:13:49.611977809Z" level=info msg="TearDown network for sandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" successfully" Dec 13 09:13:49.615948 containerd[1457]: time="2024-12-13T09:13:49.615825909Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 09:13:49.615948 containerd[1457]: time="2024-12-13T09:13:49.615917329Z" level=info msg="RemovePodSandbox \"e6bdb10d07d5d3501c27e32b74e72cd1575c8ccf23977935064dcd4b1a8e206c\" returns successfully" Dec 13 09:13:49.616566 containerd[1457]: time="2024-12-13T09:13:49.616525506Z" level=info msg="StopPodSandbox for \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\"" Dec 13 09:13:49.616635 containerd[1457]: time="2024-12-13T09:13:49.616619342Z" level=info msg="TearDown network for sandbox \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\" successfully" Dec 13 09:13:49.616635 containerd[1457]: time="2024-12-13T09:13:49.616631198Z" level=info msg="StopPodSandbox for \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\" returns successfully" Dec 13 09:13:49.617008 containerd[1457]: time="2024-12-13T09:13:49.616980324Z" level=info msg="RemovePodSandbox for \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\"" Dec 13 09:13:49.617067 containerd[1457]: time="2024-12-13T09:13:49.617013477Z" level=info msg="Forcibly stopping sandbox \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\"" Dec 13 09:13:49.617260 containerd[1457]: time="2024-12-13T09:13:49.617080362Z" level=info msg="TearDown network for sandbox \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\" successfully" Dec 13 09:13:49.621503 containerd[1457]: time="2024-12-13T09:13:49.621408986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 09:13:49.621503 containerd[1457]: time="2024-12-13T09:13:49.621506669Z" level=info msg="RemovePodSandbox \"3b9d28258e4628773c49277c926b6e6af0d44e1794950b2b273051004de14718\" returns successfully" Dec 13 09:13:49.721860 kubelet[2606]: E1213 09:13:49.721799 2606 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 09:13:50.013966 systemd[1]: run-containerd-runc-k8s.io-cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d-runc.pgHp5T.mount: Deactivated successfully. Dec 13 09:13:50.014456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbcc4903f093e1817fbdad08f11a5a25310871ef9ea80871c3146092dc50fe9d-rootfs.mount: Deactivated successfully. Dec 13 09:13:50.230206 kubelet[2606]: E1213 09:13:50.229705 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:50.238043 containerd[1457]: time="2024-12-13T09:13:50.236170477Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 09:13:50.258593 containerd[1457]: time="2024-12-13T09:13:50.258522923Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0\"" Dec 13 09:13:50.258820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3761021116.mount: Deactivated successfully. 
Dec 13 09:13:50.262591 containerd[1457]: time="2024-12-13T09:13:50.261240445Z" level=info msg="StartContainer for \"afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0\"" Dec 13 09:13:50.326394 systemd[1]: Started cri-containerd-afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0.scope - libcontainer container afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0. Dec 13 09:13:50.363684 systemd[1]: cri-containerd-afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0.scope: Deactivated successfully. Dec 13 09:13:50.371916 containerd[1457]: time="2024-12-13T09:13:50.369026152Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b144795_3f75_42d9_b9ee_14dc8bef1f52.slice/cri-containerd-afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0.scope/memory.events\": no such file or directory" Dec 13 09:13:50.373259 kubelet[2606]: E1213 09:13:50.372845 2606 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b144795_3f75_42d9_b9ee_14dc8bef1f52.slice/cri-containerd-afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0.scope\": RecentStats: unable to find data in memory cache]" Dec 13 09:13:50.375990 containerd[1457]: time="2024-12-13T09:13:50.375878650Z" level=info msg="StartContainer for \"afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0\" returns successfully" Dec 13 09:13:50.407002 containerd[1457]: time="2024-12-13T09:13:50.406694174Z" level=info msg="shim disconnected" id=afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0 namespace=k8s.io Dec 13 09:13:50.407002 containerd[1457]: time="2024-12-13T09:13:50.406753902Z" level=warning msg="cleaning up after shim disconnected" id=afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0 namespace=k8s.io Dec 13 09:13:50.407002 containerd[1457]: time="2024-12-13T09:13:50.406762901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:13:51.014166 systemd[1]: run-containerd-runc-k8s.io-afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0-runc.GcMHS7.mount: Deactivated successfully. Dec 13 09:13:51.014342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afbdc0d8859592fd6ed3598d0538f29d8ca14e9fd9c77438aec940f50969b4c0-rootfs.mount: Deactivated successfully. 
Dec 13 09:13:51.234502 kubelet[2606]: E1213 09:13:51.234443 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:51.242159 containerd[1457]: time="2024-12-13T09:13:51.242042742Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 09:13:51.270731 containerd[1457]: time="2024-12-13T09:13:51.268384549Z" level=info msg="CreateContainer within sandbox \"f093c5ed8bcd03d227ce50a8769519902f7df8bea11815a387625564747b551a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"57a3dfc91445cc37d0ec6badbcc297a8e8e5c8288ecb845b326101c378fb2ef7\"" Dec 13 09:13:51.271230 containerd[1457]: time="2024-12-13T09:13:51.271180846Z" level=info msg="StartContainer for \"57a3dfc91445cc37d0ec6badbcc297a8e8e5c8288ecb845b326101c378fb2ef7\"" Dec 13 09:13:51.333253 systemd[1]: Started cri-containerd-57a3dfc91445cc37d0ec6badbcc297a8e8e5c8288ecb845b326101c378fb2ef7.scope - libcontainer container 57a3dfc91445cc37d0ec6badbcc297a8e8e5c8288ecb845b326101c378fb2ef7. Dec 13 09:13:51.382868 containerd[1457]: time="2024-12-13T09:13:51.382695351Z" level=info msg="StartContainer for \"57a3dfc91445cc37d0ec6badbcc297a8e8e5c8288ecb845b326101c378fb2ef7\" returns successfully" Dec 13 09:13:51.875055 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 09:13:52.086124 kubelet[2606]: I1213 09:13:52.085857 2606 setters.go:580] "Node became not ready" node="ci-4081.2.1-b-8823ebc6cf" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T09:13:52Z","lastTransitionTime":"2024-12-13T09:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 09:13:52.241977 kubelet[2606]: E1213 09:13:52.241776 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:53.245611 kubelet[2606]: E1213 09:13:53.245490 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:54.658612 systemd[1]: run-containerd-runc-k8s.io-57a3dfc91445cc37d0ec6badbcc297a8e8e5c8288ecb845b326101c378fb2ef7-runc.X5DI09.mount: Deactivated successfully. Dec 13 09:13:55.530391 systemd-networkd[1365]: lxc_health: Link UP Dec 13 09:13:55.541344 systemd-networkd[1365]: lxc_health: Gained carrier Dec 13 09:13:56.925781 systemd[1]: run-containerd-runc-k8s.io-57a3dfc91445cc37d0ec6badbcc297a8e8e5c8288ecb845b326101c378fb2ef7-runc.a11iWx.mount: Deactivated successfully. 
Dec 13 09:13:57.203991 kubelet[2606]: E1213 09:13:57.203068 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:57.234434 kubelet[2606]: I1213 09:13:57.231596 2606 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rqx24" podStartSLOduration=12.228689538 podStartE2EDuration="12.228689538s" podCreationTimestamp="2024-12-13 09:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:13:52.264002089 +0000 UTC m=+122.897477729" watchObservedRunningTime="2024-12-13 09:13:57.228689538 +0000 UTC m=+127.862165174" Dec 13 09:13:57.253172 systemd-networkd[1365]: lxc_health: Gained IPv6LL Dec 13 09:13:57.258269 kubelet[2606]: E1213 09:13:57.258021 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:13:58.267732 kubelet[2606]: E1213 09:13:58.266211 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 13 09:14:01.638967 systemd[1]: run-containerd-runc-k8s.io-57a3dfc91445cc37d0ec6badbcc297a8e8e5c8288ecb845b326101c378fb2ef7-runc.GiQEr9.mount: Deactivated successfully. Dec 13 09:14:01.801134 sshd[4405]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:01.823504 systemd[1]: sshd@29-146.190.159.183:22-147.75.109.163:35480.service: Deactivated successfully. Dec 13 09:14:01.827548 systemd[1]: session-30.scope: Deactivated successfully. Dec 13 09:14:01.835123 systemd-logind[1443]: Session 30 logged out. Waiting for processes to exit. Dec 13 09:14:01.839730 systemd-logind[1443]: Removed session 30. Dec 13 09:14:03.547708 kubelet[2606]: E1213 09:14:03.547208 2606 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"