Jan 17 12:20:47.927369 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:20:47.927410 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:20:47.927429 kernel: BIOS-provided physical RAM map: Jan 17 12:20:47.927440 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 12:20:47.927451 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 12:20:47.927462 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 12:20:47.927476 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jan 17 12:20:47.927488 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jan 17 12:20:47.927499 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 12:20:47.927514 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 12:20:47.927525 kernel: NX (Execute Disable) protection: active Jan 17 12:20:47.927537 kernel: APIC: Static calls initialized Jan 17 12:20:47.927548 kernel: SMBIOS 2.8 present. Jan 17 12:20:47.927561 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 17 12:20:47.927576 kernel: Hypervisor detected: KVM Jan 17 12:20:47.927593 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:20:47.927607 kernel: kvm-clock: using sched offset of 3102675628 cycles Jan 17 12:20:47.927621 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:20:47.927634 kernel: tsc: Detected 2494.146 MHz processor Jan 17 12:20:47.927648 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:20:47.927661 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:20:47.927675 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jan 17 12:20:47.927690 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:20:47.927703 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:20:47.927719 kernel: ACPI: Early table checksum verification disabled Jan 17 12:20:47.927732 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jan 17 12:20:47.927746 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:20:47.927760 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:20:47.927773 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:20:47.927786 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 17 12:20:47.927799 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:20:47.927812 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:20:47.927825 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:20:47.927842 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:20:47.927855 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 17 12:20:47.927869 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 17 12:20:47.927882 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 17 12:20:47.927896 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 17 12:20:47.927909 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 17 12:20:47.927922 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 17 12:20:47.927941 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 17 12:20:47.927958 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:20:47.927972 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:20:47.927987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 12:20:47.927999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 17 12:20:47.928014 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jan 17 12:20:47.928030 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jan 17 12:20:47.928048 kernel: Zone ranges: Jan 17 12:20:47.928100 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:20:47.928118 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jan 17 12:20:47.928134 kernel: Normal empty Jan 17 12:20:47.928150 kernel: Movable zone start for each node Jan 17 12:20:47.928166 kernel: Early memory node ranges Jan 17 12:20:47.928182 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:20:47.928198 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jan 17 12:20:47.928212 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jan 17 12:20:47.928231 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:20:47.928246 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:20:47.928261 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jan 17 12:20:47.928277 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:20:47.928292 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:20:47.928307 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:20:47.928322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:20:47.928337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:20:47.928352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:20:47.928369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:20:47.928382 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:20:47.928397 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:20:47.928412 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:20:47.928426 kernel: TSC deadline timer available Jan 17 12:20:47.928441 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:20:47.928454 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:20:47.928469 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 17 12:20:47.928484 kernel: Booting paravirtualized kernel on KVM Jan 17 12:20:47.928502 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:20:47.928517 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:20:47.928531 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 17 12:20:47.928545 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:20:47.928558 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:20:47.928573 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 17 12:20:47.928590 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:20:47.928606 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:20:47.928624 kernel: random: crng init done Jan 17 12:20:47.928639 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:20:47.928655 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:20:47.928671 kernel: Fallback order for Node 0: 0 Jan 17 12:20:47.928686 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jan 17 12:20:47.928701 kernel: Policy zone: DMA32 Jan 17 12:20:47.928716 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:20:47.928732 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125148K reserved, 0K cma-reserved) Jan 17 12:20:47.928747 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:20:47.928765 kernel: Kernel/User page tables isolation: enabled Jan 17 12:20:47.928779 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:20:47.928793 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:20:47.928807 kernel: Dynamic Preempt: voluntary Jan 17 12:20:47.928820 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:20:47.928836 kernel: rcu: RCU event tracing is enabled. Jan 17 12:20:47.928851 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:20:47.928866 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:20:47.928880 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:20:47.928899 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:20:47.928913 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:20:47.928928 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:20:47.928943 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:20:47.928958 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 17 12:20:47.928972 kernel: Console: colour VGA+ 80x25 Jan 17 12:20:47.928987 kernel: printk: console [tty0] enabled Jan 17 12:20:47.929002 kernel: printk: console [ttyS0] enabled Jan 17 12:20:47.929015 kernel: ACPI: Core revision 20230628 Jan 17 12:20:47.929030 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 12:20:47.929048 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:20:47.929092 kernel: x2apic enabled Jan 17 12:20:47.929108 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:20:47.929122 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:20:47.929137 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns Jan 17 12:20:47.929152 kernel: Calibrating delay loop (skipped) preset value.. 4988.29 BogoMIPS (lpj=2494146) Jan 17 12:20:47.929166 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 17 12:20:47.929181 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 17 12:20:47.929212 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:20:47.929226 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:20:47.929242 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:20:47.929260 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:20:47.929275 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 17 12:20:47.929290 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:20:47.929305 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:20:47.929319 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 12:20:47.929335 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:20:47.929353 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:20:47.929368 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:20:47.929383 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:20:47.929398 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:20:47.929412 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 12:20:47.929428 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:20:47.929443 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:20:47.929457 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:20:47.929477 kernel: landlock: Up and running. Jan 17 12:20:47.929492 kernel: SELinux: Initializing. Jan 17 12:20:47.929509 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:20:47.929526 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:20:47.929543 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 17 12:20:47.929559 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:20:47.929575 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:20:47.929591 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:20:47.929610 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 17 12:20:47.929625 kernel: signal: max sigframe size: 1776 Jan 17 12:20:47.929641 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:20:47.929655 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:20:47.929671 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:20:47.929686 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:20:47.929703 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:20:47.929718 kernel: .... node #0, CPUs: #1 Jan 17 12:20:47.929735 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:20:47.929751 kernel: smpboot: Max logical packages: 1 Jan 17 12:20:47.929772 kernel: smpboot: Total of 2 processors activated (9976.58 BogoMIPS) Jan 17 12:20:47.929786 kernel: devtmpfs: initialized Jan 17 12:20:47.929800 kernel: x86/mm: Memory block size: 128MB Jan 17 12:20:47.929815 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:20:47.929830 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:20:47.929845 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:20:47.929860 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:20:47.929873 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:20:47.929888 kernel: audit: type=2000 audit(1737116447.068:1): state=initialized audit_enabled=0 res=1 Jan 17 12:20:47.929906 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:20:47.929921 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:20:47.929936 kernel: cpuidle: using governor menu Jan 17 12:20:47.929950 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:20:47.929965 kernel: dca service started, version 1.12.1 Jan 17 12:20:47.929979 kernel: PCI: Using configuration type 1 for base access Jan 17 12:20:47.929995 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:20:47.930010 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:20:47.930026 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:20:47.930045 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:20:47.930077 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:20:47.930093 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:20:47.930109 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:20:47.930124 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:20:47.930139 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:20:47.930155 kernel: ACPI: Interpreter enabled Jan 17 12:20:47.930170 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:20:47.930185 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:20:47.930205 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:20:47.930221 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:20:47.930236 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 17 12:20:47.930251 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:20:47.930562 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:20:47.930736 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:20:47.930882 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:20:47.930908 kernel: acpiphp: Slot [3] registered Jan 17 12:20:47.930924 kernel: acpiphp: Slot [4] registered Jan 17 12:20:47.930937 kernel: acpiphp: Slot [5] registered Jan 17 12:20:47.930953 kernel: acpiphp: Slot [6] registered Jan 17 12:20:47.930968 kernel: acpiphp: Slot [7] registered Jan 17 12:20:47.930985 kernel: acpiphp: Slot [8] registered Jan 17 12:20:47.931000 kernel: acpiphp: Slot [9] registered Jan 17 12:20:47.931017 kernel: acpiphp: Slot [10] registered Jan 17 12:20:47.931032 kernel: acpiphp: Slot [11] registered Jan 17 12:20:47.931051 kernel: acpiphp: Slot [12] registered Jan 17 12:20:47.933132 kernel: acpiphp: Slot [13] registered Jan 17 12:20:47.933154 kernel: acpiphp: Slot [14] registered Jan 17 12:20:47.933168 kernel: acpiphp: Slot [15] registered Jan 17 12:20:47.933182 kernel: acpiphp: Slot [16] registered Jan 17 12:20:47.933197 kernel: acpiphp: Slot [17] registered Jan 17 12:20:47.933212 kernel: acpiphp: Slot [18] registered Jan 17 12:20:47.933226 kernel: acpiphp: Slot [19] registered Jan 17 12:20:47.933241 kernel: acpiphp: Slot [20] registered Jan 17 12:20:47.933256 kernel: acpiphp: Slot [21] registered Jan 17 12:20:47.933280 kernel: acpiphp: Slot [22] registered Jan 17 12:20:47.933293 kernel: acpiphp: Slot [23] registered Jan 17 12:20:47.933306 kernel: acpiphp: Slot [24] registered Jan 17 12:20:47.933320 kernel: acpiphp: Slot [25] registered Jan 17 12:20:47.933334 kernel: acpiphp: Slot [26] registered Jan 17 12:20:47.933348 kernel: acpiphp: Slot [27] registered Jan 17 12:20:47.933361 kernel: acpiphp: Slot [28] registered Jan 17 12:20:47.933375 kernel: acpiphp: Slot [29] registered Jan 17 12:20:47.933389 kernel: acpiphp: Slot [30] registered Jan 17 12:20:47.933407 kernel: acpiphp: Slot [31] registered Jan 17 12:20:47.933422 kernel: PCI host bridge to bus 0000:00 Jan 17 12:20:47.933654 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:20:47.933785 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 17 12:20:47.933912 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:20:47.934031 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 17 12:20:47.936278 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 17 12:20:47.936422 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:20:47.936634 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:20:47.936803 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 17 12:20:47.936954 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 17 12:20:47.937143 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 17 12:20:47.937285 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 17 12:20:47.937427 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 17 12:20:47.937563 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 17 12:20:47.937698 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 17 12:20:47.937848 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 17 12:20:47.937983 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 17 12:20:47.938154 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 12:20:47.938290 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 17 12:20:47.938430 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 17 12:20:47.938575 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 17 12:20:47.938718 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 17 12:20:47.938866 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 17 12:20:47.939008 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 17 12:20:47.941231 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 17 12:20:47.941345 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:20:47.941457 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:20:47.941553 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 17 12:20:47.941646 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 17 12:20:47.941739 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 17 12:20:47.941839 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:20:47.941934 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 17 12:20:47.942030 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 17 12:20:47.944194 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 17 12:20:47.944311 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 17 12:20:47.944406 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 17 12:20:47.944499 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 17 12:20:47.944592 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 17 12:20:47.944699 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 17 12:20:47.944800 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 17 12:20:47.944894 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 17 12:20:47.944985 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 17 12:20:47.945111 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 17 12:20:47.945207 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 17 12:20:47.945300 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 17 12:20:47.945392 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 17 12:20:47.945496 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 17 12:20:47.945595 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 17 12:20:47.945717 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 17 12:20:47.945729 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:20:47.945739 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:20:47.945748 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:20:47.945757 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:20:47.945770 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 12:20:47.945779 kernel: iommu: Default domain type: Translated Jan 17 12:20:47.945789 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:20:47.945798 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:20:47.945807 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:20:47.945816 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 12:20:47.945824 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jan 17 12:20:47.945922 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 17 12:20:47.948143 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 17 12:20:47.948353 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:20:47.948371 kernel: vgaarb: loaded Jan 17 12:20:47.948381 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 12:20:47.948391 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 12:20:47.948400 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:20:47.948409 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:20:47.948418 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:20:47.948434 kernel: pnp: PnP ACPI init Jan 17 12:20:47.948444 kernel: pnp: PnP ACPI: found 4 devices Jan 17 12:20:47.948462 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:20:47.948472 kernel: NET: Registered PF_INET protocol family Jan 17 12:20:47.948481 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:20:47.948490 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:20:47.948499 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:20:47.948508 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:20:47.948517 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:20:47.948526 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:20:47.948535 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:20:47.948547 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:20:47.948556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:20:47.948565 kernel: NET: Registered PF_XDP protocol family Jan 17 12:20:47.948683 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:20:47.948800 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 
12:20:47.948888 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:20:47.948974 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 17 12:20:47.949073 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 17 12:20:47.949199 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 17 12:20:47.949302 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:20:47.949316 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 17 12:20:47.949416 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 28063 usecs Jan 17 12:20:47.949429 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:20:47.949443 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:20:47.949456 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns Jan 17 12:20:47.949470 kernel: Initialise system trusted keyrings Jan 17 12:20:47.949489 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:20:47.949500 kernel: Key type asymmetric registered Jan 17 12:20:47.949508 kernel: Asymmetric key parser 'x509' registered Jan 17 12:20:47.949521 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:20:47.949537 kernel: io scheduler mq-deadline registered Jan 17 12:20:47.949548 kernel: io scheduler kyber registered Jan 17 12:20:47.949557 kernel: io scheduler bfq registered Jan 17 12:20:47.949566 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:20:47.949575 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 17 12:20:47.949588 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 12:20:47.949597 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 12:20:47.949606 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:20:47.949615 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:20:47.949624 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:20:47.949633 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:20:47.949641 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:20:47.949771 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 12:20:47.949790 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:20:47.949912 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 12:20:47.950003 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T12:20:47 UTC (1737116447) Jan 17 12:20:47.951153 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 17 12:20:47.951169 kernel: intel_pstate: CPU model not supported Jan 17 12:20:47.951179 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:20:47.951188 kernel: Segment Routing with IPv6 Jan 17 12:20:47.951212 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:20:47.951226 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:20:47.951247 kernel: Key type dns_resolver registered Jan 17 12:20:47.951258 kernel: IPI shorthand broadcast: enabled Jan 17 12:20:47.951267 kernel: sched_clock: Marking stable (857006263, 104907086)->(1047410465, -85497116) Jan 17 12:20:47.951276 kernel: registered taskstats version 1 Jan 17 12:20:47.951285 kernel: Loading compiled-in X.509 certificates Jan 17 12:20:47.951294 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:20:47.951303 kernel: Key type .fscrypt registered 
Jan 17 12:20:47.951312 kernel: Key type fscrypt-provisioning registered Jan 17 12:20:47.951322 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 12:20:47.951333 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:20:47.951342 kernel: ima: No architecture policies found Jan 17 12:20:47.951351 kernel: clk: Disabling unused clocks Jan 17 12:20:47.951360 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:20:47.951370 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:20:47.951399 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:20:47.951412 kernel: Run /init as init process Jan 17 12:20:47.951421 kernel: with arguments: Jan 17 12:20:47.951431 kernel: /init Jan 17 12:20:47.951443 kernel: with environment: Jan 17 12:20:47.951452 kernel: HOME=/ Jan 17 12:20:47.951461 kernel: TERM=linux Jan 17 12:20:47.951470 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:20:47.951482 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:20:47.951494 systemd[1]: Detected virtualization kvm. Jan 17 12:20:47.951504 systemd[1]: Detected architecture x86-64. Jan 17 12:20:47.951516 systemd[1]: Running in initrd. Jan 17 12:20:47.951526 systemd[1]: No hostname configured, using default hostname. Jan 17 12:20:47.951535 systemd[1]: Hostname set to <localhost>. Jan 17 12:20:47.951545 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:20:47.951555 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:20:47.951565 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:20:47.951575 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:20:47.951586 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:20:47.951598 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:20:47.951608 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:20:47.951618 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:20:47.951629 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:20:47.951640 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:20:47.951649 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:20:47.951659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:20:47.951672 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:20:47.951682 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:20:47.951692 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:20:47.951704 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:20:47.951714 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:20:47.951724 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 17 12:20:47.951736 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:20:47.951746 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:20:47.951756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:20:47.951765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:20:47.951775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:20:47.951785 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:20:47.951795 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:20:47.951805 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:20:47.951817 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:20:47.951827 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:20:47.951837 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:20:47.951847 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:20:47.951860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:20:47.951869 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:20:47.951879 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:20:47.951890 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:20:47.951904 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:20:47.951943 systemd-journald[182]: Collecting audit messages is disabled. Jan 17 12:20:47.951971 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:20:47.951981 kernel: Bridge firewalling registered Jan 17 12:20:47.951991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:20:47.952003 systemd-journald[182]: Journal started Jan 17 12:20:47.952024 systemd-journald[182]: Runtime Journal (/run/log/journal/b600a829577c4d6180480c7983f0f0bb) is 4.9M, max 39.3M, 34.4M free. Jan 17 12:20:47.923175 systemd-modules-load[183]: Inserted module 'overlay' Jan 17 12:20:47.950613 systemd-modules-load[183]: Inserted module 'br_netfilter' Jan 17 12:20:47.990365 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:20:47.991293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:47.992204 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:20:48.000331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:20:48.013405 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:20:48.018349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:20:48.023411 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:20:48.040495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:20:48.049794 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:20:48.056406 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 17 12:20:48.066615 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:20:48.070038 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:20:48.078495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:20:48.082344 dracut-cmdline[216]: dracut-dracut-053 Jan 17 12:20:48.088085 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:20:48.109667 systemd-resolved[223]: Positive Trust Anchors: Jan 17 12:20:48.110443 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:20:48.110484 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:20:48.115651 systemd-resolved[223]: Defaulting to hostname 'linux'. Jan 17 12:20:48.117712 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:20:48.118235 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:20:48.187144 kernel: SCSI subsystem initialized Jan 17 12:20:48.198205 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:20:48.211178 kernel: iscsi: registered transport (tcp) Jan 17 12:20:48.238158 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:20:48.238280 kernel: QLogic iSCSI HBA Driver Jan 17 12:20:48.306171 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:20:48.321464 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:20:48.352183 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:20:48.352292 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:20:48.353558 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:20:48.404103 kernel: raid6: avx2x4 gen() 17957 MB/s Jan 17 12:20:48.420087 kernel: raid6: avx2x2 gen() 17994 MB/s Jan 17 12:20:48.437416 kernel: raid6: avx2x1 gen() 13394 MB/s Jan 17 12:20:48.437458 kernel: raid6: using algorithm avx2x2 gen() 17994 MB/s Jan 17 12:20:48.455375 kernel: raid6: .... xor() 20395 MB/s, rmw enabled Jan 17 12:20:48.455452 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:20:48.476101 kernel: xor: automatically using best checksumming function avx Jan 17 12:20:48.643111 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:20:48.656847 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:20:48.663302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 17 12:20:48.687266 systemd-udevd[403]: Using default interface naming scheme 'v255'. Jan 17 12:20:48.692933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:20:48.702320 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:20:48.717496 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 17 12:20:48.752997 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:20:48.758291 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:20:48.831275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:20:48.838415 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:20:48.859721 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:20:48.861898 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:20:48.862636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:20:48.863017 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:20:48.869552 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:20:48.890763 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:20:48.918135 kernel: scsi host0: Virtio SCSI HBA Jan 17 12:20:48.920081 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 17 12:20:48.992224 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:20:48.992246 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 17 12:20:48.993279 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:20:48.993307 kernel: GPT:9289727 != 125829119 Jan 17 12:20:48.993320 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:20:48.993332 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:20:48.993345 kernel: GPT:9289727 != 125829119 Jan 17 12:20:48.993356 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:20:48.993368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:20:48.993380 kernel: AES CTR mode by8 optimization enabled Jan 17 12:20:48.993391 kernel: libata version 3.00 loaded. Jan 17 12:20:48.993407 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 17 12:20:48.995400 kernel: scsi host1: ata_piix Jan 17 12:20:48.995592 kernel: scsi host2: ata_piix Jan 17 12:20:48.995710 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 17 12:20:48.995724 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 17 12:20:48.995736 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 17 12:20:49.014632 kernel: ACPI: bus type USB registered Jan 17 12:20:49.014664 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Jan 17 12:20:49.014785 kernel: usbcore: registered new interface driver usbfs Jan 17 12:20:49.014798 kernel: usbcore: registered new interface driver hub Jan 17 12:20:49.014810 kernel: usbcore: registered new device driver usb Jan 17 12:20:48.989812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:20:48.990002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:20:48.990818 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 12:20:48.991322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:20:48.991473 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:48.991893 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:20:49.002707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:20:49.066015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:49.078509 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:20:49.097383 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:20:49.181103 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461) Jan 17 12:20:49.186311 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:20:49.187902 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (448) Jan 17 12:20:49.193389 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:20:49.202368 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:20:49.207647 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:20:49.213978 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 17 12:20:49.214192 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 17 12:20:49.214311 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 17 12:20:49.214425 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 17 12:20:49.214544 kernel: hub 1-0:1.0: USB hub found Jan 17 12:20:49.214677 kernel: hub 1-0:1.0: 2 ports detected Jan 17 12:20:49.211435 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:20:49.218328 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:20:49.226538 disk-uuid[549]: Primary Header is updated. Jan 17 12:20:49.226538 disk-uuid[549]: Secondary Entries is updated. Jan 17 12:20:49.226538 disk-uuid[549]: Secondary Header is updated. Jan 17 12:20:49.233105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:20:49.238130 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:20:50.240144 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:20:50.240573 disk-uuid[550]: The operation has completed successfully. Jan 17 12:20:50.281534 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:20:50.281644 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:20:50.291306 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:20:50.297176 sh[561]: Success Jan 17 12:20:50.313083 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:20:50.362483 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:20:50.378216 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:20:50.380255 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 12:20:50.403092 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:20:50.403155 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:50.403169 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:20:50.403191 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:20:50.404444 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:20:50.411641 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:20:50.412701 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:20:50.417283 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:20:50.419271 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:20:50.435707 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:50.435767 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:50.435781 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:20:50.439084 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:20:50.453849 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:20:50.454392 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:50.461632 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:20:50.469426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:20:50.597599 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:20:50.604395 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:20:50.611483 ignition[663]: Ignition 2.19.0 Jan 17 12:20:50.611663 ignition[663]: Stage: fetch-offline Jan 17 12:20:50.615443 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:20:50.611703 ignition[663]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:50.611740 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:20:50.611858 ignition[663]: parsed url from cmdline: "" Jan 17 12:20:50.611861 ignition[663]: no config URL provided Jan 17 12:20:50.611867 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:20:50.611875 ignition[663]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:20:50.611882 ignition[663]: failed to fetch config: resource requires networking Jan 17 12:20:50.612082 ignition[663]: Ignition finished successfully Jan 17 12:20:50.636994 systemd-networkd[751]: lo: Link UP Jan 17 12:20:50.637010 systemd-networkd[751]: lo: Gained carrier Jan 17 12:20:50.639255 systemd-networkd[751]: Enumeration completed Jan 17 12:20:50.639419 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:20:50.639703 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 12:20:50.639707 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Jan 17 12:20:50.640532 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:20:50.640537 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:20:50.641635 systemd-networkd[751]: eth0: Link UP Jan 17 12:20:50.641639 systemd-networkd[751]: eth0: Gained carrier Jan 17 12:20:50.641647 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 17 12:20:50.641987 systemd[1]: Reached target network.target - Network. Jan 17 12:20:50.647430 systemd-networkd[751]: eth1: Link UP Jan 17 12:20:50.647434 systemd-networkd[751]: eth1: Gained carrier Jan 17 12:20:50.647449 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:20:50.650396 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:20:50.657167 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.2/20 acquired from 169.254.169.253 Jan 17 12:20:50.663236 systemd-networkd[751]: eth0: DHCPv4 address 137.184.44.6/20, gateway 137.184.32.1 acquired from 169.254.169.253 Jan 17 12:20:50.674281 ignition[754]: Ignition 2.19.0 Jan 17 12:20:50.674303 ignition[754]: Stage: fetch Jan 17 12:20:50.674659 ignition[754]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:50.674678 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:20:50.674843 ignition[754]: parsed url from cmdline: "" Jan 17 12:20:50.674851 ignition[754]: no config URL provided Jan 17 12:20:50.674861 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:20:50.674876 ignition[754]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:20:50.674911 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 17 12:20:50.689290 ignition[754]: GET result: OK Jan 17 12:20:50.690080 ignition[754]: parsing config with SHA512: cf53160f787ce07ddf2d32414f42cf215373830fdf1a4c4b8f86d5704cb6629ebc2599770bd3f7c87b77b9622e8f4add6c9f54cb800f05ca07722ff00d0b4a53 Jan 17 12:20:50.697872 unknown[754]: fetched base config from "system" Jan 17 12:20:50.697886 unknown[754]: fetched base config from "system" Jan 17 12:20:50.698825 ignition[754]: fetch: fetch complete Jan 17 12:20:50.697895 unknown[754]: fetched user config from "digitalocean" Jan 17 12:20:50.698837 ignition[754]: fetch: fetch passed Jan 17 12:20:50.698934 ignition[754]: Ignition finished successfully Jan 17 12:20:50.702174 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:20:50.705317 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:20:50.732235 ignition[761]: Ignition 2.19.0 Jan 17 12:20:50.732245 ignition[761]: Stage: kargs Jan 17 12:20:50.732448 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:50.732460 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:20:50.734718 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:20:50.733368 ignition[761]: kargs: kargs passed Jan 17 12:20:50.733421 ignition[761]: Ignition finished successfully Jan 17 12:20:50.740282 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 17 12:20:50.769987 ignition[767]: Ignition 2.19.0 Jan 17 12:20:50.770006 ignition[767]: Stage: disks Jan 17 12:20:50.770297 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:50.770312 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:20:50.771682 ignition[767]: disks: disks passed Jan 17 12:20:50.773075 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:20:50.771765 ignition[767]: Ignition finished successfully Jan 17 12:20:50.778390 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:20:50.779500 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:20:50.780121 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:20:50.780958 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:20:50.781740 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:20:50.792371 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:20:50.808463 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:20:50.812035 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:20:50.816220 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:20:50.929343 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:20:50.930019 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:20:50.931124 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:20:50.937217 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:20:50.940208 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:20:50.943437 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 17 12:20:50.953099 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (783) Jan 17 12:20:50.953377 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:20:50.960578 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:50.960617 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:50.960638 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:20:50.960990 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:20:50.961045 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:20:50.965464 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:20:50.970266 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:20:50.981392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:20:50.982971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:20:51.046362 coreos-metadata[785]: Jan 17 12:20:51.046 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:20:51.051940 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:20:51.055698 coreos-metadata[786]: Jan 17 12:20:51.055 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:20:51.057480 coreos-metadata[785]: Jan 17 12:20:51.056 INFO Fetch successful Jan 17 12:20:51.059962 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:20:51.062585 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 17 12:20:51.063726 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 17 12:20:51.066420 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:20:51.067395 coreos-metadata[786]: Jan 17 12:20:51.067 INFO Fetch successful Jan 17 12:20:51.074738 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:20:51.075631 coreos-metadata[786]: Jan 17 12:20:51.075 INFO wrote hostname ci-4081.3.0-d-600f54fd9d to /sysroot/etc/hostname Jan 17 12:20:51.077070 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:20:51.173293 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:20:51.178201 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:20:51.179856 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:20:51.194110 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:51.210875 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:20:51.217919 ignition[905]: INFO : Ignition 2.19.0 Jan 17 12:20:51.217919 ignition[905]: INFO : Stage: mount Jan 17 12:20:51.219144 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:51.219144 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:20:51.219144 ignition[905]: INFO : mount: mount passed Jan 17 12:20:51.219144 ignition[905]: INFO : Ignition finished successfully Jan 17 12:20:51.220007 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:20:51.226288 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:20:51.400391 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:20:51.405412 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:20:51.417458 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (916) Jan 17 12:20:51.417518 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:20:51.417533 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:20:51.418529 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:20:51.424107 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:20:51.425989 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
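The two coreos-metadata instances above fetch the droplet's metadata document; one of them persists the hostname into the new root before switch-root, as logged. A rough Python equivalent of that step (the URL and the /sysroot path are taken from the log; the top-level "hostname" field and the error handling are assumptions):

import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"
HOSTNAME_PATH = "/sysroot/etc/hostname"  # path shown in the log above

def write_hostname() -> None:
    # Fetch the droplet metadata document (assumes a top-level "hostname" field).
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        metadata = json.load(resp)
    hostname = metadata["hostname"]
    with open(HOSTNAME_PATH, "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to {HOSTNAME_PATH}")

if __name__ == "__main__":
    write_hostname()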
Jan 17 12:20:51.448556 ignition[932]: INFO : Ignition 2.19.0 Jan 17 12:20:51.448556 ignition[932]: INFO : Stage: files Jan 17 12:20:51.449785 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:51.449785 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:20:51.450930 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:20:51.451476 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:20:51.451476 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:20:51.455086 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:20:51.455762 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:20:51.456470 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:20:51.455797 unknown[932]: wrote ssh authorized keys file for user: core Jan 17 12:20:51.458018 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:20:51.458018 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:20:51.487485 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:20:51.553928 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:20:51.553928 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:20:51.553928 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 12:20:51.737452 systemd-networkd[751]: eth0: Gained IPv6LL Jan 17 12:20:51.904528 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:20:51.961886 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:20:51.962646 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:20:51.962646 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:20:51.962646 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:20:51.962646 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:20:51.962646 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:20:51.962646 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:20:51.962646 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:20:51.962646 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:20:51.967224 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:20:51.967224 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:20:51.967224 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:20:51.967224 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:20:51.967224 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:20:51.967224 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:20:52.185306 systemd-networkd[751]: eth1: Gained IPv6LL Jan 17 12:20:52.259556 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:20:52.543140 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:20:52.543140 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 12:20:52.544863 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:20:52.544863 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:20:52.544863 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 12:20:52.544863 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:20:52.544863 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:20:52.544863 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:20:52.544863 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:20:52.544863 ignition[932]: INFO : files: files passed Jan 17 12:20:52.544863 ignition[932]: INFO : Ignition finished successfully Jan 17 12:20:52.545767 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:20:52.552249 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:20:52.555265 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:20:52.559561 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:20:52.559678 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 17 12:20:52.571470 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:20:52.571470 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:20:52.573843 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:20:52.575600 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:20:52.576371 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:20:52.581262 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:20:52.611816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:20:52.611937 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:20:52.612943 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:20:52.613560 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:20:52.614302 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:20:52.616207 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:20:52.634405 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:20:52.641390 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:20:52.651263 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:20:52.652385 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:20:52.652877 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:20:52.653649 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:20:52.653770 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:20:52.654682 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:20:52.655251 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:20:52.655942 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:20:52.656695 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:20:52.657405 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:20:52.658042 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:20:52.658779 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:20:52.659546 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:20:52.660242 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:20:52.660984 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:20:52.661734 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:20:52.661861 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:20:52.662630 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:20:52.663139 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:20:52.663811 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:20:52.665579 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 17 12:20:52.666560 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:20:52.666728 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:20:52.668093 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:20:52.668267 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:20:52.669264 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:20:52.669366 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:20:52.670104 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:20:52.670253 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:20:52.678318 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:20:52.681445 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:20:52.682292 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:20:52.682906 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:20:52.683893 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:20:52.683998 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:20:52.690338 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:20:52.690903 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:20:52.699393 ignition[986]: INFO : Ignition 2.19.0 Jan 17 12:20:52.699393 ignition[986]: INFO : Stage: umount Jan 17 12:20:52.699393 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:20:52.699393 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 17 12:20:52.699393 ignition[986]: INFO : umount: umount passed Jan 17 12:20:52.699393 ignition[986]: INFO : Ignition finished successfully Jan 17 12:20:52.700300 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:20:52.700399 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:20:52.706697 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:20:52.706802 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:20:52.707420 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:20:52.707470 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:20:52.707817 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:20:52.707852 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:20:52.708219 systemd[1]: Stopped target network.target - Network. Jan 17 12:20:52.708539 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:20:52.708604 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:20:52.708975 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:20:52.709301 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:20:52.713294 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:20:52.713865 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:20:52.714528 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:20:52.716468 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 17 12:20:52.716520 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:20:52.716872 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:20:52.716904 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:20:52.717257 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:20:52.717300 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:20:52.717648 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:20:52.717682 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:20:52.719339 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:20:52.726251 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:20:52.728911 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:20:52.729276 systemd-networkd[751]: eth0: DHCPv6 lease lost Jan 17 12:20:52.729928 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:20:52.730120 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:20:52.731974 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:20:52.732207 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:20:52.733288 systemd-networkd[751]: eth1: DHCPv6 lease lost Jan 17 12:20:52.734973 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:20:52.735121 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:20:52.736131 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:20:52.736236 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:20:52.740369 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:20:52.740436 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:20:52.745265 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:20:52.745666 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:20:52.745723 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:20:52.747389 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:20:52.747440 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:20:52.749422 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:20:52.749492 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:20:52.750040 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:20:52.750102 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:20:52.751092 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:20:52.761783 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:20:52.761907 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:20:52.763880 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:20:52.764035 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:20:52.765911 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:20:52.765968 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:20:52.766852 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 17 12:20:52.766901 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:20:52.767633 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:20:52.767682 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:20:52.768707 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:20:52.768751 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:20:52.769368 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:20:52.769409 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:20:52.777358 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:20:52.779406 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:20:52.779480 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:20:52.779900 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:20:52.779938 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:20:52.780339 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:20:52.780377 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:20:52.780755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:20:52.780791 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:52.784273 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:20:52.784821 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:20:52.785669 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:20:52.792438 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:20:52.800129 systemd[1]: Switching root. Jan 17 12:20:52.854708 systemd-journald[182]: Journal stopped Jan 17 12:20:53.998369 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Jan 17 12:20:53.998434 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:20:53.998449 kernel: SELinux: policy capability open_perms=1 Jan 17 12:20:53.998467 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:20:53.998483 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:20:53.998495 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:20:53.998507 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:20:53.998519 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:20:53.998530 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:20:53.998546 kernel: audit: type=1403 audit(1737116453.082:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:20:53.998559 systemd[1]: Successfully loaded SELinux policy in 37.907ms. Jan 17 12:20:53.998588 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.201ms. Jan 17 12:20:53.998613 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:20:53.998631 systemd[1]: Detected virtualization kvm. 
Jan 17 12:20:53.998644 systemd[1]: Detected architecture x86-64. Jan 17 12:20:53.998656 systemd[1]: Detected first boot. Jan 17 12:20:53.998668 systemd[1]: Hostname set to <ci-4081.3.0-d-600f54fd9d>. Jan 17 12:20:53.998681 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:20:53.998694 zram_generator::config[1029]: No configuration found. Jan 17 12:20:53.998708 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:20:53.998724 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:20:53.998736 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:20:53.998748 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:20:53.998761 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:20:53.998774 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:20:53.998790 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:20:53.998802 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:20:53.998814 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:20:53.998826 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:20:53.998842 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:20:53.998854 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:20:53.998866 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:20:53.998879 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:20:53.998892 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:20:53.998904 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:20:53.998917 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:20:53.998929 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:20:53.998945 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:20:53.998963 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:20:53.998975 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:20:53.998988 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:20:53.999001 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:20:53.999013 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:20:53.999029 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:20:53.999045 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:20:54.004054 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:20:54.004251 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:20:54.004266 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:20:54.004280 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:20:54.004293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
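On this first boot systemd initializes the machine ID from the hypervisor-provided VM UUID rather than generating a random one, as logged above. A small sketch of that derivation, assuming the UUID is read from DMI; this mirrors the idea, not systemd's exact code path:

import pathlib

DMI_UUID = pathlib.Path("/sys/class/dmi/id/product_uuid")  # assumed source of the VM UUID

def machine_id_from_vm_uuid() -> str:
    """Turn a DMI product UUID into the 32-character lower-case machine-id form."""
    uuid = DMI_UUID.read_text().strip()
    return uuid.replace("-", "").lower()

if __name__ == "__main__":
    mid = machine_id_from_vm_uuid()
    print(mid)            # the value that would land in /etc/machine-id
    assert len(mid) == 32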
Jan 17 12:20:54.004306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:20:54.004318 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:20:54.004331 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:20:54.004350 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:20:54.004362 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:20:54.004385 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:20:54.004398 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:20:54.004415 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:20:54.004427 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:20:54.004440 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:20:54.004454 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:20:54.004469 systemd[1]: Reached target machines.target - Containers. Jan 17 12:20:54.004482 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:20:54.004495 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:20:54.004509 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:20:54.004521 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:20:54.004533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:20:54.004546 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:20:54.004558 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:20:54.004571 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:20:54.004586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:20:54.004600 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:20:54.004613 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:20:54.004625 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:20:54.004641 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:20:54.004660 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:20:54.004675 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:20:54.004695 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:20:54.004713 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:20:54.004735 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:20:54.004752 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:20:54.004772 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:20:54.004790 systemd[1]: Stopped verity-setup.service. 
Jan 17 12:20:54.004810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:20:54.004823 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:20:54.004835 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:20:54.004847 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:20:54.004864 kernel: loop: module loaded Jan 17 12:20:54.004877 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:20:54.004889 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:20:54.004901 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:20:54.004914 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:20:54.004929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:20:54.004944 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:20:54.004958 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:20:54.004971 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:20:54.004983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:20:54.004996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:20:54.005011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:20:54.005024 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:20:54.005036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:20:54.005048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:20:54.005071 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:20:54.005084 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:20:54.005097 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:20:54.005109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:20:54.005126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:20:54.005139 kernel: ACPI: bus type drm_connector registered Jan 17 12:20:54.005151 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:20:54.005164 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:20:54.005176 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:20:54.005188 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:20:54.005201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:20:54.005250 systemd-journald[1109]: Collecting audit messages is disabled. Jan 17 12:20:54.005278 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:20:54.005290 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:20:54.005304 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 17 12:20:54.005317 kernel: fuse: init (API version 7.39) Jan 17 12:20:54.005332 systemd-journald[1109]: Journal started Jan 17 12:20:54.005357 systemd-journald[1109]: Runtime Journal (/run/log/journal/b600a829577c4d6180480c7983f0f0bb) is 4.9M, max 39.3M, 34.4M free. Jan 17 12:20:54.011254 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:20:54.011319 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:20:54.011337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:20:53.645200 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:20:54.018228 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:20:54.018275 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:20:53.665574 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:20:53.665991 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:20:54.021121 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:20:54.030313 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:20:54.038128 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:20:54.033557 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:20:54.033721 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:20:54.034376 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:20:54.043970 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:20:54.071535 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:20:54.083251 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:20:54.087853 systemd-tmpfiles[1126]: ACLs are not supported, ignoring. Jan 17 12:20:54.089896 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:20:54.093202 systemd-tmpfiles[1126]: ACLs are not supported, ignoring. Jan 17 12:20:54.093384 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:20:54.094565 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:20:54.103364 systemd-journald[1109]: Time spent on flushing to /var/log/journal/b600a829577c4d6180480c7983f0f0bb is 33.885ms for 997 entries. Jan 17 12:20:54.103364 systemd-journald[1109]: System Journal (/var/log/journal/b600a829577c4d6180480c7983f0f0bb) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:20:54.150406 systemd-journald[1109]: Received client request to flush runtime journal. Jan 17 12:20:54.102709 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:20:54.112305 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:20:54.153425 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:20:54.164293 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:20:54.171520 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
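The journald lines above report the runtime and persistent journal sizes and note that flushing 997 entries to /var/log/journal took 33.885 ms. A small sanity check of those figures, using only the numbers shown in the log:

flush_ms = 33.885          # total flush time reported by systemd-journald
entries = 997              # entries flushed to /var/log/journal

per_entry_us = flush_ms * 1000 / entries
print(f"~{per_entry_us:.0f} us per journal entry")   # roughly 34 us each

runtime_used_mib, runtime_max_mib = 4.9, 39.3        # Runtime Journal (/run/log/journal)
system_used_mib, system_max_mib = 8.0, 195.6         # System Journal (/var/log/journal)
print(f"runtime journal: {runtime_used_mib / runtime_max_mib:.0%} of its cap")
print(f"system journal : {system_used_mib / system_max_mib:.0%} of its cap")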
Jan 17 12:20:54.181593 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:20:54.188142 kernel: loop1: detected capacity change from 0 to 211296 Jan 17 12:20:54.203818 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:20:54.204455 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:20:54.245408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:20:54.252088 kernel: loop2: detected capacity change from 0 to 8 Jan 17 12:20:54.256299 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:20:54.257037 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:20:54.266053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:20:54.280891 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 12:20:54.298271 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 17 12:20:54.298292 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 17 12:20:54.311514 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:20:54.321297 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:20:54.353099 kernel: loop4: detected capacity change from 0 to 142488 Jan 17 12:20:54.379303 kernel: loop5: detected capacity change from 0 to 211296 Jan 17 12:20:54.420093 kernel: loop6: detected capacity change from 0 to 8 Jan 17 12:20:54.423086 kernel: loop7: detected capacity change from 0 to 140768 Jan 17 12:20:54.443025 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 17 12:20:54.449151 (sd-merge)[1177]: Merged extensions into '/usr'. Jan 17 12:20:54.458991 systemd[1]: Reloading requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:20:54.459008 systemd[1]: Reloading... Jan 17 12:20:54.588103 zram_generator::config[1212]: No configuration found. Jan 17 12:20:54.751190 ldconfig[1129]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:20:54.772361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:20:54.821519 systemd[1]: Reloading finished in 361 ms. Jan 17 12:20:54.842473 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:20:54.846719 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:20:54.857356 systemd[1]: Starting ensure-sysext.service... Jan 17 12:20:54.861897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:20:54.879261 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:20:54.879283 systemd[1]: Reloading... Jan 17 12:20:54.921250 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:20:54.921868 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
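The sd-merge lines above show systemd-sysext overlaying the four extension images (including the kubernetes.raw symlinked into /etc/extensions by the files stage) onto /usr, after which systemd reloads its unit set and picks up the newly provided services. A hedged sketch of how one might inspect that state on a running system, listing the standard sysext search directories and calling the systemd-sysext CLI; output format is the tool's and not guaranteed stable:

import pathlib
import subprocess

def list_extension_images() -> list[str]:
    """List candidate sysext images in the directories systemd-sysext searches."""
    found = []
    for directory in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        found += [str(p) for p in pathlib.Path(directory).glob("*")]
    return found

if __name__ == "__main__":
    print("images:", list_extension_images())
    # Show what is currently merged into /usr and /opt.
    subprocess.run(["systemd-sysext", "status"], check=False)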
Jan 17 12:20:54.923883 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:20:54.924706 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 17 12:20:54.925338 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 17 12:20:54.930880 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:20:54.931171 systemd-tmpfiles[1247]: Skipping /boot Jan 17 12:20:54.947126 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:20:54.947280 systemd-tmpfiles[1247]: Skipping /boot Jan 17 12:20:54.992093 zram_generator::config[1274]: No configuration found. Jan 17 12:20:55.122097 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:20:55.178858 systemd[1]: Reloading finished in 299 ms. Jan 17 12:20:55.203518 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:20:55.209618 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:20:55.219274 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:20:55.221582 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:20:55.226248 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:20:55.236571 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:20:55.240302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:20:55.245258 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:20:55.258358 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:20:55.260356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:20:55.260560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:20:55.271586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:20:55.279343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:20:55.281315 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:20:55.281814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:20:55.281937 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:20:55.284035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:20:55.285284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:20:55.285444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:20:55.285531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 12:20:55.291577 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:20:55.291772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:20:55.299331 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:20:55.299887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:20:55.299970 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:20:55.301163 systemd[1]: Finished ensure-sysext.service. Jan 17 12:20:55.301985 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:20:55.316480 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:20:55.319423 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:20:55.320589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:20:55.321375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:20:55.326041 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:20:55.337558 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:20:55.338304 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:20:55.349495 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:20:55.350138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:20:55.351595 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:20:55.361627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:20:55.362197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:20:55.362922 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:20:55.365377 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:20:55.365582 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:20:55.366939 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 17 12:20:55.367573 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:20:55.368375 augenrules[1355]: No rules Jan 17 12:20:55.370565 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:20:55.387659 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:20:55.402309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:20:55.407233 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:20:55.482715 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:20:55.483488 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 17 12:20:55.528315 systemd-networkd[1372]: lo: Link UP Jan 17 12:20:55.528324 systemd-networkd[1372]: lo: Gained carrier Jan 17 12:20:55.529158 systemd-networkd[1372]: Enumeration completed Jan 17 12:20:55.529274 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:20:55.534571 systemd-resolved[1324]: Positive Trust Anchors: Jan 17 12:20:55.535657 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:20:55.535700 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:20:55.540423 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:20:55.542577 systemd-resolved[1324]: Using system hostname 'ci-4081.3.0-d-600f54fd9d'. Jan 17 12:20:55.546731 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:20:55.548972 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:20:55.549018 systemd[1]: Reached target network.target - Network. Jan 17 12:20:55.549380 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:20:55.571212 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 17 12:20:55.573122 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:20:55.573273 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:20:55.576919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:20:55.580669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:20:55.593247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:20:55.593734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:20:55.593777 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:20:55.593793 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:20:55.598317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:20:55.598518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:20:55.604033 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:20:55.614091 kernel: ISO 9660 Extensions: RRIP_1991A Jan 17 12:20:55.618901 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. 
Jan 17 12:20:55.623074 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1370) Jan 17 12:20:55.624866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:20:55.625338 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:20:55.628946 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:20:55.630006 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:20:55.633699 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:20:55.653390 systemd-networkd[1372]: eth1: Configuring with /run/systemd/network/10-da:8f:22:64:11:b3.network. Jan 17 12:20:55.654745 systemd-networkd[1372]: eth1: Link UP Jan 17 12:20:55.654753 systemd-networkd[1372]: eth1: Gained carrier Jan 17 12:20:55.658977 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:20:55.687551 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 12:20:55.699487 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:20:55.692940 systemd-networkd[1372]: eth0: Configuring with /run/systemd/network/10-c2:2e:49:49:f9:18.network. Jan 17 12:20:55.693561 systemd-networkd[1372]: eth0: Link UP Jan 17 12:20:55.693565 systemd-networkd[1372]: eth0: Gained carrier Jan 17 12:20:55.693727 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:20:55.696438 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:20:55.696694 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:20:55.710049 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:20:55.714921 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:20:55.718251 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:20:55.727087 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:20:55.745118 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 12:20:55.745200 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 12:20:55.746875 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:20:55.754314 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:20:55.757140 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 12:20:55.757218 kernel: [drm] features: -context_init Jan 17 12:20:55.763106 kernel: [drm] number of scanouts: 1 Jan 17 12:20:55.763211 kernel: [drm] number of cap sets: 0 Jan 17 12:20:55.769111 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 12:20:55.780093 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:20:55.783681 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 12:20:55.783755 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:20:55.796096 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 12:20:55.798394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:20:55.809461 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
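Unlike the initrd phase earlier in this log, where zz-default.network matched interfaces by a "potentially unpredictable interface name", the real root configures eth0 and eth1 through runtime units in /run/systemd/network named after each NIC's MAC address. A sketch of generating such a MAC-pinned unit; the [Match]/[Network] keys are standard systemd.network syntax, but the exact contents of the droplet's generated files are an assumption:

import pathlib

RUNTIME_NETWORK_DIR = pathlib.Path("/run/systemd/network")

def write_mac_pinned_unit(mac: str, dhcp: str = "ipv4") -> pathlib.Path:
    """Write a 10-<mac>.network unit that matches the interface by MAC, not by name."""
    unit = RUNTIME_NETWORK_DIR / f"10-{mac}.network"
    unit.write_text(
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        f"DHCP={dhcp}\n"
    )
    return unit

if __name__ == "__main__":
    # MAC addresses taken from the unit file names in the log above.
    for mac in ("c2:2e:49:49:f9:18", "da:8f:22:64:11:b3"):
        print("would write", write_mac_pinned_unit(mac))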
Jan 17 12:20:55.809697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:55.818458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:20:55.829522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:20:55.829762 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:55.837286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:20:55.911088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:20:55.967642 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:20:55.998827 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:20:56.004397 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:20:56.030146 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:20:56.063701 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:20:56.065184 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:20:56.065337 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:20:56.065570 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:20:56.065712 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:20:56.066264 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:20:56.067816 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:20:56.067929 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:20:56.067998 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:20:56.068028 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:20:56.068101 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:20:56.069815 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:20:56.071835 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:20:56.079217 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:20:56.082150 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:20:56.083115 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:20:56.083743 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:20:56.085802 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:20:56.086599 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:20:56.087248 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:20:56.089208 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:20:56.093748 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:20:56.094350 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:20:56.098281 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 17 12:20:56.109216 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:20:56.114378 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:20:56.115091 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:20:56.122094 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:20:56.133199 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:20:56.140746 jq[1433]: false Jan 17 12:20:56.138981 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:20:56.145274 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:20:56.154712 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:20:56.160312 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:20:56.160921 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:20:56.164038 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:20:56.171215 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:20:56.173575 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:20:56.184645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:20:56.184827 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:20:56.192463 coreos-metadata[1431]: Jan 17 12:20:56.192 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:20:56.195950 dbus-daemon[1432]: [system] SELinux support is enabled Jan 17 12:20:56.196169 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:20:56.203571 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:20:56.203603 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:20:56.204301 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:20:56.204376 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 17 12:20:56.204398 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:20:56.205364 coreos-metadata[1431]: Jan 17 12:20:56.205 INFO Fetch successful Jan 17 12:20:56.226522 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:20:56.267092 jq[1443]: true Jan 17 12:20:56.273500 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:20:56.273678 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 12:20:56.289135 extend-filesystems[1434]: Found loop4 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found loop5 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found loop6 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found loop7 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found vda Jan 17 12:20:56.289135 extend-filesystems[1434]: Found vda1 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found vda2 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found vda3 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found usr Jan 17 12:20:56.289135 extend-filesystems[1434]: Found vda4 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found vda6 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found vda7 Jan 17 12:20:56.289135 extend-filesystems[1434]: Found vda9 Jan 17 12:20:56.289135 extend-filesystems[1434]: Checking size of /dev/vda9 Jan 17 12:20:56.331375 update_engine[1442]: I20250117 12:20:56.318899 1442 main.cc:92] Flatcar Update Engine starting Jan 17 12:20:56.300956 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:20:56.339202 tar[1445]: linux-amd64/helm Jan 17 12:20:56.312527 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:20:56.338233 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:20:56.339805 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:20:56.355703 update_engine[1442]: I20250117 12:20:56.355345 1442 update_check_scheduler.cc:74] Next update check in 10m41s Jan 17 12:20:56.348220 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:20:56.362265 jq[1469]: true Jan 17 12:20:56.362580 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:20:56.373238 extend-filesystems[1434]: Resized partition /dev/vda9 Jan 17 12:20:56.387235 extend-filesystems[1479]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:20:56.395871 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 17 12:20:56.432511 systemd-logind[1441]: New seat seat0. Jan 17 12:20:56.449792 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:20:56.449819 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:20:56.451331 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:20:56.460347 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1369) Jan 17 12:20:56.563574 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 12:20:56.604327 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:20:56.611211 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:20:56.611211 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 12:20:56.611211 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 12:20:56.623005 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Jan 17 12:20:56.623005 extend-filesystems[1434]: Found vdb Jan 17 12:20:56.614534 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:20:56.616252 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 17 12:20:56.641191 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:20:56.647442 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:20:56.668104 systemd[1]: Starting sshkeys.service... Jan 17 12:20:56.687905 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:20:56.700158 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:20:56.753707 coreos-metadata[1507]: Jan 17 12:20:56.753 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 17 12:20:56.770131 coreos-metadata[1507]: Jan 17 12:20:56.768 INFO Fetch successful Jan 17 12:20:56.780595 unknown[1507]: wrote ssh authorized keys file for user: core Jan 17 12:20:56.811359 containerd[1453]: time="2025-01-17T12:20:56.811263200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:20:56.821496 update-ssh-keys[1512]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:20:56.822782 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:20:56.828040 systemd[1]: Finished sshkeys.service. Jan 17 12:20:56.871589 containerd[1453]: time="2025-01-17T12:20:56.871320487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:20:56.876086 containerd[1453]: time="2025-01-17T12:20:56.876027221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:20:56.876560 containerd[1453]: time="2025-01-17T12:20:56.876217283Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:20:56.876560 containerd[1453]: time="2025-01-17T12:20:56.876244120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:20:56.876560 containerd[1453]: time="2025-01-17T12:20:56.876400678Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:20:56.876560 containerd[1453]: time="2025-01-17T12:20:56.876418281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:20:56.876560 containerd[1453]: time="2025-01-17T12:20:56.876486324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:20:56.876560 containerd[1453]: time="2025-01-17T12:20:56.876500488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:20:56.876952 containerd[1453]: time="2025-01-17T12:20:56.876930535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:20:56.877083 containerd[1453]: time="2025-01-17T12:20:56.877000031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:20:56.877083 containerd[1453]: time="2025-01-17T12:20:56.877019493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:20:56.877083 containerd[1453]: time="2025-01-17T12:20:56.877029283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:20:56.877561 containerd[1453]: time="2025-01-17T12:20:56.877223780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:20:56.877561 containerd[1453]: time="2025-01-17T12:20:56.877518666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:20:56.877796 containerd[1453]: time="2025-01-17T12:20:56.877778135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:20:56.877845 containerd[1453]: time="2025-01-17T12:20:56.877835984Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:20:56.877969 containerd[1453]: time="2025-01-17T12:20:56.877955808Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:20:56.878099 containerd[1453]: time="2025-01-17T12:20:56.878051383Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:20:56.885790 containerd[1453]: time="2025-01-17T12:20:56.885735629Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:20:56.886357 containerd[1453]: time="2025-01-17T12:20:56.885999183Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:20:56.886357 containerd[1453]: time="2025-01-17T12:20:56.886037856Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:20:56.886357 containerd[1453]: time="2025-01-17T12:20:56.886109604Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:20:56.886357 containerd[1453]: time="2025-01-17T12:20:56.886127603Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:20:56.886357 containerd[1453]: time="2025-01-17T12:20:56.886293250Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:20:56.887029 containerd[1453]: time="2025-01-17T12:20:56.887004563Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887312882Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887339051Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887371850Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887387663Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887402436Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887419768Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887454785Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887475543Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887494590Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887511886Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887532705Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887555249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887578868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.887878 containerd[1453]: time="2025-01-17T12:20:56.887603709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887619608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887630986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887643190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887654742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887666387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887677859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887695404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887712079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887723299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887735572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887749755Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887771234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887783689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.888283 containerd[1453]: time="2025-01-17T12:20:56.887812285Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:20:56.890072 containerd[1453]: time="2025-01-17T12:20:56.888804134Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:20:56.890072 containerd[1453]: time="2025-01-17T12:20:56.888855986Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:20:56.890072 containerd[1453]: time="2025-01-17T12:20:56.888876547Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:20:56.890072 containerd[1453]: time="2025-01-17T12:20:56.888896792Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:20:56.890072 containerd[1453]: time="2025-01-17T12:20:56.888912322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:20:56.890072 containerd[1453]: time="2025-01-17T12:20:56.888936473Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:20:56.890072 containerd[1453]: time="2025-01-17T12:20:56.888957724Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:20:56.890072 containerd[1453]: time="2025-01-17T12:20:56.888968246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:20:56.890282 containerd[1453]: time="2025-01-17T12:20:56.889399620Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:20:56.890282 containerd[1453]: time="2025-01-17T12:20:56.889476303Z" level=info msg="Connect containerd service" Jan 17 12:20:56.890282 containerd[1453]: time="2025-01-17T12:20:56.889518875Z" level=info msg="using legacy CRI server" Jan 17 12:20:56.890282 containerd[1453]: time="2025-01-17T12:20:56.889525899Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:20:56.890282 containerd[1453]: time="2025-01-17T12:20:56.889644544Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:20:56.892380 containerd[1453]: time="2025-01-17T12:20:56.891050376Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:20:56.892380 
containerd[1453]: time="2025-01-17T12:20:56.891279575Z" level=info msg="Start subscribing containerd event" Jan 17 12:20:56.892380 containerd[1453]: time="2025-01-17T12:20:56.891364331Z" level=info msg="Start recovering state" Jan 17 12:20:56.892380 containerd[1453]: time="2025-01-17T12:20:56.891435323Z" level=info msg="Start event monitor" Jan 17 12:20:56.892380 containerd[1453]: time="2025-01-17T12:20:56.891454502Z" level=info msg="Start snapshots syncer" Jan 17 12:20:56.892380 containerd[1453]: time="2025-01-17T12:20:56.891463665Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:20:56.892380 containerd[1453]: time="2025-01-17T12:20:56.891470330Z" level=info msg="Start streaming server" Jan 17 12:20:56.892798 containerd[1453]: time="2025-01-17T12:20:56.892780190Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:20:56.892938 containerd[1453]: time="2025-01-17T12:20:56.892898165Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:20:56.893168 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:20:56.895616 containerd[1453]: time="2025-01-17T12:20:56.895589095Z" level=info msg="containerd successfully booted in 0.086487s" Jan 17 12:20:56.922139 systemd-networkd[1372]: eth1: Gained IPv6LL Jan 17 12:20:56.922838 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:20:56.924457 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:20:56.928677 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:20:56.938863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:20:56.948709 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:20:57.019124 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:20:57.103644 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:20:57.151208 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:20:57.165437 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:20:57.184921 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:20:57.185134 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:20:57.193409 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:20:57.219430 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:20:57.228478 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:20:57.240467 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:20:57.241934 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:20:57.413280 tar[1445]: linux-amd64/LICENSE Jan 17 12:20:57.413280 tar[1445]: linux-amd64/README.md Jan 17 12:20:57.424485 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:20:57.690294 systemd-networkd[1372]: eth0: Gained IPv6LL Jan 17 12:20:57.691636 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:20:57.982823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:20:57.986907 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 17 12:20:57.988810 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:20:57.991814 systemd[1]: Startup finished in 987ms (kernel) + 5.391s (initrd) + 4.946s (userspace) = 11.325s. Jan 17 12:20:58.725993 kubelet[1554]: E0117 12:20:58.725894 1554 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:20:58.729449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:20:58.729667 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:20:58.730169 systemd[1]: kubelet.service: Consumed 1.212s CPU time. Jan 17 12:20:59.857943 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:20:59.863788 systemd[1]: Started sshd@0-137.184.44.6:22-139.178.68.195:59506.service - OpenSSH per-connection server daemon (139.178.68.195:59506). Jan 17 12:20:59.926095 sshd[1567]: Accepted publickey for core from 139.178.68.195 port 59506 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:20:59.929227 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:59.939610 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:20:59.946491 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:20:59.951685 systemd-logind[1441]: New session 1 of user core. Jan 17 12:20:59.963349 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:20:59.969497 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:20:59.976543 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:21:00.094840 systemd[1571]: Queued start job for default target default.target. Jan 17 12:21:00.102934 systemd[1571]: Created slice app.slice - User Application Slice. Jan 17 12:21:00.102975 systemd[1571]: Reached target paths.target - Paths. Jan 17 12:21:00.102990 systemd[1571]: Reached target timers.target - Timers. Jan 17 12:21:00.104692 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:21:00.120050 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:21:00.120234 systemd[1571]: Reached target sockets.target - Sockets. Jan 17 12:21:00.120252 systemd[1571]: Reached target basic.target - Basic System. Jan 17 12:21:00.120301 systemd[1571]: Reached target default.target - Main User Target. Jan 17 12:21:00.120350 systemd[1571]: Startup finished in 135ms. Jan 17 12:21:00.120722 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:21:00.122471 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:21:00.191482 systemd[1]: Started sshd@1-137.184.44.6:22-139.178.68.195:59512.service - OpenSSH per-connection server daemon (139.178.68.195:59512). 
Jan 17 12:21:00.248878 sshd[1582]: Accepted publickey for core from 139.178.68.195 port 59512 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:00.251538 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:00.257919 systemd-logind[1441]: New session 2 of user core. Jan 17 12:21:00.263464 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:21:00.325466 sshd[1582]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:00.341405 systemd[1]: sshd@1-137.184.44.6:22-139.178.68.195:59512.service: Deactivated successfully. Jan 17 12:21:00.343334 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:21:00.345365 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:21:00.349529 systemd[1]: Started sshd@2-137.184.44.6:22-139.178.68.195:59522.service - OpenSSH per-connection server daemon (139.178.68.195:59522). Jan 17 12:21:00.351526 systemd-logind[1441]: Removed session 2. Jan 17 12:21:00.408747 sshd[1589]: Accepted publickey for core from 139.178.68.195 port 59522 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:00.411281 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:00.416606 systemd-logind[1441]: New session 3 of user core. Jan 17 12:21:00.426399 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:21:00.485902 sshd[1589]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:00.501048 systemd[1]: sshd@2-137.184.44.6:22-139.178.68.195:59522.service: Deactivated successfully. Jan 17 12:21:00.502863 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:21:00.503617 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:21:00.509730 systemd[1]: Started sshd@3-137.184.44.6:22-139.178.68.195:59534.service - OpenSSH per-connection server daemon (139.178.68.195:59534). Jan 17 12:21:00.511870 systemd-logind[1441]: Removed session 3. Jan 17 12:21:00.557697 sshd[1596]: Accepted publickey for core from 139.178.68.195 port 59534 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:00.560198 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:00.569284 systemd-logind[1441]: New session 4 of user core. Jan 17 12:21:00.575565 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:21:00.641279 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:00.654696 systemd[1]: sshd@3-137.184.44.6:22-139.178.68.195:59534.service: Deactivated successfully. Jan 17 12:21:00.657377 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:21:00.659519 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:21:00.664537 systemd[1]: Started sshd@4-137.184.44.6:22-139.178.68.195:59544.service - OpenSSH per-connection server daemon (139.178.68.195:59544). Jan 17 12:21:00.667430 systemd-logind[1441]: Removed session 4. Jan 17 12:21:00.732171 sshd[1603]: Accepted publickey for core from 139.178.68.195 port 59544 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:00.734130 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:00.740544 systemd-logind[1441]: New session 5 of user core. Jan 17 12:21:00.751578 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 12:21:00.828736 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:21:00.829677 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:00.849645 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:00.854789 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:00.879813 systemd[1]: sshd@4-137.184.44.6:22-139.178.68.195:59544.service: Deactivated successfully. Jan 17 12:21:00.882087 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:21:00.884631 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:21:00.891762 systemd[1]: Started sshd@5-137.184.44.6:22-139.178.68.195:59556.service - OpenSSH per-connection server daemon (139.178.68.195:59556). Jan 17 12:21:00.894667 systemd-logind[1441]: Removed session 5. Jan 17 12:21:00.956420 sshd[1611]: Accepted publickey for core from 139.178.68.195 port 59556 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:00.958311 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:00.968759 systemd-logind[1441]: New session 6 of user core. Jan 17 12:21:00.975572 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:21:01.043017 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:21:01.044097 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:01.049631 sudo[1615]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:01.059231 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:21:01.060251 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:01.088624 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:01.094025 auditctl[1618]: No rules Jan 17 12:21:01.094733 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:21:01.095106 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:01.116700 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:21:01.158228 augenrules[1636]: No rules Jan 17 12:21:01.159904 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:21:01.161967 sudo[1614]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:01.167620 sshd[1611]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:01.183484 systemd[1]: sshd@5-137.184.44.6:22-139.178.68.195:59556.service: Deactivated successfully. Jan 17 12:21:01.186760 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:21:01.188208 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:21:01.197661 systemd[1]: Started sshd@6-137.184.44.6:22-139.178.68.195:59566.service - OpenSSH per-connection server daemon (139.178.68.195:59566). Jan 17 12:21:01.200304 systemd-logind[1441]: Removed session 6. Jan 17 12:21:01.278715 sshd[1644]: Accepted publickey for core from 139.178.68.195 port 59566 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:21:01.281000 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:01.289095 systemd-logind[1441]: New session 7 of user core. 
Jan 17 12:21:01.295771 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:21:01.361326 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:21:01.361798 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:21:02.118558 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:21:02.119875 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:21:02.826043 dockerd[1663]: time="2025-01-17T12:21:02.825955017Z" level=info msg="Starting up" Jan 17 12:21:03.039951 dockerd[1663]: time="2025-01-17T12:21:03.039574942Z" level=info msg="Loading containers: start." Jan 17 12:21:03.203549 kernel: Initializing XFRM netlink socket Jan 17 12:21:03.240276 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:21:03.243563 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:21:03.253632 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:21:03.317560 systemd-networkd[1372]: docker0: Link UP Jan 17 12:21:03.318247 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Jan 17 12:21:03.353505 dockerd[1663]: time="2025-01-17T12:21:03.353440068Z" level=info msg="Loading containers: done." Jan 17 12:21:03.378093 dockerd[1663]: time="2025-01-17T12:21:03.377826881Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:21:03.378093 dockerd[1663]: time="2025-01-17T12:21:03.378014336Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:21:03.378383 dockerd[1663]: time="2025-01-17T12:21:03.378188699Z" level=info msg="Daemon has completed initialization" Jan 17 12:21:03.429553 dockerd[1663]: time="2025-01-17T12:21:03.429426702Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:21:03.429686 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:21:04.838674 containerd[1453]: time="2025-01-17T12:21:04.838617559Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:21:05.496655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492157659.mount: Deactivated successfully. 
Jan 17 12:21:07.122711 containerd[1453]: time="2025-01-17T12:21:07.122646873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:07.124686 containerd[1453]: time="2025-01-17T12:21:07.124621837Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 17 12:21:07.125658 containerd[1453]: time="2025-01-17T12:21:07.125597589Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:07.128781 containerd[1453]: time="2025-01-17T12:21:07.128734028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:07.130472 containerd[1453]: time="2025-01-17T12:21:07.130257740Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 2.29158111s" Jan 17 12:21:07.130472 containerd[1453]: time="2025-01-17T12:21:07.130301307Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:21:07.163202 containerd[1453]: time="2025-01-17T12:21:07.162807399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:21:08.850752 containerd[1453]: time="2025-01-17T12:21:08.850671634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:08.852026 containerd[1453]: time="2025-01-17T12:21:08.851964285Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 17 12:21:08.853175 containerd[1453]: time="2025-01-17T12:21:08.853118351Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:08.857096 containerd[1453]: time="2025-01-17T12:21:08.856960217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:08.858715 containerd[1453]: time="2025-01-17T12:21:08.858558283Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 1.69569585s" Jan 17 12:21:08.858715 containerd[1453]: time="2025-01-17T12:21:08.858612352Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 
12:21:08.892099 containerd[1453]: time="2025-01-17T12:21:08.891863537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:21:08.979996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:21:08.987474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:09.126835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:09.136701 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:21:09.214698 kubelet[1886]: E0117 12:21:09.214090 1886 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:21:09.220298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:21:09.220512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:21:09.964902 containerd[1453]: time="2025-01-17T12:21:09.964778278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:09.966499 containerd[1453]: time="2025-01-17T12:21:09.966436623Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 17 12:21:09.966937 containerd[1453]: time="2025-01-17T12:21:09.966872148Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:09.972953 containerd[1453]: time="2025-01-17T12:21:09.972840839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:09.974920 containerd[1453]: time="2025-01-17T12:21:09.974578727Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.082672235s" Jan 17 12:21:09.974920 containerd[1453]: time="2025-01-17T12:21:09.974636477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:21:10.004624 containerd[1453]: time="2025-01-17T12:21:10.004563679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:21:11.038506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088444913.mount: Deactivated successfully. 
Jan 17 12:21:11.423469 containerd[1453]: time="2025-01-17T12:21:11.423323724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:11.424484 containerd[1453]: time="2025-01-17T12:21:11.424413709Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:21:11.425262 containerd[1453]: time="2025-01-17T12:21:11.425214908Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:11.426979 containerd[1453]: time="2025-01-17T12:21:11.426942619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:11.428082 containerd[1453]: time="2025-01-17T12:21:11.427766760Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.423073744s" Jan 17 12:21:11.428082 containerd[1453]: time="2025-01-17T12:21:11.427798244Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:21:11.459666 containerd[1453]: time="2025-01-17T12:21:11.459445891Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:21:11.461020 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 17 12:21:11.930255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171020606.mount: Deactivated successfully. 
Jan 17 12:21:12.802551 containerd[1453]: time="2025-01-17T12:21:12.802490349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:12.804687 containerd[1453]: time="2025-01-17T12:21:12.804141892Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:21:12.806931 containerd[1453]: time="2025-01-17T12:21:12.806728006Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:12.808208 containerd[1453]: time="2025-01-17T12:21:12.808184041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:12.810166 containerd[1453]: time="2025-01-17T12:21:12.809215073Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.349731265s" Jan 17 12:21:12.810166 containerd[1453]: time="2025-01-17T12:21:12.809715496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:21:12.843374 containerd[1453]: time="2025-01-17T12:21:12.843332224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:21:13.327409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113375328.mount: Deactivated successfully. 
Jan 17 12:21:13.371256 containerd[1453]: time="2025-01-17T12:21:13.370232205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:13.372098 containerd[1453]: time="2025-01-17T12:21:13.371922498Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:21:13.373094 containerd[1453]: time="2025-01-17T12:21:13.372999293Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:13.379716 containerd[1453]: time="2025-01-17T12:21:13.379642486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:13.381750 containerd[1453]: time="2025-01-17T12:21:13.381093917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 537.535456ms" Jan 17 12:21:13.381750 containerd[1453]: time="2025-01-17T12:21:13.381146918Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:21:13.416881 containerd[1453]: time="2025-01-17T12:21:13.416833753Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:21:13.925607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454424275.mount: Deactivated successfully. Jan 17 12:21:14.521329 systemd-resolved[1324]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 17 12:21:15.725019 containerd[1453]: time="2025-01-17T12:21:15.724940327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:15.726539 containerd[1453]: time="2025-01-17T12:21:15.726484430Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 17 12:21:15.727274 containerd[1453]: time="2025-01-17T12:21:15.727246238Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:15.732261 containerd[1453]: time="2025-01-17T12:21:15.731165010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:15.732261 containerd[1453]: time="2025-01-17T12:21:15.732103096Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.315231691s" Jan 17 12:21:15.732261 containerd[1453]: time="2025-01-17T12:21:15.732137912Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:21:19.470814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:21:19.480339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:19.565967 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:21:19.566109 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:21:19.566450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:19.574493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:19.599621 systemd[1]: Reloading requested from client PID 2086 ('systemctl') (unit session-7.scope)... Jan 17 12:21:19.599639 systemd[1]: Reloading... Jan 17 12:21:19.710112 zram_generator::config[2125]: No configuration found. Jan 17 12:21:19.854658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:19.931048 systemd[1]: Reloading finished in 330 ms. Jan 17 12:21:19.982665 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:21:19.982776 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:21:19.983156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:19.988531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:20.150798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:20.169786 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:21:20.227538 kubelet[2177]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:21:20.227538 kubelet[2177]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:21:20.227538 kubelet[2177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:21:20.229782 kubelet[2177]: I0117 12:21:20.229696 2177 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:21:20.579889 kubelet[2177]: I0117 12:21:20.579733 2177 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:21:20.579889 kubelet[2177]: I0117 12:21:20.579790 2177 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:21:20.580350 kubelet[2177]: I0117 12:21:20.580223 2177 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:21:20.607974 kubelet[2177]: I0117 12:21:20.607598 2177 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:21:20.608439 kubelet[2177]: E0117 12:21:20.608404 2177 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.44.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:20.628372 kubelet[2177]: I0117 12:21:20.628297 2177 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:21:20.629711 kubelet[2177]: I0117 12:21:20.629667 2177 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:21:20.630729 kubelet[2177]: I0117 12:21:20.630691 2177 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:21:20.630845 kubelet[2177]: I0117 12:21:20.630743 2177 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:21:20.630845 kubelet[2177]: I0117 12:21:20.630756 2177 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:21:20.630917 kubelet[2177]: I0117 12:21:20.630897 2177 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:21:20.631114 kubelet[2177]: I0117 12:21:20.631098 2177 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:21:20.631160 kubelet[2177]: I0117 12:21:20.631119 2177 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:21:20.631220 kubelet[2177]: I0117 12:21:20.631208 2177 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:21:20.631249 kubelet[2177]: I0117 12:21:20.631228 2177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:21:20.634075 kubelet[2177]: W0117 12:21:20.633999 2177 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://137.184.44.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-600f54fd9d&limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:20.634903 kubelet[2177]: E0117 12:21:20.634393 2177 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.44.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-600f54fd9d&limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:20.634903 kubelet[2177]: I0117 12:21:20.634530 2177 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:21:20.638453 kubelet[2177]: W0117 12:21:20.638342 2177 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://137.184.44.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:20.638453 kubelet[2177]: E0117 12:21:20.638392 2177 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.44.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:20.638614 kubelet[2177]: I0117 12:21:20.638579 2177 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:21:20.638693 kubelet[2177]: W0117 12:21:20.638659 2177 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:21:20.639945 kubelet[2177]: I0117 12:21:20.639804 2177 server.go:1256] "Started kubelet" Jan 17 12:21:20.641648 kubelet[2177]: I0117 12:21:20.641261 2177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:21:20.646640 kubelet[2177]: E0117 12:21:20.645691 2177 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.44.6:6443/api/v1/namespaces/default/events\": dial tcp 137.184.44.6:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-d-600f54fd9d.181b7a3c4a029e66 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-d-600f54fd9d,UID:ci-4081.3.0-d-600f54fd9d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-d-600f54fd9d,},FirstTimestamp:2025-01-17 12:21:20.639770214 +0000 UTC m=+0.465134942,LastTimestamp:2025-01-17 12:21:20.639770214 +0000 UTC m=+0.465134942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-d-600f54fd9d,}" Jan 17 12:21:20.648765 kubelet[2177]: I0117 12:21:20.648739 2177 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:21:20.651885 kubelet[2177]: I0117 12:21:20.649775 2177 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:21:20.651885 kubelet[2177]: I0117 12:21:20.650866 2177 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:21:20.651885 kubelet[2177]: I0117 12:21:20.651251 2177 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:21:20.652073 kubelet[2177]: I0117 12:21:20.652049 2177 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:21:20.654510 kubelet[2177]: I0117 12:21:20.654475 2177 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:21:20.654627 kubelet[2177]: I0117 12:21:20.654571 2177 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:21:20.654734 kubelet[2177]: E0117 12:21:20.654718 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.44.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-600f54fd9d?timeout=10s\": dial tcp 137.184.44.6:6443: connect: connection refused" interval="200ms" Jan 17 12:21:20.655693 kubelet[2177]: W0117 12:21:20.655628 2177 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://137.184.44.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:20.655693 kubelet[2177]: E0117 12:21:20.655700 2177 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.44.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:20.657332 kubelet[2177]: I0117 12:21:20.657305 2177 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:21:20.659829 kubelet[2177]: E0117 12:21:20.659797 2177 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:21:20.660753 kubelet[2177]: I0117 12:21:20.660715 2177 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:21:20.660875 kubelet[2177]: I0117 12:21:20.660868 2177 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:21:20.668000 kubelet[2177]: I0117 12:21:20.667164 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:21:20.669224 kubelet[2177]: I0117 12:21:20.668584 2177 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:21:20.669224 kubelet[2177]: I0117 12:21:20.668616 2177 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:21:20.669224 kubelet[2177]: I0117 12:21:20.668637 2177 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:21:20.669224 kubelet[2177]: E0117 12:21:20.668698 2177 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:21:20.685279 kubelet[2177]: W0117 12:21:20.684712 2177 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://137.184.44.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:20.685279 kubelet[2177]: E0117 12:21:20.684811 2177 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.44.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:20.688893 kubelet[2177]: I0117 12:21:20.688589 2177 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:21:20.688893 kubelet[2177]: I0117 12:21:20.688624 2177 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:21:20.688893 kubelet[2177]: I0117 12:21:20.688645 2177 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:21:20.692630 kubelet[2177]: I0117 12:21:20.692490 2177 policy_none.go:49] "None policy: Start" Jan 17 12:21:20.693592 kubelet[2177]: I0117 12:21:20.693558 2177 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:21:20.693699 kubelet[2177]: I0117 12:21:20.693635 2177 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:21:20.702565 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jan 17 12:21:20.713859 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:21:20.718706 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:21:20.734450 kubelet[2177]: I0117 12:21:20.733670 2177 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:21:20.734450 kubelet[2177]: I0117 12:21:20.734004 2177 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:21:20.736619 kubelet[2177]: E0117 12:21:20.736589 2177 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-d-600f54fd9d\" not found" Jan 17 12:21:20.753727 kubelet[2177]: I0117 12:21:20.753701 2177 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.754334 kubelet[2177]: E0117 12:21:20.754314 2177 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.44.6:6443/api/v1/nodes\": dial tcp 137.184.44.6:6443: connect: connection refused" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.769843 kubelet[2177]: I0117 12:21:20.769784 2177 topology_manager.go:215] "Topology Admit Handler" podUID="7d0503c03d7c50089066b7f730aaee51" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.772172 kubelet[2177]: I0117 12:21:20.771470 2177 topology_manager.go:215] "Topology Admit Handler" podUID="e8c6d37e76d2c5a5e612b9280f556c65" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.772962 kubelet[2177]: I0117 12:21:20.772938 2177 topology_manager.go:215] "Topology Admit Handler" podUID="7dfb321f3392a89540b1ffc0b746adb4" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.781031 systemd[1]: Created slice kubepods-burstable-pod7d0503c03d7c50089066b7f730aaee51.slice - libcontainer container kubepods-burstable-pod7d0503c03d7c50089066b7f730aaee51.slice. Jan 17 12:21:20.800413 systemd[1]: Created slice kubepods-burstable-pode8c6d37e76d2c5a5e612b9280f556c65.slice - libcontainer container kubepods-burstable-pode8c6d37e76d2c5a5e612b9280f556c65.slice. Jan 17 12:21:20.814419 systemd[1]: Created slice kubepods-burstable-pod7dfb321f3392a89540b1ffc0b746adb4.slice - libcontainer container kubepods-burstable-pod7dfb321f3392a89540b1ffc0b746adb4.slice. 
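The entries above show systemd creating the kubepods.slice hierarchy and then one per-pod slice for each static control-plane pod, with the pod UID embedded in the slice name; the later kubepods-besteffort-pod790b2c8a_7172_4089_aeee_f75a275d4d9d.slice entry shows that dashes in the UID are turned into underscores. A minimal sketch, assuming only that naming pattern as it appears in this log (the helper name is hypothetical, not the kubelet's own code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds a slice name of the shape seen in this log,
// e.g. kubepods-burstable-pod7d0503c03d7c50089066b7f730aaee51.slice.
// Dashes in the pod UID become underscores, as in the besteffort slice
// created for pod 790b2c8a-7172-4089-aeee-f75a275d4d9d later in the log.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UIDs copied from the log entries above.
	fmt.Println(podSliceName("burstable", "7d0503c03d7c50089066b7f730aaee51"))
	fmt.Println(podSliceName("besteffort", "790b2c8a-7172-4089-aeee-f75a275d4d9d"))
}
```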
Jan 17 12:21:20.855745 kubelet[2177]: I0117 12:21:20.855491 2177 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: \"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.855745 kubelet[2177]: I0117 12:21:20.855538 2177 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: \"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.855745 kubelet[2177]: E0117 12:21:20.855569 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.44.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-600f54fd9d?timeout=10s\": dial tcp 137.184.44.6:6443: connect: connection refused" interval="400ms" Jan 17 12:21:20.955953 kubelet[2177]: I0117 12:21:20.955683 2177 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: \"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.955953 kubelet[2177]: I0117 12:21:20.955730 2177 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: \"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.955953 kubelet[2177]: I0117 12:21:20.955766 2177 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d0503c03d7c50089066b7f730aaee51-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-d-600f54fd9d\" (UID: \"7d0503c03d7c50089066b7f730aaee51\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.955953 kubelet[2177]: I0117 12:21:20.955789 2177 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d0503c03d7c50089066b7f730aaee51-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-d-600f54fd9d\" (UID: \"7d0503c03d7c50089066b7f730aaee51\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.955953 kubelet[2177]: I0117 12:21:20.955820 2177 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: \"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.956428 kubelet[2177]: I0117 12:21:20.955841 2177 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/7dfb321f3392a89540b1ffc0b746adb4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-d-600f54fd9d\" (UID: \"7dfb321f3392a89540b1ffc0b746adb4\") " pod="kube-system/kube-scheduler-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.956428 kubelet[2177]: I0117 12:21:20.955863 2177 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d0503c03d7c50089066b7f730aaee51-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-d-600f54fd9d\" (UID: \"7d0503c03d7c50089066b7f730aaee51\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.956428 kubelet[2177]: I0117 12:21:20.956416 2177 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:20.956936 kubelet[2177]: E0117 12:21:20.956904 2177 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.44.6:6443/api/v1/nodes\": dial tcp 137.184.44.6:6443: connect: connection refused" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:21.097864 kubelet[2177]: E0117 12:21:21.097732 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:21.098525 containerd[1453]: time="2025-01-17T12:21:21.098460379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-d-600f54fd9d,Uid:7d0503c03d7c50089066b7f730aaee51,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:21.101001 systemd-resolved[1324]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 17 12:21:21.112478 kubelet[2177]: E0117 12:21:21.112353 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:21.116921 containerd[1453]: time="2025-01-17T12:21:21.116800978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-d-600f54fd9d,Uid:e8c6d37e76d2c5a5e612b9280f556c65,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:21.117713 kubelet[2177]: E0117 12:21:21.117676 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:21.118620 containerd[1453]: time="2025-01-17T12:21:21.118258090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-d-600f54fd9d,Uid:7dfb321f3392a89540b1ffc0b746adb4,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:21.256650 kubelet[2177]: E0117 12:21:21.256600 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.44.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-600f54fd9d?timeout=10s\": dial tcp 137.184.44.6:6443: connect: connection refused" interval="800ms" Jan 17 12:21:21.358743 kubelet[2177]: I0117 12:21:21.358365 2177 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:21.359051 kubelet[2177]: E0117 12:21:21.358964 2177 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.44.6:6443/api/v1/nodes\": dial tcp 137.184.44.6:6443: connect: connection refused" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:21.602835 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2093216418.mount: Deactivated successfully. Jan 17 12:21:21.615400 containerd[1453]: time="2025-01-17T12:21:21.615291971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:21.617941 containerd[1453]: time="2025-01-17T12:21:21.617867251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:21:21.618888 containerd[1453]: time="2025-01-17T12:21:21.618760124Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:21.620215 containerd[1453]: time="2025-01-17T12:21:21.620177618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:21:21.623093 containerd[1453]: time="2025-01-17T12:21:21.621745496Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:21.623883 containerd[1453]: time="2025-01-17T12:21:21.623855581Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:21.624462 containerd[1453]: time="2025-01-17T12:21:21.624422786Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:21:21.626282 containerd[1453]: time="2025-01-17T12:21:21.626246126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:21:21.628602 containerd[1453]: time="2025-01-17T12:21:21.628561947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.013235ms" Jan 17 12:21:21.630583 containerd[1453]: time="2025-01-17T12:21:21.630551465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.036748ms" Jan 17 12:21:21.630906 containerd[1453]: time="2025-01-17T12:21:21.630867127Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.97233ms" Jan 17 12:21:21.790515 kubelet[2177]: W0117 12:21:21.790451 2177 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://137.184.44.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
137.184.44.6:6443: connect: connection refused Jan 17 12:21:21.791536 kubelet[2177]: E0117 12:21:21.791495 2177 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.44.6:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:21.802514 containerd[1453]: time="2025-01-17T12:21:21.802167287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:21.802514 containerd[1453]: time="2025-01-17T12:21:21.802232043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:21.802514 containerd[1453]: time="2025-01-17T12:21:21.802262249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:21.802514 containerd[1453]: time="2025-01-17T12:21:21.802356740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:21.802514 containerd[1453]: time="2025-01-17T12:21:21.802249166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:21.802514 containerd[1453]: time="2025-01-17T12:21:21.802300948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:21.802514 containerd[1453]: time="2025-01-17T12:21:21.802312960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:21.802514 containerd[1453]: time="2025-01-17T12:21:21.802389903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:21.811872 containerd[1453]: time="2025-01-17T12:21:21.811598937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:21.811872 containerd[1453]: time="2025-01-17T12:21:21.811661347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:21.811872 containerd[1453]: time="2025-01-17T12:21:21.811672025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:21.811872 containerd[1453]: time="2025-01-17T12:21:21.811773119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:21.819225 kubelet[2177]: W0117 12:21:21.819149 2177 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://137.184.44.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:21.819225 kubelet[2177]: E0117 12:21:21.819198 2177 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.44.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:21.842366 systemd[1]: Started cri-containerd-81877ddb28332ef45e9fbe99b59bad314cd0bc053fedb260554d266fb3f19e19.scope - libcontainer container 81877ddb28332ef45e9fbe99b59bad314cd0bc053fedb260554d266fb3f19e19. Jan 17 12:21:21.855199 systemd[1]: Started cri-containerd-0508eab9c23fce518e1f3158c90d8b3183c83dd0cb3c4cdd2708e6587ce3ef06.scope - libcontainer container 0508eab9c23fce518e1f3158c90d8b3183c83dd0cb3c4cdd2708e6587ce3ef06. Jan 17 12:21:21.858550 systemd[1]: Started cri-containerd-ee0b3fc7930b2ac8a76c4d8d64f33d601418fab63865a995c87172629d27b95d.scope - libcontainer container ee0b3fc7930b2ac8a76c4d8d64f33d601418fab63865a995c87172629d27b95d. Jan 17 12:21:21.912402 kubelet[2177]: W0117 12:21:21.912250 2177 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://137.184.44.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-600f54fd9d&limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:21.913029 kubelet[2177]: E0117 12:21:21.913006 2177 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.44.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-600f54fd9d&limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:21.933404 containerd[1453]: time="2025-01-17T12:21:21.933357587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-d-600f54fd9d,Uid:e8c6d37e76d2c5a5e612b9280f556c65,Namespace:kube-system,Attempt:0,} returns sandbox id \"0508eab9c23fce518e1f3158c90d8b3183c83dd0cb3c4cdd2708e6587ce3ef06\"" Jan 17 12:21:21.940486 containerd[1453]: time="2025-01-17T12:21:21.940449313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-d-600f54fd9d,Uid:7d0503c03d7c50089066b7f730aaee51,Namespace:kube-system,Attempt:0,} returns sandbox id \"81877ddb28332ef45e9fbe99b59bad314cd0bc053fedb260554d266fb3f19e19\"" Jan 17 12:21:21.941126 kubelet[2177]: E0117 12:21:21.941098 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:21.944085 kubelet[2177]: E0117 12:21:21.944050 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:21.947105 containerd[1453]: time="2025-01-17T12:21:21.946949425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-d-600f54fd9d,Uid:7dfb321f3392a89540b1ffc0b746adb4,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"ee0b3fc7930b2ac8a76c4d8d64f33d601418fab63865a995c87172629d27b95d\"" Jan 17 12:21:21.948963 kubelet[2177]: E0117 12:21:21.948805 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:21.949986 containerd[1453]: time="2025-01-17T12:21:21.949950062Z" level=info msg="CreateContainer within sandbox \"0508eab9c23fce518e1f3158c90d8b3183c83dd0cb3c4cdd2708e6587ce3ef06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:21:21.950236 containerd[1453]: time="2025-01-17T12:21:21.950210714Z" level=info msg="CreateContainer within sandbox \"81877ddb28332ef45e9fbe99b59bad314cd0bc053fedb260554d266fb3f19e19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:21:21.952477 containerd[1453]: time="2025-01-17T12:21:21.952444865Z" level=info msg="CreateContainer within sandbox \"ee0b3fc7930b2ac8a76c4d8d64f33d601418fab63865a995c87172629d27b95d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:21:21.975448 containerd[1453]: time="2025-01-17T12:21:21.975402394Z" level=info msg="CreateContainer within sandbox \"ee0b3fc7930b2ac8a76c4d8d64f33d601418fab63865a995c87172629d27b95d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bd188a8f4e4983f172fe8f71cc22d5f56c4647eec7793091bea86f851235650f\"" Jan 17 12:21:21.976803 containerd[1453]: time="2025-01-17T12:21:21.976758853Z" level=info msg="StartContainer for \"bd188a8f4e4983f172fe8f71cc22d5f56c4647eec7793091bea86f851235650f\"" Jan 17 12:21:21.986697 containerd[1453]: time="2025-01-17T12:21:21.986166992Z" level=info msg="CreateContainer within sandbox \"81877ddb28332ef45e9fbe99b59bad314cd0bc053fedb260554d266fb3f19e19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"18c7a0f2fe66b011febed41cd718e4645d7e3fc0cfc2ecafd28fb9164f10b5e5\"" Jan 17 12:21:21.987367 containerd[1453]: time="2025-01-17T12:21:21.987325366Z" level=info msg="StartContainer for \"18c7a0f2fe66b011febed41cd718e4645d7e3fc0cfc2ecafd28fb9164f10b5e5\"" Jan 17 12:21:21.990797 containerd[1453]: time="2025-01-17T12:21:21.990754667Z" level=info msg="CreateContainer within sandbox \"0508eab9c23fce518e1f3158c90d8b3183c83dd0cb3c4cdd2708e6587ce3ef06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7baaddb8f426f45fed98a9ffc058b77df66573493591eda3a7550e46bc5796e8\"" Jan 17 12:21:21.991616 containerd[1453]: time="2025-01-17T12:21:21.991558926Z" level=info msg="StartContainer for \"7baaddb8f426f45fed98a9ffc058b77df66573493591eda3a7550e46bc5796e8\"" Jan 17 12:21:22.039268 systemd[1]: Started cri-containerd-bd188a8f4e4983f172fe8f71cc22d5f56c4647eec7793091bea86f851235650f.scope - libcontainer container bd188a8f4e4983f172fe8f71cc22d5f56c4647eec7793091bea86f851235650f. 
Jan 17 12:21:22.053121 kubelet[2177]: W0117 12:21:22.051694 2177 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://137.184.44.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:22.053121 kubelet[2177]: E0117 12:21:22.051785 2177 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.44.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.44.6:6443: connect: connection refused Jan 17 12:21:22.052305 systemd[1]: Started cri-containerd-18c7a0f2fe66b011febed41cd718e4645d7e3fc0cfc2ecafd28fb9164f10b5e5.scope - libcontainer container 18c7a0f2fe66b011febed41cd718e4645d7e3fc0cfc2ecafd28fb9164f10b5e5. Jan 17 12:21:22.059152 kubelet[2177]: E0117 12:21:22.059050 2177 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.44.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-600f54fd9d?timeout=10s\": dial tcp 137.184.44.6:6443: connect: connection refused" interval="1.6s" Jan 17 12:21:22.088281 systemd[1]: Started cri-containerd-7baaddb8f426f45fed98a9ffc058b77df66573493591eda3a7550e46bc5796e8.scope - libcontainer container 7baaddb8f426f45fed98a9ffc058b77df66573493591eda3a7550e46bc5796e8. Jan 17 12:21:22.147457 containerd[1453]: time="2025-01-17T12:21:22.147217711Z" level=info msg="StartContainer for \"18c7a0f2fe66b011febed41cd718e4645d7e3fc0cfc2ecafd28fb9164f10b5e5\" returns successfully" Jan 17 12:21:22.163998 kubelet[2177]: I0117 12:21:22.163953 2177 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:22.169535 kubelet[2177]: E0117 12:21:22.169471 2177 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.44.6:6443/api/v1/nodes\": dial tcp 137.184.44.6:6443: connect: connection refused" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:22.174450 containerd[1453]: time="2025-01-17T12:21:22.174386546Z" level=info msg="StartContainer for \"bd188a8f4e4983f172fe8f71cc22d5f56c4647eec7793091bea86f851235650f\" returns successfully" Jan 17 12:21:22.219291 containerd[1453]: time="2025-01-17T12:21:22.218596130Z" level=info msg="StartContainer for \"7baaddb8f426f45fed98a9ffc058b77df66573493591eda3a7550e46bc5796e8\" returns successfully" Jan 17 12:21:22.700039 kubelet[2177]: E0117 12:21:22.699516 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:22.703115 kubelet[2177]: E0117 12:21:22.703050 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:22.720777 kubelet[2177]: E0117 12:21:22.720743 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:23.720924 kubelet[2177]: E0117 12:21:23.720894 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:23.721459 kubelet[2177]: E0117 12:21:23.721451 2177 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:23.771738 kubelet[2177]: I0117 12:21:23.771698 2177 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:24.267504 kubelet[2177]: E0117 12:21:24.267465 2177 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-d-600f54fd9d\" not found" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:24.334298 kubelet[2177]: I0117 12:21:24.334250 2177 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:24.520386 kubelet[2177]: E0117 12:21:24.520259 2177 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:24.521087 kubelet[2177]: E0117 12:21:24.521048 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:24.640138 kubelet[2177]: I0117 12:21:24.640047 2177 apiserver.go:52] "Watching apiserver" Jan 17 12:21:24.655392 kubelet[2177]: I0117 12:21:24.655300 2177 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:21:26.416546 kubelet[2177]: W0117 12:21:26.416511 2177 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:21:26.417543 kubelet[2177]: E0117 12:21:26.417263 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:26.725623 kubelet[2177]: E0117 12:21:26.725504 2177 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:27.291741 systemd[1]: Reloading requested from client PID 2448 ('systemctl') (unit session-7.scope)... Jan 17 12:21:27.291764 systemd[1]: Reloading... Jan 17 12:21:27.382263 zram_generator::config[2483]: No configuration found. Jan 17 12:21:27.535081 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:21:27.630625 systemd[1]: Reloading finished in 338 ms. Jan 17 12:21:27.673878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:27.688939 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:21:27.689789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:27.698557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:27.891344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
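During this first kubelet run every client-go reflector and the lease controller fail against https://137.184.44.6:6443 with "connection refused", and the lease retry interval doubles from 200ms to 400ms, 800ms and 1.6s until the kube-apiserver static pod comes up. Below is a minimal sketch of polling an endpoint with that kind of doubling backoff; it assumes nothing beyond the interval pattern visible in the log (the /healthz path and the cap are illustrative), and it is not the kubelet's own retry code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls url, doubling the wait between attempts
// (200ms, 400ms, 800ms, 1.6s, ...) the way the lease controller's
// retry interval grows in the log above. Illustrative only.
func waitForAPIServer(url string, maxInterval time.Duration) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The kubelet uses its own CA bundle; this sketch simply skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	interval := 200 * time.Millisecond
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("endpoint reachable")
			return
		}
		fmt.Printf("connect failed (%v), retrying in %s\n", err, interval)
		time.Sleep(interval)
		if interval < maxInterval {
			interval *= 2
		}
	}
}

func main() {
	// Address taken from the log; /healthz is a standard apiserver health path.
	waitForAPIServer("https://137.184.44.6:6443/healthz", 7*time.Second)
}
```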
Jan 17 12:21:27.892002 (kubelet)[2538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:21:27.981503 kubelet[2538]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:21:27.981503 kubelet[2538]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:21:27.981503 kubelet[2538]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:21:27.981977 kubelet[2538]: I0117 12:21:27.981576 2538 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:21:27.988882 kubelet[2538]: I0117 12:21:27.988588 2538 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:21:27.988882 kubelet[2538]: I0117 12:21:27.988632 2538 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:21:27.989227 kubelet[2538]: I0117 12:21:27.989074 2538 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:21:27.991710 kubelet[2538]: I0117 12:21:27.991588 2538 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:21:27.994678 kubelet[2538]: I0117 12:21:27.994022 2538 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:21:28.010575 kubelet[2538]: I0117 12:21:28.010014 2538 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:21:28.010575 kubelet[2538]: I0117 12:21:28.010426 2538 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:21:28.015344 kubelet[2538]: I0117 12:21:28.010646 2538 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:21:28.015344 kubelet[2538]: I0117 12:21:28.010676 2538 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:21:28.015344 kubelet[2538]: I0117 12:21:28.010687 2538 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:21:28.015344 kubelet[2538]: I0117 12:21:28.010728 2538 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:21:28.015344 kubelet[2538]: I0117 12:21:28.010873 2538 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:21:28.015344 kubelet[2538]: I0117 12:21:28.010889 2538 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:21:28.015344 kubelet[2538]: I0117 12:21:28.011520 2538 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:21:28.015814 kubelet[2538]: I0117 12:21:28.011545 2538 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:21:28.015814 kubelet[2538]: I0117 12:21:28.015357 2538 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:21:28.015814 kubelet[2538]: I0117 12:21:28.015591 2538 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:21:28.018381 kubelet[2538]: I0117 12:21:28.017613 2538 server.go:1256] "Started kubelet" Jan 17 12:21:28.020852 kubelet[2538]: I0117 12:21:28.020759 2538 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:21:28.033228 kubelet[2538]: I0117 12:21:28.033150 2538 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:21:28.038137 kubelet[2538]: I0117 12:21:28.037183 2538 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:21:28.043509 kubelet[2538]: I0117 12:21:28.043468 2538 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 17 12:21:28.043898 kubelet[2538]: I0117 12:21:28.043882 2538 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:21:28.045511 sudo[2552]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 12:21:28.046208 sudo[2552]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 12:21:28.048103 kubelet[2538]: I0117 12:21:28.047408 2538 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:21:28.048103 kubelet[2538]: I0117 12:21:28.047911 2538 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:21:28.048236 kubelet[2538]: I0117 12:21:28.048155 2538 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:21:28.061418 kubelet[2538]: I0117 12:21:28.061114 2538 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:21:28.062511 kubelet[2538]: I0117 12:21:28.062470 2538 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:21:28.074940 kubelet[2538]: I0117 12:21:28.074057 2538 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:21:28.083194 kubelet[2538]: I0117 12:21:28.083145 2538 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:21:28.085631 kubelet[2538]: I0117 12:21:28.085133 2538 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:21:28.085631 kubelet[2538]: I0117 12:21:28.085176 2538 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:21:28.085631 kubelet[2538]: I0117 12:21:28.085201 2538 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:21:28.085631 kubelet[2538]: E0117 12:21:28.085271 2538 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:21:28.095158 kubelet[2538]: E0117 12:21:28.095039 2538 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:21:28.159201 kubelet[2538]: I0117 12:21:28.159032 2538 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.200301 kubelet[2538]: E0117 12:21:28.200227 2538 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:21:28.224080 kubelet[2538]: I0117 12:21:28.223043 2538 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.224080 kubelet[2538]: I0117 12:21:28.223769 2538 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.267713 kubelet[2538]: I0117 12:21:28.267625 2538 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:21:28.267713 kubelet[2538]: I0117 12:21:28.267658 2538 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:21:28.267713 kubelet[2538]: I0117 12:21:28.267686 2538 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:21:28.268179 kubelet[2538]: I0117 12:21:28.267852 2538 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:21:28.268179 kubelet[2538]: I0117 12:21:28.267874 2538 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:21:28.268179 kubelet[2538]: I0117 12:21:28.267882 2538 policy_none.go:49] "None policy: Start" Jan 17 12:21:28.270390 kubelet[2538]: I0117 12:21:28.269759 2538 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:21:28.270390 kubelet[2538]: I0117 12:21:28.269807 2538 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:21:28.270390 kubelet[2538]: I0117 12:21:28.270241 2538 state_mem.go:75] "Updated machine memory state" Jan 17 12:21:28.283150 kubelet[2538]: I0117 12:21:28.282827 2538 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:21:28.286989 kubelet[2538]: I0117 12:21:28.286443 2538 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:21:28.401635 kubelet[2538]: I0117 12:21:28.400738 2538 topology_manager.go:215] "Topology Admit Handler" podUID="7d0503c03d7c50089066b7f730aaee51" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.401635 kubelet[2538]: I0117 12:21:28.400879 2538 topology_manager.go:215] "Topology Admit Handler" podUID="e8c6d37e76d2c5a5e612b9280f556c65" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.401635 kubelet[2538]: I0117 12:21:28.400921 2538 topology_manager.go:215] "Topology Admit Handler" podUID="7dfb321f3392a89540b1ffc0b746adb4" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.408337 kubelet[2538]: W0117 12:21:28.408303 2538 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:21:28.408684 kubelet[2538]: W0117 12:21:28.408561 2538 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:21:28.410314 kubelet[2538]: W0117 12:21:28.410210 2538 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:21:28.410581 kubelet[2538]: 
E0117 12:21:28.410501 2538 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-d-600f54fd9d\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.450361 kubelet[2538]: I0117 12:21:28.450222 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d0503c03d7c50089066b7f730aaee51-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-d-600f54fd9d\" (UID: \"7d0503c03d7c50089066b7f730aaee51\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.450361 kubelet[2538]: I0117 12:21:28.450322 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: \"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.451232 kubelet[2538]: I0117 12:21:28.450749 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: \"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.451232 kubelet[2538]: I0117 12:21:28.450895 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7dfb321f3392a89540b1ffc0b746adb4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-d-600f54fd9d\" (UID: \"7dfb321f3392a89540b1ffc0b746adb4\") " pod="kube-system/kube-scheduler-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.451232 kubelet[2538]: I0117 12:21:28.451179 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d0503c03d7c50089066b7f730aaee51-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-d-600f54fd9d\" (UID: \"7d0503c03d7c50089066b7f730aaee51\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.451953 kubelet[2538]: I0117 12:21:28.451681 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d0503c03d7c50089066b7f730aaee51-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-d-600f54fd9d\" (UID: \"7d0503c03d7c50089066b7f730aaee51\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.451953 kubelet[2538]: I0117 12:21:28.451838 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: \"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.451953 kubelet[2538]: I0117 12:21:28.451896 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: 
\"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.452241 kubelet[2538]: I0117 12:21:28.452020 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8c6d37e76d2c5a5e612b9280f556c65-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-d-600f54fd9d\" (UID: \"e8c6d37e76d2c5a5e612b9280f556c65\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" Jan 17 12:21:28.712536 kubelet[2538]: E0117 12:21:28.711328 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:28.712536 kubelet[2538]: E0117 12:21:28.712013 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:28.712536 kubelet[2538]: E0117 12:21:28.712350 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:28.803249 sudo[2552]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:29.014307 kubelet[2538]: I0117 12:21:29.014144 2538 apiserver.go:52] "Watching apiserver" Jan 17 12:21:29.049330 kubelet[2538]: I0117 12:21:29.049268 2538 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:21:29.185154 kubelet[2538]: E0117 12:21:29.183909 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:29.188992 kubelet[2538]: E0117 12:21:29.188330 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:29.189526 kubelet[2538]: E0117 12:21:29.186137 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:29.240884 kubelet[2538]: I0117 12:21:29.240822 2538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-d-600f54fd9d" podStartSLOduration=1.240741829 podStartE2EDuration="1.240741829s" podCreationTimestamp="2025-01-17 12:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:21:29.221461944 +0000 UTC m=+1.320408026" watchObservedRunningTime="2025-01-17 12:21:29.240741829 +0000 UTC m=+1.339687914" Jan 17 12:21:29.264830 kubelet[2538]: I0117 12:21:29.264666 2538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-d-600f54fd9d" podStartSLOduration=1.264609248 podStartE2EDuration="1.264609248s" podCreationTimestamp="2025-01-17 12:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:21:29.24335411 +0000 UTC m=+1.342300192" watchObservedRunningTime="2025-01-17 12:21:29.264609248 +0000 UTC m=+1.363555331" Jan 17 
12:21:29.286332 kubelet[2538]: I0117 12:21:29.285602 2538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-d-600f54fd9d" podStartSLOduration=3.285544014 podStartE2EDuration="3.285544014s" podCreationTimestamp="2025-01-17 12:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:21:29.265095056 +0000 UTC m=+1.364041136" watchObservedRunningTime="2025-01-17 12:21:29.285544014 +0000 UTC m=+1.384490099" Jan 17 12:21:30.189083 kubelet[2538]: E0117 12:21:30.188387 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:30.662288 sudo[1647]: pam_unix(sudo:session): session closed for user root Jan 17 12:21:30.668855 sshd[1644]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:30.677780 systemd[1]: sshd@6-137.184.44.6:22-139.178.68.195:59566.service: Deactivated successfully. Jan 17 12:21:30.683392 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:21:30.684044 systemd[1]: session-7.scope: Consumed 6.732s CPU time, 190.6M memory peak, 0B memory swap peak. Jan 17 12:21:30.685713 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:21:30.687798 systemd-logind[1441]: Removed session 7. Jan 17 12:21:30.758471 kubelet[2538]: E0117 12:21:30.758284 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:32.727467 kubelet[2538]: E0117 12:21:32.727424 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:33.197216 kubelet[2538]: E0117 12:21:33.194733 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:34.053289 systemd-resolved[1324]: Clock change detected. Flushing caches. Jan 17 12:21:34.053362 systemd-timesyncd[1346]: Contacted time server 83.147.242.172:123 (2.flatcar.pool.ntp.org). Jan 17 12:21:34.053416 systemd-timesyncd[1346]: Initial clock synchronization to Fri 2025-01-17 12:21:34.053105 UTC. 
Jan 17 12:21:36.425184 kubelet[2538]: E0117 12:21:36.424868 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:36.680509 kubelet[2538]: E0117 12:21:36.680306 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:37.681600 kubelet[2538]: E0117 12:21:37.681533 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:41.205283 kubelet[2538]: I0117 12:21:41.205248 2538 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:21:41.205915 containerd[1453]: time="2025-01-17T12:21:41.205731066Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:21:41.207779 kubelet[2538]: I0117 12:21:41.206159 2538 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:21:41.249401 kubelet[2538]: E0117 12:21:41.248968 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:41.306583 kubelet[2538]: I0117 12:21:41.305973 2538 topology_manager.go:215] "Topology Admit Handler" podUID="790b2c8a-7172-4089-aeee-f75a275d4d9d" podNamespace="kube-system" podName="cilium-operator-5cc964979-47lfp" Jan 17 12:21:41.320477 systemd[1]: Created slice kubepods-besteffort-pod790b2c8a_7172_4089_aeee_f75a275d4d9d.slice - libcontainer container kubepods-besteffort-pod790b2c8a_7172_4089_aeee_f75a275d4d9d.slice. 
Jan 17 12:21:41.327840 kubelet[2538]: W0117 12:21:41.327441 2538 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081.3.0-d-600f54fd9d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-d-600f54fd9d' and this object Jan 17 12:21:41.327840 kubelet[2538]: E0117 12:21:41.327485 2538 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081.3.0-d-600f54fd9d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-d-600f54fd9d' and this object Jan 17 12:21:41.329568 kubelet[2538]: W0117 12:21:41.329192 2538 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-d-600f54fd9d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-d-600f54fd9d' and this object Jan 17 12:21:41.329568 kubelet[2538]: E0117 12:21:41.329232 2538 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-d-600f54fd9d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-d-600f54fd9d' and this object Jan 17 12:21:41.359527 kubelet[2538]: I0117 12:21:41.359486 2538 topology_manager.go:215] "Topology Admit Handler" podUID="8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76" podNamespace="kube-system" podName="kube-proxy-ql99m" Jan 17 12:21:41.366855 systemd[1]: Created slice kubepods-besteffort-pod8f0e9a2c_99e3_484a_8b3e_00a1f6ab2d76.slice - libcontainer container kubepods-besteffort-pod8f0e9a2c_99e3_484a_8b3e_00a1f6ab2d76.slice. 
Jan 17 12:21:41.402111 kubelet[2538]: I0117 12:21:41.401895 2538 topology_manager.go:215] "Topology Admit Handler" podUID="bc390e01-8ff2-4a02-9422-e61edb9ad0d9" podNamespace="kube-system" podName="cilium-jlgwf" Jan 17 12:21:41.405129 kubelet[2538]: I0117 12:21:41.404576 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ffxd\" (UniqueName: \"kubernetes.io/projected/8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76-kube-api-access-5ffxd\") pod \"kube-proxy-ql99m\" (UID: \"8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76\") " pod="kube-system/kube-proxy-ql99m" Jan 17 12:21:41.405129 kubelet[2538]: I0117 12:21:41.404620 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/790b2c8a-7172-4089-aeee-f75a275d4d9d-cilium-config-path\") pod \"cilium-operator-5cc964979-47lfp\" (UID: \"790b2c8a-7172-4089-aeee-f75a275d4d9d\") " pod="kube-system/cilium-operator-5cc964979-47lfp" Jan 17 12:21:41.405129 kubelet[2538]: I0117 12:21:41.404643 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7v47\" (UniqueName: \"kubernetes.io/projected/790b2c8a-7172-4089-aeee-f75a275d4d9d-kube-api-access-s7v47\") pod \"cilium-operator-5cc964979-47lfp\" (UID: \"790b2c8a-7172-4089-aeee-f75a275d4d9d\") " pod="kube-system/cilium-operator-5cc964979-47lfp" Jan 17 12:21:41.405129 kubelet[2538]: I0117 12:21:41.404669 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76-xtables-lock\") pod \"kube-proxy-ql99m\" (UID: \"8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76\") " pod="kube-system/kube-proxy-ql99m" Jan 17 12:21:41.405129 kubelet[2538]: I0117 12:21:41.404688 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76-lib-modules\") pod \"kube-proxy-ql99m\" (UID: \"8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76\") " pod="kube-system/kube-proxy-ql99m" Jan 17 12:21:41.405441 kubelet[2538]: I0117 12:21:41.404707 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76-kube-proxy\") pod \"kube-proxy-ql99m\" (UID: \"8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76\") " pod="kube-system/kube-proxy-ql99m" Jan 17 12:21:41.415450 systemd[1]: Created slice kubepods-burstable-podbc390e01_8ff2_4a02_9422_e61edb9ad0d9.slice - libcontainer container kubepods-burstable-podbc390e01_8ff2_4a02_9422_e61edb9ad0d9.slice. 
Jan 17 12:21:41.506677 kubelet[2538]: I0117 12:21:41.505009 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-cgroup\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.506677 kubelet[2538]: I0117 12:21:41.505061 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-etc-cni-netd\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.506677 kubelet[2538]: I0117 12:21:41.505110 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-run\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.506677 kubelet[2538]: I0117 12:21:41.505130 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-bpf-maps\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.506677 kubelet[2538]: I0117 12:21:41.505433 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cni-path\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.506677 kubelet[2538]: I0117 12:21:41.505455 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-hubble-tls\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.507036 kubelet[2538]: I0117 12:21:41.505477 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-host-proc-sys-net\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.507036 kubelet[2538]: I0117 12:21:41.505497 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-host-proc-sys-kernel\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.507036 kubelet[2538]: I0117 12:21:41.505517 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-hostproc\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.507036 kubelet[2538]: I0117 12:21:41.505537 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-xtables-lock\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.507036 kubelet[2538]: I0117 12:21:41.505556 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-clustermesh-secrets\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.508170 kubelet[2538]: I0117 12:21:41.505576 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-config-path\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.508170 kubelet[2538]: I0117 12:21:41.505596 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv6bz\" (UniqueName: \"kubernetes.io/projected/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-kube-api-access-gv6bz\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:41.508170 kubelet[2538]: I0117 12:21:41.505646 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-lib-modules\") pod \"cilium-jlgwf\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " pod="kube-system/cilium-jlgwf" Jan 17 12:21:42.150281 update_engine[1442]: I20250117 12:21:42.150033 1442 update_attempter.cc:509] Updating boot flags... Jan 17 12:21:42.198007 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2622) Jan 17 12:21:42.260262 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2623) Jan 17 12:21:42.272560 kubelet[2538]: E0117 12:21:42.272506 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:42.292964 containerd[1453]: time="2025-01-17T12:21:42.292244544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ql99m,Uid:8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:42.337403 containerd[1453]: time="2025-01-17T12:21:42.337257387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:42.337403 containerd[1453]: time="2025-01-17T12:21:42.337328650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:42.337403 containerd[1453]: time="2025-01-17T12:21:42.337359637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:42.337967 containerd[1453]: time="2025-01-17T12:21:42.337463172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:42.359282 systemd[1]: Started cri-containerd-48f6e706b890da2295db4525c00146e6597a811d965bd3f83d1ee91e61eae908.scope - libcontainer container 48f6e706b890da2295db4525c00146e6597a811d965bd3f83d1ee91e61eae908. Jan 17 12:21:42.391266 containerd[1453]: time="2025-01-17T12:21:42.391202481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ql99m,Uid:8f0e9a2c-99e3-484a-8b3e-00a1f6ab2d76,Namespace:kube-system,Attempt:0,} returns sandbox id \"48f6e706b890da2295db4525c00146e6597a811d965bd3f83d1ee91e61eae908\"" Jan 17 12:21:42.392251 kubelet[2538]: E0117 12:21:42.392222 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:42.396241 containerd[1453]: time="2025-01-17T12:21:42.396194991Z" level=info msg="CreateContainer within sandbox \"48f6e706b890da2295db4525c00146e6597a811d965bd3f83d1ee91e61eae908\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:21:42.416587 containerd[1453]: time="2025-01-17T12:21:42.416424954Z" level=info msg="CreateContainer within sandbox \"48f6e706b890da2295db4525c00146e6597a811d965bd3f83d1ee91e61eae908\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3fc5775993b1492af2353c981c1ed664dc19f09283ac58abd7d0af05739f3144\"" Jan 17 12:21:42.417135 containerd[1453]: time="2025-01-17T12:21:42.417110851Z" level=info msg="StartContainer for \"3fc5775993b1492af2353c981c1ed664dc19f09283ac58abd7d0af05739f3144\"" Jan 17 12:21:42.457236 systemd[1]: Started cri-containerd-3fc5775993b1492af2353c981c1ed664dc19f09283ac58abd7d0af05739f3144.scope - libcontainer container 3fc5775993b1492af2353c981c1ed664dc19f09283ac58abd7d0af05739f3144. Jan 17 12:21:42.492845 containerd[1453]: time="2025-01-17T12:21:42.492783749Z" level=info msg="StartContainer for \"3fc5775993b1492af2353c981c1ed664dc19f09283ac58abd7d0af05739f3144\" returns successfully" Jan 17 12:21:42.509647 kubelet[2538]: E0117 12:21:42.509599 2538 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 17 12:21:42.509832 kubelet[2538]: E0117 12:21:42.509774 2538 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/790b2c8a-7172-4089-aeee-f75a275d4d9d-cilium-config-path podName:790b2c8a-7172-4089-aeee-f75a275d4d9d nodeName:}" failed. No retries permitted until 2025-01-17 12:21:43.009740889 +0000 UTC m=+14.628805917 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/790b2c8a-7172-4089-aeee-f75a275d4d9d-cilium-config-path") pod "cilium-operator-5cc964979-47lfp" (UID: "790b2c8a-7172-4089-aeee-f75a275d4d9d") : failed to sync configmap cache: timed out waiting for the condition Jan 17 12:21:42.618638 kubelet[2538]: E0117 12:21:42.618371 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:42.619555 containerd[1453]: time="2025-01-17T12:21:42.619198918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jlgwf,Uid:bc390e01-8ff2-4a02-9422-e61edb9ad0d9,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:42.651407 containerd[1453]: time="2025-01-17T12:21:42.650570231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:42.651686 containerd[1453]: time="2025-01-17T12:21:42.651422486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:42.651686 containerd[1453]: time="2025-01-17T12:21:42.651440143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:42.651686 containerd[1453]: time="2025-01-17T12:21:42.651531004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:42.677231 systemd[1]: Started cri-containerd-64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146.scope - libcontainer container 64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146. Jan 17 12:21:42.698006 kubelet[2538]: E0117 12:21:42.697309 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:42.714692 containerd[1453]: time="2025-01-17T12:21:42.714653167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jlgwf,Uid:bc390e01-8ff2-4a02-9422-e61edb9ad0d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\"" Jan 17 12:21:42.716283 kubelet[2538]: E0117 12:21:42.716230 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:42.724566 containerd[1453]: time="2025-01-17T12:21:42.724517480Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:21:43.128045 kubelet[2538]: E0117 12:21:43.127999 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:43.129199 containerd[1453]: time="2025-01-17T12:21:43.128528441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-47lfp,Uid:790b2c8a-7172-4089-aeee-f75a275d4d9d,Namespace:kube-system,Attempt:0,}" Jan 17 12:21:43.164159 containerd[1453]: time="2025-01-17T12:21:43.163882943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:21:43.164159 containerd[1453]: time="2025-01-17T12:21:43.163968309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:21:43.164159 containerd[1453]: time="2025-01-17T12:21:43.163980470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:43.164159 containerd[1453]: time="2025-01-17T12:21:43.164070612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:21:43.189190 systemd[1]: Started cri-containerd-56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656.scope - libcontainer container 56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656. Jan 17 12:21:43.238853 containerd[1453]: time="2025-01-17T12:21:43.238731457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-47lfp,Uid:790b2c8a-7172-4089-aeee-f75a275d4d9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656\"" Jan 17 12:21:43.240431 kubelet[2538]: E0117 12:21:43.240134 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:48.597447 kubelet[2538]: I0117 12:21:48.597403 2538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ql99m" podStartSLOduration=7.597363088 podStartE2EDuration="7.597363088s" podCreationTimestamp="2025-01-17 12:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:21:42.723661732 +0000 UTC m=+14.342726760" watchObservedRunningTime="2025-01-17 12:21:48.597363088 +0000 UTC m=+20.216428119" Jan 17 12:21:53.057981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085338166.mount: Deactivated successfully. 
Jan 17 12:21:55.568076 containerd[1453]: time="2025-01-17T12:21:55.568004488Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:55.569753 containerd[1453]: time="2025-01-17T12:21:55.569679993Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735267" Jan 17 12:21:55.570384 containerd[1453]: time="2025-01-17T12:21:55.570141742Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:21:55.573596 containerd[1453]: time="2025-01-17T12:21:55.573553681Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.848976099s" Jan 17 12:21:55.573596 containerd[1453]: time="2025-01-17T12:21:55.573593281Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 12:21:55.574544 containerd[1453]: time="2025-01-17T12:21:55.574307678Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:21:55.584488 containerd[1453]: time="2025-01-17T12:21:55.584364900Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:21:55.655182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3371428025.mount: Deactivated successfully. Jan 17 12:21:55.672168 containerd[1453]: time="2025-01-17T12:21:55.672111596Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\"" Jan 17 12:21:55.673502 containerd[1453]: time="2025-01-17T12:21:55.673316485Z" level=info msg="StartContainer for \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\"" Jan 17 12:21:55.764330 systemd[1]: run-containerd-runc-k8s.io-3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93-runc.y8cNYY.mount: Deactivated successfully. Jan 17 12:21:55.773178 systemd[1]: Started cri-containerd-3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93.scope - libcontainer container 3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93. Jan 17 12:21:55.818186 containerd[1453]: time="2025-01-17T12:21:55.817965680Z" level=info msg="StartContainer for \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\" returns successfully" Jan 17 12:21:55.831616 systemd[1]: cri-containerd-3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93.scope: Deactivated successfully. 
Jan 17 12:21:55.975384 containerd[1453]: time="2025-01-17T12:21:55.960502646Z" level=info msg="shim disconnected" id=3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93 namespace=k8s.io Jan 17 12:21:55.975384 containerd[1453]: time="2025-01-17T12:21:55.975380201Z" level=warning msg="cleaning up after shim disconnected" id=3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93 namespace=k8s.io Jan 17 12:21:55.975699 containerd[1453]: time="2025-01-17T12:21:55.975401876Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:21:56.651931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93-rootfs.mount: Deactivated successfully. Jan 17 12:21:56.746401 kubelet[2538]: E0117 12:21:56.746366 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:56.749782 containerd[1453]: time="2025-01-17T12:21:56.749729966Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:21:56.775780 containerd[1453]: time="2025-01-17T12:21:56.775304510Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\"" Jan 17 12:21:56.778388 containerd[1453]: time="2025-01-17T12:21:56.778338077Z" level=info msg="StartContainer for \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\"" Jan 17 12:21:56.821183 systemd[1]: Started cri-containerd-6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64.scope - libcontainer container 6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64. Jan 17 12:21:56.859640 containerd[1453]: time="2025-01-17T12:21:56.859491901Z" level=info msg="StartContainer for \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\" returns successfully" Jan 17 12:21:56.876853 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:21:56.877596 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:56.877886 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:56.884817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:56.885468 systemd[1]: cri-containerd-6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64.scope: Deactivated successfully. Jan 17 12:21:56.929164 containerd[1453]: time="2025-01-17T12:21:56.929107568Z" level=info msg="shim disconnected" id=6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64 namespace=k8s.io Jan 17 12:21:56.929658 containerd[1453]: time="2025-01-17T12:21:56.929454443Z" level=warning msg="cleaning up after shim disconnected" id=6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64 namespace=k8s.io Jan 17 12:21:56.929658 containerd[1453]: time="2025-01-17T12:21:56.929475090Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:21:56.933302 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 12:21:57.652668 systemd[1]: run-containerd-runc-k8s.io-6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64-runc.tmDxw3.mount: Deactivated successfully. Jan 17 12:21:57.652802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64-rootfs.mount: Deactivated successfully. Jan 17 12:21:57.750769 kubelet[2538]: E0117 12:21:57.749283 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:57.755811 containerd[1453]: time="2025-01-17T12:21:57.754366199Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:21:57.816059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592125430.mount: Deactivated successfully. Jan 17 12:21:57.822077 containerd[1453]: time="2025-01-17T12:21:57.822024779Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\"" Jan 17 12:21:57.822938 containerd[1453]: time="2025-01-17T12:21:57.822908028Z" level=info msg="StartContainer for \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\"" Jan 17 12:21:57.864241 systemd[1]: Started cri-containerd-1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657.scope - libcontainer container 1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657. Jan 17 12:21:57.899684 containerd[1453]: time="2025-01-17T12:21:57.899549784Z" level=info msg="StartContainer for \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\" returns successfully" Jan 17 12:21:57.903891 systemd[1]: cri-containerd-1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657.scope: Deactivated successfully. Jan 17 12:21:57.931607 containerd[1453]: time="2025-01-17T12:21:57.931423046Z" level=info msg="shim disconnected" id=1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657 namespace=k8s.io Jan 17 12:21:57.931607 containerd[1453]: time="2025-01-17T12:21:57.931480758Z" level=warning msg="cleaning up after shim disconnected" id=1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657 namespace=k8s.io Jan 17 12:21:57.931607 containerd[1453]: time="2025-01-17T12:21:57.931489955Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:21:58.651927 systemd[1]: run-containerd-runc-k8s.io-1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657-runc.iVcrOi.mount: Deactivated successfully. Jan 17 12:21:58.652099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657-rootfs.mount: Deactivated successfully. 
Jan 17 12:21:58.753898 kubelet[2538]: E0117 12:21:58.753852 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:58.756478 containerd[1453]: time="2025-01-17T12:21:58.756303799Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:21:58.798831 containerd[1453]: time="2025-01-17T12:21:58.798559862Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\"" Jan 17 12:21:58.799281 containerd[1453]: time="2025-01-17T12:21:58.799209946Z" level=info msg="StartContainer for \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\"" Jan 17 12:21:58.836164 systemd[1]: Started cri-containerd-b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62.scope - libcontainer container b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62. Jan 17 12:21:58.864635 systemd[1]: cri-containerd-b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62.scope: Deactivated successfully. Jan 17 12:21:58.867540 containerd[1453]: time="2025-01-17T12:21:58.867498197Z" level=info msg="StartContainer for \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\" returns successfully" Jan 17 12:21:58.906342 containerd[1453]: time="2025-01-17T12:21:58.906067969Z" level=info msg="shim disconnected" id=b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62 namespace=k8s.io Jan 17 12:21:58.906342 containerd[1453]: time="2025-01-17T12:21:58.906192478Z" level=warning msg="cleaning up after shim disconnected" id=b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62 namespace=k8s.io Jan 17 12:21:58.906342 containerd[1453]: time="2025-01-17T12:21:58.906207442Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:21:59.651936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62-rootfs.mount: Deactivated successfully. 
Jan 17 12:21:59.758756 kubelet[2538]: E0117 12:21:59.758618 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:21:59.763895 containerd[1453]: time="2025-01-17T12:21:59.763853912Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:21:59.790958 containerd[1453]: time="2025-01-17T12:21:59.790781813Z" level=info msg="CreateContainer within sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\"" Jan 17 12:21:59.791987 containerd[1453]: time="2025-01-17T12:21:59.791605367Z" level=info msg="StartContainer for \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\"" Jan 17 12:21:59.831296 systemd[1]: Started cri-containerd-fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a.scope - libcontainer container fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a. Jan 17 12:21:59.866961 containerd[1453]: time="2025-01-17T12:21:59.866401415Z" level=info msg="StartContainer for \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\" returns successfully" Jan 17 12:22:00.147477 kubelet[2538]: I0117 12:22:00.147221 2538 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:22:00.184032 kubelet[2538]: I0117 12:22:00.182990 2538 topology_manager.go:215] "Topology Admit Handler" podUID="f9c5c509-0cd1-465d-817a-96c6498d299e" podNamespace="kube-system" podName="coredns-76f75df574-zvsw5" Jan 17 12:22:00.186244 kubelet[2538]: I0117 12:22:00.186207 2538 topology_manager.go:215] "Topology Admit Handler" podUID="bb5a42c6-5571-4975-9b74-248669138e3c" podNamespace="kube-system" podName="coredns-76f75df574-pxlhc" Jan 17 12:22:00.200579 systemd[1]: Created slice kubepods-burstable-podf9c5c509_0cd1_465d_817a_96c6498d299e.slice - libcontainer container kubepods-burstable-podf9c5c509_0cd1_465d_817a_96c6498d299e.slice. Jan 17 12:22:00.215524 systemd[1]: Created slice kubepods-burstable-podbb5a42c6_5571_4975_9b74_248669138e3c.slice - libcontainer container kubepods-burstable-podbb5a42c6_5571_4975_9b74_248669138e3c.slice. 
Jan 17 12:22:00.249904 kubelet[2538]: I0117 12:22:00.249800 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rswsk\" (UniqueName: \"kubernetes.io/projected/bb5a42c6-5571-4975-9b74-248669138e3c-kube-api-access-rswsk\") pod \"coredns-76f75df574-pxlhc\" (UID: \"bb5a42c6-5571-4975-9b74-248669138e3c\") " pod="kube-system/coredns-76f75df574-pxlhc" Jan 17 12:22:00.250997 kubelet[2538]: I0117 12:22:00.250872 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qztf\" (UniqueName: \"kubernetes.io/projected/f9c5c509-0cd1-465d-817a-96c6498d299e-kube-api-access-8qztf\") pod \"coredns-76f75df574-zvsw5\" (UID: \"f9c5c509-0cd1-465d-817a-96c6498d299e\") " pod="kube-system/coredns-76f75df574-zvsw5" Jan 17 12:22:00.251663 kubelet[2538]: I0117 12:22:00.251603 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb5a42c6-5571-4975-9b74-248669138e3c-config-volume\") pod \"coredns-76f75df574-pxlhc\" (UID: \"bb5a42c6-5571-4975-9b74-248669138e3c\") " pod="kube-system/coredns-76f75df574-pxlhc" Jan 17 12:22:00.251997 kubelet[2538]: I0117 12:22:00.251826 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9c5c509-0cd1-465d-817a-96c6498d299e-config-volume\") pod \"coredns-76f75df574-zvsw5\" (UID: \"f9c5c509-0cd1-465d-817a-96c6498d299e\") " pod="kube-system/coredns-76f75df574-zvsw5" Jan 17 12:22:00.511452 kubelet[2538]: E0117 12:22:00.511120 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:00.513924 containerd[1453]: time="2025-01-17T12:22:00.513879745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zvsw5,Uid:f9c5c509-0cd1-465d-817a-96c6498d299e,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:00.522408 kubelet[2538]: E0117 12:22:00.522309 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:00.523913 containerd[1453]: time="2025-01-17T12:22:00.523401225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pxlhc,Uid:bb5a42c6-5571-4975-9b74-248669138e3c,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:00.765120 kubelet[2538]: E0117 12:22:00.764921 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:01.768405 kubelet[2538]: E0117 12:22:01.768272 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:02.775139 kubelet[2538]: E0117 12:22:02.775086 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:04.813998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60028053.mount: Deactivated successfully. 
Jan 17 12:22:06.622177 systemd[1]: Started sshd@7-137.184.44.6:22-139.178.68.195:55518.service - OpenSSH per-connection server daemon (139.178.68.195:55518). Jan 17 12:22:06.701675 sshd[3322]: Accepted publickey for core from 139.178.68.195 port 55518 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:06.703589 sshd[3322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:06.710138 systemd-logind[1441]: New session 8 of user core. Jan 17 12:22:06.715168 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:22:07.072199 sshd[3322]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:07.077580 systemd[1]: sshd@7-137.184.44.6:22-139.178.68.195:55518.service: Deactivated successfully. Jan 17 12:22:07.080618 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:22:07.082195 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:22:07.083673 systemd-logind[1441]: Removed session 8. Jan 17 12:22:08.278902 containerd[1453]: time="2025-01-17T12:22:08.277931836Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:08.278902 containerd[1453]: time="2025-01-17T12:22:08.278817745Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907289" Jan 17 12:22:08.279578 containerd[1453]: time="2025-01-17T12:22:08.279548102Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:08.280692 containerd[1453]: time="2025-01-17T12:22:08.280659706Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 12.706323275s" Jan 17 12:22:08.280755 containerd[1453]: time="2025-01-17T12:22:08.280693726Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 12:22:08.282747 containerd[1453]: time="2025-01-17T12:22:08.282534316Z" level=info msg="CreateContainer within sandbox \"56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:22:08.301816 containerd[1453]: time="2025-01-17T12:22:08.301760181Z" level=info msg="CreateContainer within sandbox \"56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\"" Jan 17 12:22:08.303917 containerd[1453]: time="2025-01-17T12:22:08.302496680Z" level=info msg="StartContainer for \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\"" Jan 17 12:22:08.355240 systemd[1]: Started cri-containerd-c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da.scope - 
libcontainer container c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da. Jan 17 12:22:08.391883 containerd[1453]: time="2025-01-17T12:22:08.391836853Z" level=info msg="StartContainer for \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\" returns successfully" Jan 17 12:22:08.789977 kubelet[2538]: E0117 12:22:08.788628 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:08.809733 kubelet[2538]: I0117 12:22:08.809690 2538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jlgwf" podStartSLOduration=14.953326137 podStartE2EDuration="27.809649488s" podCreationTimestamp="2025-01-17 12:21:41 +0000 UTC" firstStartedPulling="2025-01-17 12:21:42.717639835 +0000 UTC m=+14.336704845" lastFinishedPulling="2025-01-17 12:21:55.573963139 +0000 UTC m=+27.193028196" observedRunningTime="2025-01-17 12:22:00.791965167 +0000 UTC m=+32.411030221" watchObservedRunningTime="2025-01-17 12:22:08.809649488 +0000 UTC m=+40.428714529" Jan 17 12:22:09.792911 kubelet[2538]: E0117 12:22:09.792802 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:12.029915 systemd-networkd[1372]: cilium_host: Link UP Jan 17 12:22:12.032087 systemd-networkd[1372]: cilium_net: Link UP Jan 17 12:22:12.034268 systemd-networkd[1372]: cilium_net: Gained carrier Jan 17 12:22:12.035529 systemd-networkd[1372]: cilium_host: Gained carrier Jan 17 12:22:12.036783 systemd-networkd[1372]: cilium_net: Gained IPv6LL Jan 17 12:22:12.037105 systemd-networkd[1372]: cilium_host: Gained IPv6LL Jan 17 12:22:12.095370 systemd[1]: Started sshd@8-137.184.44.6:22-139.178.68.195:55534.service - OpenSSH per-connection server daemon (139.178.68.195:55534). Jan 17 12:22:12.159384 sshd[3413]: Accepted publickey for core from 139.178.68.195 port 55534 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:12.160635 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:12.168788 systemd-logind[1441]: New session 9 of user core. Jan 17 12:22:12.175279 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:22:12.225188 systemd-networkd[1372]: cilium_vxlan: Link UP Jan 17 12:22:12.225200 systemd-networkd[1372]: cilium_vxlan: Gained carrier Jan 17 12:22:12.451626 sshd[3413]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:12.457696 systemd[1]: sshd@8-137.184.44.6:22-139.178.68.195:55534.service: Deactivated successfully. Jan 17 12:22:12.462741 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:22:12.464423 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:22:12.467689 systemd-logind[1441]: Removed session 9. 
Jan 17 12:22:12.819008 kernel: NET: Registered PF_ALG protocol family Jan 17 12:22:13.433149 systemd-networkd[1372]: cilium_vxlan: Gained IPv6LL Jan 17 12:22:13.624227 systemd-networkd[1372]: lxc_health: Link UP Jan 17 12:22:13.650525 systemd-networkd[1372]: lxc_health: Gained carrier Jan 17 12:22:14.146055 systemd-networkd[1372]: lxc95e98912f414: Link UP Jan 17 12:22:14.154112 kernel: eth0: renamed from tmp2b650 Jan 17 12:22:14.161229 systemd-networkd[1372]: lxc95e98912f414: Gained carrier Jan 17 12:22:14.199534 systemd-networkd[1372]: lxcf0a4ac036dc2: Link UP Jan 17 12:22:14.203435 kernel: eth0: renamed from tmp9fd2f Jan 17 12:22:14.210272 systemd-networkd[1372]: lxcf0a4ac036dc2: Gained carrier Jan 17 12:22:14.623521 kubelet[2538]: E0117 12:22:14.623383 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:14.653378 kubelet[2538]: I0117 12:22:14.652724 2538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-47lfp" podStartSLOduration=8.61232558 podStartE2EDuration="33.652659543s" podCreationTimestamp="2025-01-17 12:21:41 +0000 UTC" firstStartedPulling="2025-01-17 12:21:43.240668579 +0000 UTC m=+14.859733589" lastFinishedPulling="2025-01-17 12:22:08.281002539 +0000 UTC m=+39.900067552" observedRunningTime="2025-01-17 12:22:08.810166721 +0000 UTC m=+40.429231753" watchObservedRunningTime="2025-01-17 12:22:14.652659543 +0000 UTC m=+46.271724596" Jan 17 12:22:14.805981 kubelet[2538]: E0117 12:22:14.805479 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:15.099897 systemd-networkd[1372]: lxc_health: Gained IPv6LL Jan 17 12:22:15.808415 kubelet[2538]: E0117 12:22:15.808363 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:15.929155 systemd-networkd[1372]: lxcf0a4ac036dc2: Gained IPv6LL Jan 17 12:22:15.993708 systemd-networkd[1372]: lxc95e98912f414: Gained IPv6LL Jan 17 12:22:17.471431 systemd[1]: Started sshd@9-137.184.44.6:22-139.178.68.195:51304.service - OpenSSH per-connection server daemon (139.178.68.195:51304). Jan 17 12:22:17.543922 sshd[3774]: Accepted publickey for core from 139.178.68.195 port 51304 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:17.546312 sshd[3774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:17.553780 systemd-logind[1441]: New session 10 of user core. Jan 17 12:22:17.559292 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:22:17.837429 sshd[3774]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:17.842790 systemd[1]: sshd@9-137.184.44.6:22-139.178.68.195:51304.service: Deactivated successfully. Jan 17 12:22:17.847895 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:22:17.852264 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:22:17.853558 systemd-logind[1441]: Removed session 10. Jan 17 12:22:19.603667 containerd[1453]: time="2025-01-17T12:22:19.603239890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:19.603667 containerd[1453]: time="2025-01-17T12:22:19.603323639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:19.603667 containerd[1453]: time="2025-01-17T12:22:19.603354226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:19.603667 containerd[1453]: time="2025-01-17T12:22:19.603524957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:19.622446 containerd[1453]: time="2025-01-17T12:22:19.622219517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:19.622446 containerd[1453]: time="2025-01-17T12:22:19.622295561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:19.622446 containerd[1453]: time="2025-01-17T12:22:19.622307122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:19.622446 containerd[1453]: time="2025-01-17T12:22:19.622401962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:19.639534 systemd[1]: run-containerd-runc-k8s.io-2b650d42e4b32a475b4c7138c531d74acdae95ca2fbcc948686acf8306f35a09-runc.BSfFzn.mount: Deactivated successfully. Jan 17 12:22:19.654201 systemd[1]: Started cri-containerd-2b650d42e4b32a475b4c7138c531d74acdae95ca2fbcc948686acf8306f35a09.scope - libcontainer container 2b650d42e4b32a475b4c7138c531d74acdae95ca2fbcc948686acf8306f35a09. Jan 17 12:22:19.675393 systemd[1]: Started cri-containerd-9fd2f65b7bf12336173527cdd0f32009988273be0eef61143298a52d068664ad.scope - libcontainer container 9fd2f65b7bf12336173527cdd0f32009988273be0eef61143298a52d068664ad. 
Jan 17 12:22:19.763571 containerd[1453]: time="2025-01-17T12:22:19.763443624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zvsw5,Uid:f9c5c509-0cd1-465d-817a-96c6498d299e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b650d42e4b32a475b4c7138c531d74acdae95ca2fbcc948686acf8306f35a09\"" Jan 17 12:22:19.768116 kubelet[2538]: E0117 12:22:19.768067 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:19.781992 containerd[1453]: time="2025-01-17T12:22:19.781324775Z" level=info msg="CreateContainer within sandbox \"2b650d42e4b32a475b4c7138c531d74acdae95ca2fbcc948686acf8306f35a09\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:22:19.800498 containerd[1453]: time="2025-01-17T12:22:19.800296088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pxlhc,Uid:bb5a42c6-5571-4975-9b74-248669138e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fd2f65b7bf12336173527cdd0f32009988273be0eef61143298a52d068664ad\"" Jan 17 12:22:19.802797 kubelet[2538]: E0117 12:22:19.802764 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:19.809884 containerd[1453]: time="2025-01-17T12:22:19.809677775Z" level=info msg="CreateContainer within sandbox \"9fd2f65b7bf12336173527cdd0f32009988273be0eef61143298a52d068664ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:22:19.828862 containerd[1453]: time="2025-01-17T12:22:19.828763702Z" level=info msg="CreateContainer within sandbox \"2b650d42e4b32a475b4c7138c531d74acdae95ca2fbcc948686acf8306f35a09\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b92ee9e08b0fdc36afa9be3487ff910f7942dddfd185a6d9f039fa0c6c83119\"" Jan 17 12:22:19.831974 containerd[1453]: time="2025-01-17T12:22:19.830098314Z" level=info msg="StartContainer for \"1b92ee9e08b0fdc36afa9be3487ff910f7942dddfd185a6d9f039fa0c6c83119\"" Jan 17 12:22:19.836963 containerd[1453]: time="2025-01-17T12:22:19.836691834Z" level=info msg="CreateContainer within sandbox \"9fd2f65b7bf12336173527cdd0f32009988273be0eef61143298a52d068664ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e1eb663c1d54c471687b10691d2b02bfd8204f8483dc30635bd405db946220e\"" Jan 17 12:22:19.838322 containerd[1453]: time="2025-01-17T12:22:19.838280023Z" level=info msg="StartContainer for \"5e1eb663c1d54c471687b10691d2b02bfd8204f8483dc30635bd405db946220e\"" Jan 17 12:22:19.904232 systemd[1]: Started cri-containerd-1b92ee9e08b0fdc36afa9be3487ff910f7942dddfd185a6d9f039fa0c6c83119.scope - libcontainer container 1b92ee9e08b0fdc36afa9be3487ff910f7942dddfd185a6d9f039fa0c6c83119. Jan 17 12:22:19.921230 systemd[1]: Started cri-containerd-5e1eb663c1d54c471687b10691d2b02bfd8204f8483dc30635bd405db946220e.scope - libcontainer container 5e1eb663c1d54c471687b10691d2b02bfd8204f8483dc30635bd405db946220e. 
Jan 17 12:22:19.977313 containerd[1453]: time="2025-01-17T12:22:19.977115431Z" level=info msg="StartContainer for \"1b92ee9e08b0fdc36afa9be3487ff910f7942dddfd185a6d9f039fa0c6c83119\" returns successfully" Jan 17 12:22:19.977313 containerd[1453]: time="2025-01-17T12:22:19.977160939Z" level=info msg="StartContainer for \"5e1eb663c1d54c471687b10691d2b02bfd8204f8483dc30635bd405db946220e\" returns successfully" Jan 17 12:22:20.834990 kubelet[2538]: E0117 12:22:20.834811 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:20.843330 kubelet[2538]: E0117 12:22:20.840850 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:20.854549 kubelet[2538]: I0117 12:22:20.853482 2538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zvsw5" podStartSLOduration=39.85344365 podStartE2EDuration="39.85344365s" podCreationTimestamp="2025-01-17 12:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:20.853051291 +0000 UTC m=+52.472116323" watchObservedRunningTime="2025-01-17 12:22:20.85344365 +0000 UTC m=+52.472508681" Jan 17 12:22:20.886923 kubelet[2538]: I0117 12:22:20.886381 2538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pxlhc" podStartSLOduration=39.886335316 podStartE2EDuration="39.886335316s" podCreationTimestamp="2025-01-17 12:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:22:20.872850623 +0000 UTC m=+52.491915658" watchObservedRunningTime="2025-01-17 12:22:20.886335316 +0000 UTC m=+52.505400339" Jan 17 12:22:21.846574 kubelet[2538]: E0117 12:22:21.844760 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:21.846574 kubelet[2538]: E0117 12:22:21.845582 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:22.847616 kubelet[2538]: E0117 12:22:22.847161 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:22.848363 kubelet[2538]: E0117 12:22:22.848340 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:22.857390 systemd[1]: Started sshd@10-137.184.44.6:22-139.178.68.195:51306.service - OpenSSH per-connection server daemon (139.178.68.195:51306). Jan 17 12:22:22.925535 sshd[3959]: Accepted publickey for core from 139.178.68.195 port 51306 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:22.927576 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:22.933127 systemd-logind[1441]: New session 11 of user core. 
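[Annotation] The entries above trace the CRI bring-up order for the two CoreDNS pods: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer then returns successfully. A minimal sketch of that ordering follows; run_pod_sandbox, create_container and start_container are hypothetical stand-ins for the RPCs named in the log, not a real client library.

    def bring_up_pod(client, pod_metadata, container_metadata):
        # "RunPodSandbox ... returns sandbox id"
        sandbox_id = client.run_pod_sandbox(pod_metadata)
        # "CreateContainer within sandbox ... returns container id"
        container_id = client.create_container(sandbox_id, container_metadata)
        # "StartContainer for ... returns successfully"
        client.start_container(container_id)
        return sandbox_id, container_id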
Jan 17 12:22:22.938188 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:22:23.162156 sshd[3959]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:23.172033 systemd[1]: sshd@10-137.184.44.6:22-139.178.68.195:51306.service: Deactivated successfully. Jan 17 12:22:23.175346 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:22:23.176478 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:22:23.183343 systemd[1]: Started sshd@11-137.184.44.6:22-139.178.68.195:51322.service - OpenSSH per-connection server daemon (139.178.68.195:51322). Jan 17 12:22:23.186398 systemd-logind[1441]: Removed session 11. Jan 17 12:22:23.249752 sshd[3973]: Accepted publickey for core from 139.178.68.195 port 51322 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:23.251893 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:23.259646 systemd-logind[1441]: New session 12 of user core. Jan 17 12:22:23.267237 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:22:23.460844 sshd[3973]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:23.475615 systemd[1]: sshd@11-137.184.44.6:22-139.178.68.195:51322.service: Deactivated successfully. Jan 17 12:22:23.479526 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:22:23.484302 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:22:23.497161 systemd[1]: Started sshd@12-137.184.44.6:22-139.178.68.195:51332.service - OpenSSH per-connection server daemon (139.178.68.195:51332). Jan 17 12:22:23.504017 systemd-logind[1441]: Removed session 12. Jan 17 12:22:23.554572 sshd[3984]: Accepted publickey for core from 139.178.68.195 port 51332 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:23.556312 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:23.561048 systemd-logind[1441]: New session 13 of user core. Jan 17 12:22:23.567238 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:22:23.716969 sshd[3984]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:23.721298 systemd[1]: sshd@12-137.184.44.6:22-139.178.68.195:51332.service: Deactivated successfully. Jan 17 12:22:23.724853 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:22:23.727210 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:22:23.728217 systemd-logind[1441]: Removed session 13. Jan 17 12:22:28.737277 systemd[1]: Started sshd@13-137.184.44.6:22-139.178.68.195:35170.service - OpenSSH per-connection server daemon (139.178.68.195:35170). Jan 17 12:22:28.791978 sshd[3999]: Accepted publickey for core from 139.178.68.195 port 35170 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:28.792934 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:28.798311 systemd-logind[1441]: New session 14 of user core. Jan 17 12:22:28.807152 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:22:29.005244 sshd[3999]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:29.010411 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:22:29.011445 systemd[1]: sshd@13-137.184.44.6:22-139.178.68.195:35170.service: Deactivated successfully. 
Jan 17 12:22:29.013557 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:22:29.014866 systemd-logind[1441]: Removed session 14. Jan 17 12:22:34.022850 systemd[1]: Started sshd@14-137.184.44.6:22-139.178.68.195:35184.service - OpenSSH per-connection server daemon (139.178.68.195:35184). Jan 17 12:22:34.082504 sshd[4013]: Accepted publickey for core from 139.178.68.195 port 35184 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:34.084534 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:34.090080 systemd-logind[1441]: New session 15 of user core. Jan 17 12:22:34.096242 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:22:34.235453 sshd[4013]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:34.240758 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:22:34.241046 systemd[1]: sshd@14-137.184.44.6:22-139.178.68.195:35184.service: Deactivated successfully. Jan 17 12:22:34.243566 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:22:34.246442 systemd-logind[1441]: Removed session 15. Jan 17 12:22:39.255301 systemd[1]: Started sshd@15-137.184.44.6:22-139.178.68.195:33168.service - OpenSSH per-connection server daemon (139.178.68.195:33168). Jan 17 12:22:39.303761 sshd[4025]: Accepted publickey for core from 139.178.68.195 port 33168 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:39.306287 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:39.313910 systemd-logind[1441]: New session 16 of user core. Jan 17 12:22:39.324269 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:22:39.465734 sshd[4025]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:39.476862 systemd[1]: sshd@15-137.184.44.6:22-139.178.68.195:33168.service: Deactivated successfully. Jan 17 12:22:39.480532 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:22:39.482470 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:22:39.493452 systemd[1]: Started sshd@16-137.184.44.6:22-139.178.68.195:33176.service - OpenSSH per-connection server daemon (139.178.68.195:33176). Jan 17 12:22:39.495064 systemd-logind[1441]: Removed session 16. Jan 17 12:22:39.539631 sshd[4038]: Accepted publickey for core from 139.178.68.195 port 33176 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:39.542709 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:39.549570 systemd-logind[1441]: New session 17 of user core. Jan 17 12:22:39.554197 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:22:39.881544 sshd[4038]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:39.897409 systemd[1]: Started sshd@17-137.184.44.6:22-139.178.68.195:33186.service - OpenSSH per-connection server daemon (139.178.68.195:33186). Jan 17 12:22:39.898024 systemd[1]: sshd@16-137.184.44.6:22-139.178.68.195:33176.service: Deactivated successfully. Jan 17 12:22:39.901631 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:22:39.904415 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:22:39.907530 systemd-logind[1441]: Removed session 17. 
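[Annotation] The SSH traffic above follows a fixed pattern per connection: "Accepted publickey", pam_unix "session opened", "New session N", then later "session closed", the per-connection service deactivating, and "Removed session N". When reviewing such logs it can help to pair the open/close entries per sshd PID and compute session durations; a rough sketch, assuming one journal entry per line in the timestamp format shown here (the year is not in the log and is assumed).

    from datetime import datetime

    def parse_ts(line):
        """Parse the leading 'Jan 17 12:22:39.542709' timestamp (year assumed)."""
        stamp = " ".join(line.split()[:3])
        return datetime.strptime(stamp, "%b %d %H:%M:%S.%f").replace(year=2025)

    def session_durations(lines):
        """Pair sshd 'session opened'/'session closed' entries per PID, e.g. sshd[4038]."""
        opened, durations = {}, []
        for line in lines:
            if "sshd[" not in line:
                continue
            pid = line.split("sshd[")[1].split("]")[0]
            if "pam_unix(sshd:session): session opened" in line:
                opened[pid] = parse_ts(line)
            elif "pam_unix(sshd:session): session closed" in line and pid in opened:
                durations.append((pid, parse_ts(line) - opened.pop(pid)))
        return durations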
Jan 17 12:22:39.962705 sshd[4047]: Accepted publickey for core from 139.178.68.195 port 33186 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:39.965718 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:39.974870 systemd-logind[1441]: New session 18 of user core. Jan 17 12:22:39.978187 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:22:41.663379 sshd[4047]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:41.677837 systemd[1]: sshd@17-137.184.44.6:22-139.178.68.195:33186.service: Deactivated successfully. Jan 17 12:22:41.682672 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:22:41.687830 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:22:41.700196 systemd[1]: Started sshd@18-137.184.44.6:22-139.178.68.195:33196.service - OpenSSH per-connection server daemon (139.178.68.195:33196). Jan 17 12:22:41.703587 systemd-logind[1441]: Removed session 18. Jan 17 12:22:41.766094 sshd[4068]: Accepted publickey for core from 139.178.68.195 port 33196 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:41.767994 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:41.774032 systemd-logind[1441]: New session 19 of user core. Jan 17 12:22:41.784260 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:22:42.254524 sshd[4068]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:42.266768 systemd[1]: sshd@18-137.184.44.6:22-139.178.68.195:33196.service: Deactivated successfully. Jan 17 12:22:42.271003 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:22:42.275021 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:22:42.282889 systemd[1]: Started sshd@19-137.184.44.6:22-139.178.68.195:33202.service - OpenSSH per-connection server daemon (139.178.68.195:33202). Jan 17 12:22:42.285328 systemd-logind[1441]: Removed session 19. Jan 17 12:22:42.330838 sshd[4079]: Accepted publickey for core from 139.178.68.195 port 33202 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:42.332532 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:42.337918 systemd-logind[1441]: New session 20 of user core. Jan 17 12:22:42.342173 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:22:42.479170 sshd[4079]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:42.482818 systemd[1]: sshd@19-137.184.44.6:22-139.178.68.195:33202.service: Deactivated successfully. Jan 17 12:22:42.485349 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:22:42.488351 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:22:42.491366 systemd-logind[1441]: Removed session 20. Jan 17 12:22:47.497325 systemd[1]: Started sshd@20-137.184.44.6:22-139.178.68.195:55734.service - OpenSSH per-connection server daemon (139.178.68.195:55734). Jan 17 12:22:47.553555 sshd[4097]: Accepted publickey for core from 139.178.68.195 port 55734 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:47.555635 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:47.561921 systemd-logind[1441]: New session 21 of user core. Jan 17 12:22:47.566213 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 17 12:22:47.569685 kubelet[2538]: E0117 12:22:47.569102 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:47.710379 sshd[4097]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:47.715306 systemd[1]: sshd@20-137.184.44.6:22-139.178.68.195:55734.service: Deactivated successfully. Jan 17 12:22:47.717720 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:22:47.719807 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:22:47.721114 systemd-logind[1441]: Removed session 21. Jan 17 12:22:49.566715 kubelet[2538]: E0117 12:22:49.566668 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:50.567872 kubelet[2538]: E0117 12:22:50.567191 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:51.566530 kubelet[2538]: E0117 12:22:51.566164 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 17 12:22:52.731319 systemd[1]: Started sshd@21-137.184.44.6:22-139.178.68.195:55742.service - OpenSSH per-connection server daemon (139.178.68.195:55742). Jan 17 12:22:52.782086 sshd[4110]: Accepted publickey for core from 139.178.68.195 port 55742 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:52.783843 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:52.789669 systemd-logind[1441]: New session 22 of user core. Jan 17 12:22:52.795212 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:22:52.939310 sshd[4110]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:52.943979 systemd[1]: sshd@21-137.184.44.6:22-139.178.68.195:55742.service: Deactivated successfully. Jan 17 12:22:52.947775 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:22:52.949389 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:22:52.950917 systemd-logind[1441]: Removed session 22. Jan 17 12:22:57.959327 systemd[1]: Started sshd@22-137.184.44.6:22-139.178.68.195:39006.service - OpenSSH per-connection server daemon (139.178.68.195:39006). Jan 17 12:22:58.007728 sshd[4122]: Accepted publickey for core from 139.178.68.195 port 39006 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:58.009686 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:58.016183 systemd-logind[1441]: New session 23 of user core. Jan 17 12:22:58.021403 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:22:58.150307 sshd[4122]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:58.163418 systemd[1]: sshd@22-137.184.44.6:22-139.178.68.195:39006.service: Deactivated successfully. Jan 17 12:22:58.165854 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:22:58.167744 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. 
Jan 17 12:22:58.172542 systemd[1]: Started sshd@23-137.184.44.6:22-139.178.68.195:39014.service - OpenSSH per-connection server daemon (139.178.68.195:39014). Jan 17 12:22:58.174582 systemd-logind[1441]: Removed session 23. Jan 17 12:22:58.221616 sshd[4135]: Accepted publickey for core from 139.178.68.195 port 39014 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:22:58.224054 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:58.230745 systemd-logind[1441]: New session 24 of user core. Jan 17 12:22:58.239291 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:22:59.801574 systemd[1]: run-containerd-runc-k8s.io-fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a-runc.5lnSik.mount: Deactivated successfully. Jan 17 12:22:59.834588 containerd[1453]: time="2025-01-17T12:22:59.834467823Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:22:59.896189 containerd[1453]: time="2025-01-17T12:22:59.896138666Z" level=info msg="StopContainer for \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\" with timeout 30 (s)" Jan 17 12:22:59.896372 containerd[1453]: time="2025-01-17T12:22:59.896355142Z" level=info msg="StopContainer for \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\" with timeout 2 (s)" Jan 17 12:22:59.896992 containerd[1453]: time="2025-01-17T12:22:59.896921899Z" level=info msg="Stop container \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\" with signal terminated" Jan 17 12:22:59.897073 containerd[1453]: time="2025-01-17T12:22:59.896988798Z" level=info msg="Stop container \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\" with signal terminated" Jan 17 12:22:59.912436 systemd[1]: cri-containerd-c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da.scope: Deactivated successfully. Jan 17 12:22:59.917414 systemd-networkd[1372]: lxc_health: Link DOWN Jan 17 12:22:59.917423 systemd-networkd[1372]: lxc_health: Lost carrier Jan 17 12:22:59.944912 systemd[1]: cri-containerd-fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a.scope: Deactivated successfully. Jan 17 12:22:59.945849 systemd[1]: cri-containerd-fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a.scope: Consumed 8.708s CPU time. Jan 17 12:22:59.970760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da-rootfs.mount: Deactivated successfully. Jan 17 12:22:59.986497 containerd[1453]: time="2025-01-17T12:22:59.985804731Z" level=info msg="shim disconnected" id=c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da namespace=k8s.io Jan 17 12:22:59.986497 containerd[1453]: time="2025-01-17T12:22:59.985880239Z" level=warning msg="cleaning up after shim disconnected" id=c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da namespace=k8s.io Jan 17 12:22:59.986497 containerd[1453]: time="2025-01-17T12:22:59.985893597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:59.993264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a-rootfs.mount: Deactivated successfully. 
Jan 17 12:22:59.999790 containerd[1453]: time="2025-01-17T12:22:59.999701530Z" level=info msg="shim disconnected" id=fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a namespace=k8s.io Jan 17 12:22:59.999790 containerd[1453]: time="2025-01-17T12:22:59.999780279Z" level=warning msg="cleaning up after shim disconnected" id=fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a namespace=k8s.io Jan 17 12:22:59.999790 containerd[1453]: time="2025-01-17T12:22:59.999793046Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:00.011991 containerd[1453]: time="2025-01-17T12:23:00.011847161Z" level=info msg="StopContainer for \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\" returns successfully" Jan 17 12:23:00.026291 containerd[1453]: time="2025-01-17T12:23:00.025926117Z" level=info msg="StopPodSandbox for \"56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656\"" Jan 17 12:23:00.026291 containerd[1453]: time="2025-01-17T12:23:00.026146590Z" level=info msg="Container to stop \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:23:00.029536 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656-shm.mount: Deactivated successfully. Jan 17 12:23:00.037819 containerd[1453]: time="2025-01-17T12:23:00.037671177Z" level=info msg="StopContainer for \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\" returns successfully" Jan 17 12:23:00.038727 containerd[1453]: time="2025-01-17T12:23:00.038441793Z" level=info msg="StopPodSandbox for \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\"" Jan 17 12:23:00.038727 containerd[1453]: time="2025-01-17T12:23:00.038577043Z" level=info msg="Container to stop \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:23:00.038727 containerd[1453]: time="2025-01-17T12:23:00.038599714Z" level=info msg="Container to stop \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:23:00.038727 containerd[1453]: time="2025-01-17T12:23:00.038616997Z" level=info msg="Container to stop \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:23:00.038727 containerd[1453]: time="2025-01-17T12:23:00.038626026Z" level=info msg="Container to stop \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:23:00.038727 containerd[1453]: time="2025-01-17T12:23:00.038635411Z" level=info msg="Container to stop \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:23:00.043092 systemd[1]: cri-containerd-56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656.scope: Deactivated successfully. Jan 17 12:23:00.056316 systemd[1]: cri-containerd-64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146.scope: Deactivated successfully. 
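[Annotation] The StopContainer entries above request termination "with signal terminated" and a timeout (30 s for the operator container, 2 s for the cilium agent); the shim then reports the container gone and systemd deactivates the matching scope. The general pattern those entries reflect is "send SIGTERM, wait up to the timeout, then force-kill". A generic sketch of that pattern, using Python's subprocess module rather than any containerd API:

    import subprocess

    def stop_with_timeout(proc: subprocess.Popen, timeout: float) -> None:
        """Send SIGTERM, wait up to `timeout` seconds, then SIGKILL if still running."""
        proc.terminate()                   # "Stop container ... with signal terminated"
        try:
            proc.wait(timeout=timeout)     # "StopContainer ... with timeout 30 (s)"
        except subprocess.TimeoutExpired:
            proc.kill()                    # escalate after the grace period
            proc.wait()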
Jan 17 12:23:00.099832 containerd[1453]: time="2025-01-17T12:23:00.099541456Z" level=info msg="shim disconnected" id=56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656 namespace=k8s.io Jan 17 12:23:00.099832 containerd[1453]: time="2025-01-17T12:23:00.099604445Z" level=warning msg="cleaning up after shim disconnected" id=56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656 namespace=k8s.io Jan 17 12:23:00.099832 containerd[1453]: time="2025-01-17T12:23:00.099613055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:00.100523 containerd[1453]: time="2025-01-17T12:23:00.100320457Z" level=info msg="shim disconnected" id=64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146 namespace=k8s.io Jan 17 12:23:00.100523 containerd[1453]: time="2025-01-17T12:23:00.100366638Z" level=warning msg="cleaning up after shim disconnected" id=64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146 namespace=k8s.io Jan 17 12:23:00.100523 containerd[1453]: time="2025-01-17T12:23:00.100374867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:00.126199 containerd[1453]: time="2025-01-17T12:23:00.126017799Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:23:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:23:00.128998 containerd[1453]: time="2025-01-17T12:23:00.128866449Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:23:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:23:00.138797 containerd[1453]: time="2025-01-17T12:23:00.137793146Z" level=info msg="TearDown network for sandbox \"56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656\" successfully" Jan 17 12:23:00.138797 containerd[1453]: time="2025-01-17T12:23:00.137849908Z" level=info msg="StopPodSandbox for \"56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656\" returns successfully" Jan 17 12:23:00.141995 containerd[1453]: time="2025-01-17T12:23:00.141819280Z" level=info msg="TearDown network for sandbox \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" successfully" Jan 17 12:23:00.141995 containerd[1453]: time="2025-01-17T12:23:00.141884862Z" level=info msg="StopPodSandbox for \"64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146\" returns successfully" Jan 17 12:23:00.277379 kubelet[2538]: I0117 12:23:00.276815 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-xtables-lock\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.277379 kubelet[2538]: I0117 12:23:00.276884 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/790b2c8a-7172-4089-aeee-f75a275d4d9d-cilium-config-path\") pod \"790b2c8a-7172-4089-aeee-f75a275d4d9d\" (UID: \"790b2c8a-7172-4089-aeee-f75a275d4d9d\") " Jan 17 12:23:00.277379 kubelet[2538]: I0117 12:23:00.276913 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-hostproc\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" 
(UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.277379 kubelet[2538]: I0117 12:23:00.276964 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-clustermesh-secrets\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.277379 kubelet[2538]: I0117 12:23:00.276984 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv6bz\" (UniqueName: \"kubernetes.io/projected/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-kube-api-access-gv6bz\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.277379 kubelet[2538]: I0117 12:23:00.277002 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-etc-cni-netd\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.279177 kubelet[2538]: I0117 12:23:00.277027 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7v47\" (UniqueName: \"kubernetes.io/projected/790b2c8a-7172-4089-aeee-f75a275d4d9d-kube-api-access-s7v47\") pod \"790b2c8a-7172-4089-aeee-f75a275d4d9d\" (UID: \"790b2c8a-7172-4089-aeee-f75a275d4d9d\") " Jan 17 12:23:00.279177 kubelet[2538]: I0117 12:23:00.277051 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-bpf-maps\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.279177 kubelet[2538]: I0117 12:23:00.277075 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-run\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.279177 kubelet[2538]: I0117 12:23:00.277097 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-lib-modules\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.279177 kubelet[2538]: I0117 12:23:00.277115 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-hubble-tls\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.279177 kubelet[2538]: I0117 12:23:00.277135 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-cgroup\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.279394 kubelet[2538]: I0117 12:23:00.277151 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cni-path\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 
12:23:00.279394 kubelet[2538]: I0117 12:23:00.277168 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-host-proc-sys-net\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.279394 kubelet[2538]: I0117 12:23:00.277190 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-config-path\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.279394 kubelet[2538]: I0117 12:23:00.277209 2538 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-host-proc-sys-kernel\") pod \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\" (UID: \"bc390e01-8ff2-4a02-9422-e61edb9ad0d9\") " Jan 17 12:23:00.281001 kubelet[2538]: I0117 12:23:00.279524 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.281903 kubelet[2538]: I0117 12:23:00.281362 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.283210 kubelet[2538]: I0117 12:23:00.282272 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.283210 kubelet[2538]: I0117 12:23:00.282388 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.283210 kubelet[2538]: I0117 12:23:00.282407 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.284863 kubelet[2538]: I0117 12:23:00.284820 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/790b2c8a-7172-4089-aeee-f75a275d4d9d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "790b2c8a-7172-4089-aeee-f75a275d4d9d" (UID: "790b2c8a-7172-4089-aeee-f75a275d4d9d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:23:00.284973 kubelet[2538]: I0117 12:23:00.284896 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-hostproc" (OuterVolumeSpecName: "hostproc") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.290289 kubelet[2538]: I0117 12:23:00.290246 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:23:00.290475 kubelet[2538]: I0117 12:23:00.290242 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:23:00.292580 kubelet[2538]: I0117 12:23:00.290649 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.292580 kubelet[2538]: I0117 12:23:00.290676 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cni-path" (OuterVolumeSpecName: "cni-path") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.292580 kubelet[2538]: I0117 12:23:00.290694 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.293597 kubelet[2538]: I0117 12:23:00.293560 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:23:00.293809 kubelet[2538]: I0117 12:23:00.293667 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-kube-api-access-gv6bz" (OuterVolumeSpecName: "kube-api-access-gv6bz") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "kube-api-access-gv6bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:23:00.293888 kubelet[2538]: I0117 12:23:00.293699 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bc390e01-8ff2-4a02-9422-e61edb9ad0d9" (UID: "bc390e01-8ff2-4a02-9422-e61edb9ad0d9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:23:00.296626 kubelet[2538]: I0117 12:23:00.296574 2538 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790b2c8a-7172-4089-aeee-f75a275d4d9d-kube-api-access-s7v47" (OuterVolumeSpecName: "kube-api-access-s7v47") pod "790b2c8a-7172-4089-aeee-f75a275d4d9d" (UID: "790b2c8a-7172-4089-aeee-f75a275d4d9d"). InnerVolumeSpecName "kube-api-access-s7v47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:23:00.378263 kubelet[2538]: I0117 12:23:00.377934 2538 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-run\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378263 kubelet[2538]: I0117 12:23:00.377989 2538 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s7v47\" (UniqueName: \"kubernetes.io/projected/790b2c8a-7172-4089-aeee-f75a275d4d9d-kube-api-access-s7v47\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378263 kubelet[2538]: I0117 12:23:00.378000 2538 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-bpf-maps\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378263 kubelet[2538]: I0117 12:23:00.378011 2538 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-hubble-tls\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378263 kubelet[2538]: I0117 12:23:00.378020 2538 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-lib-modules\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378263 kubelet[2538]: I0117 12:23:00.378033 2538 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-host-proc-sys-net\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378263 kubelet[2538]: I0117 12:23:00.378043 2538 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-config-path\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378263 kubelet[2538]: I0117 12:23:00.378053 2538 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cilium-cgroup\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378762 kubelet[2538]: I0117 12:23:00.378064 2538 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-cni-path\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378762 kubelet[2538]: I0117 12:23:00.378076 2538 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-host-proc-sys-kernel\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378762 kubelet[2538]: I0117 12:23:00.378128 2538 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/790b2c8a-7172-4089-aeee-f75a275d4d9d-cilium-config-path\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378762 kubelet[2538]: I0117 12:23:00.378138 2538 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-xtables-lock\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378762 kubelet[2538]: I0117 12:23:00.378152 2538 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-hostproc\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378762 kubelet[2538]: I0117 12:23:00.378167 2538 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-clustermesh-secrets\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378762 kubelet[2538]: I0117 12:23:00.378182 2538 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gv6bz\" (UniqueName: \"kubernetes.io/projected/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-kube-api-access-gv6bz\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.378762 kubelet[2538]: I0117 12:23:00.378196 2538 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc390e01-8ff2-4a02-9422-e61edb9ad0d9-etc-cni-netd\") on node \"ci-4081.3.0-d-600f54fd9d\" DevicePath \"\"" Jan 17 12:23:00.589963 systemd[1]: Removed slice kubepods-burstable-podbc390e01_8ff2_4a02_9422_e61edb9ad0d9.slice - libcontainer container kubepods-burstable-podbc390e01_8ff2_4a02_9422_e61edb9ad0d9.slice. Jan 17 12:23:00.590137 systemd[1]: kubepods-burstable-podbc390e01_8ff2_4a02_9422_e61edb9ad0d9.slice: Consumed 8.801s CPU time. Jan 17 12:23:00.594888 systemd[1]: Removed slice kubepods-besteffort-pod790b2c8a_7172_4089_aeee_f75a275d4d9d.slice - libcontainer container kubepods-besteffort-pod790b2c8a_7172_4089_aeee_f75a275d4d9d.slice. Jan 17 12:23:00.794116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56e5f0a94643a208a500958f6f2af25e495855d7d712409d0b36d2f70b2f4656-rootfs.mount: Deactivated successfully. Jan 17 12:23:00.794295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146-rootfs.mount: Deactivated successfully. Jan 17 12:23:00.794398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64a9f7945f86b3c2cdd7c08b188ad34a70ae0f6d3335265608d75ac90ae52146-shm.mount: Deactivated successfully. 
Jan 17 12:23:00.794620 systemd[1]: var-lib-kubelet-pods-790b2c8a\x2d7172\x2d4089\x2daeee\x2df75a275d4d9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds7v47.mount: Deactivated successfully. Jan 17 12:23:00.794717 systemd[1]: var-lib-kubelet-pods-bc390e01\x2d8ff2\x2d4a02\x2d9422\x2de61edb9ad0d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgv6bz.mount: Deactivated successfully. Jan 17 12:23:00.794812 systemd[1]: var-lib-kubelet-pods-bc390e01\x2d8ff2\x2d4a02\x2d9422\x2de61edb9ad0d9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 12:23:00.794900 systemd[1]: var-lib-kubelet-pods-bc390e01\x2d8ff2\x2d4a02\x2d9422\x2de61edb9ad0d9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 12:23:00.955295 kubelet[2538]: I0117 12:23:00.955156 2538 scope.go:117] "RemoveContainer" containerID="c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da" Jan 17 12:23:00.957994 containerd[1453]: time="2025-01-17T12:23:00.957333652Z" level=info msg="RemoveContainer for \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\"" Jan 17 12:23:00.971840 containerd[1453]: time="2025-01-17T12:23:00.971758834Z" level=info msg="RemoveContainer for \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\" returns successfully" Jan 17 12:23:00.984603 kubelet[2538]: I0117 12:23:00.984249 2538 scope.go:117] "RemoveContainer" containerID="c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da" Jan 17 12:23:01.019797 containerd[1453]: time="2025-01-17T12:23:00.987426238Z" level=error msg="ContainerStatus for \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\": not found" Jan 17 12:23:01.026978 kubelet[2538]: E0117 12:23:01.026776 2538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\": not found" containerID="c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da" Jan 17 12:23:01.052173 kubelet[2538]: I0117 12:23:01.051009 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da"} err="failed to get container status \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9b512cf82b1839fc341c55aed332fa7b7e8ba888c9a92f397225b2f969a83da\": not found" Jan 17 12:23:01.052173 kubelet[2538]: I0117 12:23:01.051074 2538 scope.go:117] "RemoveContainer" containerID="fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a" Jan 17 12:23:01.057735 containerd[1453]: time="2025-01-17T12:23:01.057202770Z" level=info msg="RemoveContainer for \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\"" Jan 17 12:23:01.062620 containerd[1453]: time="2025-01-17T12:23:01.062449769Z" level=info msg="RemoveContainer for \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\" returns successfully" Jan 17 12:23:01.063775 kubelet[2538]: I0117 12:23:01.063288 2538 scope.go:117] "RemoveContainer" containerID="b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62" Jan 17 12:23:01.074699 containerd[1453]: 
time="2025-01-17T12:23:01.074632071Z" level=info msg="RemoveContainer for \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\"" Jan 17 12:23:01.081571 containerd[1453]: time="2025-01-17T12:23:01.080629686Z" level=info msg="RemoveContainer for \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\" returns successfully" Jan 17 12:23:01.081908 kubelet[2538]: I0117 12:23:01.081061 2538 scope.go:117] "RemoveContainer" containerID="1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657" Jan 17 12:23:01.084129 containerd[1453]: time="2025-01-17T12:23:01.084073856Z" level=info msg="RemoveContainer for \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\"" Jan 17 12:23:01.090025 containerd[1453]: time="2025-01-17T12:23:01.089889237Z" level=info msg="RemoveContainer for \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\" returns successfully" Jan 17 12:23:01.090683 kubelet[2538]: I0117 12:23:01.090404 2538 scope.go:117] "RemoveContainer" containerID="6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64" Jan 17 12:23:01.092605 containerd[1453]: time="2025-01-17T12:23:01.092504318Z" level=info msg="RemoveContainer for \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\"" Jan 17 12:23:01.097720 containerd[1453]: time="2025-01-17T12:23:01.097637885Z" level=info msg="RemoveContainer for \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\" returns successfully" Jan 17 12:23:01.098198 kubelet[2538]: I0117 12:23:01.098157 2538 scope.go:117] "RemoveContainer" containerID="3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93" Jan 17 12:23:01.100311 containerd[1453]: time="2025-01-17T12:23:01.100234813Z" level=info msg="RemoveContainer for \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\"" Jan 17 12:23:01.106773 containerd[1453]: time="2025-01-17T12:23:01.106428153Z" level=info msg="RemoveContainer for \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\" returns successfully" Jan 17 12:23:01.107184 kubelet[2538]: I0117 12:23:01.107146 2538 scope.go:117] "RemoveContainer" containerID="fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a" Jan 17 12:23:01.107705 containerd[1453]: time="2025-01-17T12:23:01.107573704Z" level=error msg="ContainerStatus for \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\": not found" Jan 17 12:23:01.107834 kubelet[2538]: E0117 12:23:01.107778 2538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\": not found" containerID="fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a" Jan 17 12:23:01.107834 kubelet[2538]: I0117 12:23:01.107831 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a"} err="failed to get container status \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc39040a646b700585f7ec6a7715be4c120bf77356aebe323fd634ea6c9aef0a\": not found" Jan 17 12:23:01.109234 kubelet[2538]: I0117 12:23:01.107850 2538 scope.go:117] 
"RemoveContainer" containerID="b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62" Jan 17 12:23:01.109328 containerd[1453]: time="2025-01-17T12:23:01.108744644Z" level=error msg="ContainerStatus for \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\": not found" Jan 17 12:23:01.110350 kubelet[2538]: E0117 12:23:01.110242 2538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\": not found" containerID="b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62" Jan 17 12:23:01.110694 kubelet[2538]: I0117 12:23:01.110666 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62"} err="failed to get container status \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1dafe3fc4a54e8c19044d8f39bee706d451d5711ee3ec403a701835512a6d62\": not found" Jan 17 12:23:01.110772 kubelet[2538]: I0117 12:23:01.110715 2538 scope.go:117] "RemoveContainer" containerID="1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657" Jan 17 12:23:01.112247 containerd[1453]: time="2025-01-17T12:23:01.112182017Z" level=error msg="ContainerStatus for \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\": not found" Jan 17 12:23:01.113660 kubelet[2538]: E0117 12:23:01.113486 2538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\": not found" containerID="1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657" Jan 17 12:23:01.113660 kubelet[2538]: I0117 12:23:01.113560 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657"} err="failed to get container status \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\": rpc error: code = NotFound desc = an error occurred when try to find container \"1afcbdcf2c24b5942d6bfb4f8faa7924c75697edb30accfb2f091b6004907657\": not found" Jan 17 12:23:01.114025 kubelet[2538]: I0117 12:23:01.113586 2538 scope.go:117] "RemoveContainer" containerID="6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64" Jan 17 12:23:01.114688 containerd[1453]: time="2025-01-17T12:23:01.114633260Z" level=error msg="ContainerStatus for \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\": not found" Jan 17 12:23:01.116085 kubelet[2538]: E0117 12:23:01.116057 2538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\": not found" containerID="6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64" Jan 17 12:23:01.116271 kubelet[2538]: I0117 12:23:01.116252 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64"} err="failed to get container status \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a55ba3b941b1a17a0961eef4060c4a18a7263e771295012362f69a23dd80f64\": not found" Jan 17 12:23:01.116901 kubelet[2538]: I0117 12:23:01.116874 2538 scope.go:117] "RemoveContainer" containerID="3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93" Jan 17 12:23:01.123018 containerd[1453]: time="2025-01-17T12:23:01.122657793Z" level=error msg="ContainerStatus for \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\": not found" Jan 17 12:23:01.123234 kubelet[2538]: E0117 12:23:01.122983 2538 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\": not found" containerID="3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93" Jan 17 12:23:01.123234 kubelet[2538]: I0117 12:23:01.123045 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93"} err="failed to get container status \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b0920603377b41413f4a0a87bfa58a55d2f8dae57ba854cb5feb2996f5b9d93\": not found" Jan 17 12:23:01.706116 sshd[4135]: pam_unix(sshd:session): session closed for user core Jan 17 12:23:01.719803 systemd[1]: sshd@23-137.184.44.6:22-139.178.68.195:39014.service: Deactivated successfully. Jan 17 12:23:01.729550 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:23:01.737108 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:23:01.745693 systemd[1]: Started sshd@24-137.184.44.6:22-139.178.68.195:39024.service - OpenSSH per-connection server daemon (139.178.68.195:39024). Jan 17 12:23:01.748392 systemd-logind[1441]: Removed session 24. Jan 17 12:23:01.874780 sshd[4296]: Accepted publickey for core from 139.178.68.195 port 39024 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM Jan 17 12:23:01.876490 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:23:01.889935 systemd-logind[1441]: New session 25 of user core. Jan 17 12:23:01.913290 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 17 12:23:02.589349 kubelet[2538]: I0117 12:23:02.588889 2538 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="790b2c8a-7172-4089-aeee-f75a275d4d9d" path="/var/lib/kubelet/pods/790b2c8a-7172-4089-aeee-f75a275d4d9d/volumes"
Jan 17 12:23:02.592438 kubelet[2538]: I0117 12:23:02.591310 2538 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bc390e01-8ff2-4a02-9422-e61edb9ad0d9" path="/var/lib/kubelet/pods/bc390e01-8ff2-4a02-9422-e61edb9ad0d9/volumes"
Jan 17 12:23:03.406893 sshd[4296]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:03.424758 systemd[1]: sshd@24-137.184.44.6:22-139.178.68.195:39024.service: Deactivated successfully.
Jan 17 12:23:03.429416 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 12:23:03.429668 systemd[1]: session-25.scope: Consumed 1.027s CPU time.
Jan 17 12:23:03.434335 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit.
Jan 17 12:23:03.440512 systemd[1]: Started sshd@25-137.184.44.6:22-139.178.68.195:39028.service - OpenSSH per-connection server daemon (139.178.68.195:39028).
Jan 17 12:23:03.446773 systemd-logind[1441]: Removed session 25.
Jan 17 12:23:03.514539 kubelet[2538]: I0117 12:23:03.514448 2538 topology_manager.go:215] "Topology Admit Handler" podUID="52c487df-f977-4b18-a1eb-fc8c2688ff34" podNamespace="kube-system" podName="cilium-pwl9x"
Jan 17 12:23:03.521574 kubelet[2538]: E0117 12:23:03.519473 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc390e01-8ff2-4a02-9422-e61edb9ad0d9" containerName="cilium-agent"
Jan 17 12:23:03.522150 kubelet[2538]: E0117 12:23:03.521886 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc390e01-8ff2-4a02-9422-e61edb9ad0d9" containerName="mount-cgroup"
Jan 17 12:23:03.522368 kubelet[2538]: E0117 12:23:03.522339 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc390e01-8ff2-4a02-9422-e61edb9ad0d9" containerName="clean-cilium-state"
Jan 17 12:23:03.523394 kubelet[2538]: E0117 12:23:03.522995 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="790b2c8a-7172-4089-aeee-f75a275d4d9d" containerName="cilium-operator"
Jan 17 12:23:03.523394 kubelet[2538]: E0117 12:23:03.523030 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc390e01-8ff2-4a02-9422-e61edb9ad0d9" containerName="apply-sysctl-overwrites"
Jan 17 12:23:03.523394 kubelet[2538]: E0117 12:23:03.523042 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc390e01-8ff2-4a02-9422-e61edb9ad0d9" containerName="mount-bpf-fs"
Jan 17 12:23:03.523394 kubelet[2538]: I0117 12:23:03.523116 2538 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc390e01-8ff2-4a02-9422-e61edb9ad0d9" containerName="cilium-agent"
Jan 17 12:23:03.523394 kubelet[2538]: I0117 12:23:03.523129 2538 memory_manager.go:354] "RemoveStaleState removing state" podUID="790b2c8a-7172-4089-aeee-f75a275d4d9d" containerName="cilium-operator"
Jan 17 12:23:03.592769 sshd[4307]: Accepted publickey for core from 139.178.68.195 port 39028 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:23:03.599263 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:03.616917 systemd[1]: Created slice kubepods-burstable-pod52c487df_f977_4b18_a1eb_fc8c2688ff34.slice - libcontainer container kubepods-burstable-pod52c487df_f977_4b18_a1eb_fc8c2688ff34.slice.
Jan 17 12:23:03.623335 systemd-logind[1441]: New session 26 of user core.
Jan 17 12:23:03.631355 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 12:23:03.646011 kubelet[2538]: I0117 12:23:03.645372 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-cilium-cgroup\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.646011 kubelet[2538]: I0117 12:23:03.645458 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb6c9\" (UniqueName: \"kubernetes.io/projected/52c487df-f977-4b18-a1eb-fc8c2688ff34-kube-api-access-lb6c9\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.646011 kubelet[2538]: I0117 12:23:03.645498 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-lib-modules\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.655980 kubelet[2538]: I0117 12:23:03.655861 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-bpf-maps\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.655980 kubelet[2538]: I0117 12:23:03.656001 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-hostproc\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656257 kubelet[2538]: I0117 12:23:03.656048 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52c487df-f977-4b18-a1eb-fc8c2688ff34-hubble-tls\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656257 kubelet[2538]: I0117 12:23:03.656126 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-host-proc-sys-net\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656257 kubelet[2538]: I0117 12:23:03.656196 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-cilium-run\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656257 kubelet[2538]: I0117 12:23:03.656230 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-xtables-lock\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656413 kubelet[2538]: I0117 12:23:03.656275 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-etc-cni-netd\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656413 kubelet[2538]: I0117 12:23:03.656319 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52c487df-f977-4b18-a1eb-fc8c2688ff34-clustermesh-secrets\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656413 kubelet[2538]: I0117 12:23:03.656350 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/52c487df-f977-4b18-a1eb-fc8c2688ff34-cilium-ipsec-secrets\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656413 kubelet[2538]: I0117 12:23:03.656380 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-cni-path\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656413 kubelet[2538]: I0117 12:23:03.656412 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52c487df-f977-4b18-a1eb-fc8c2688ff34-host-proc-sys-kernel\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.656638 kubelet[2538]: I0117 12:23:03.656446 2538 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52c487df-f977-4b18-a1eb-fc8c2688ff34-cilium-config-path\") pod \"cilium-pwl9x\" (UID: \"52c487df-f977-4b18-a1eb-fc8c2688ff34\") " pod="kube-system/cilium-pwl9x"
Jan 17 12:23:03.711280 sshd[4307]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:03.729702 systemd[1]: sshd@25-137.184.44.6:22-139.178.68.195:39028.service: Deactivated successfully.
Jan 17 12:23:03.739442 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 12:23:03.748075 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit.
Jan 17 12:23:03.762582 systemd[1]: Started sshd@26-137.184.44.6:22-139.178.68.195:39034.service - OpenSSH per-connection server daemon (139.178.68.195:39034).
Jan 17 12:23:03.782801 systemd-logind[1441]: Removed session 26.
Jan 17 12:23:03.805685 kubelet[2538]: E0117 12:23:03.805630 2538 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 12:23:03.887821 sshd[4315]: Accepted publickey for core from 139.178.68.195 port 39034 ssh2: RSA SHA256:r8mW/Iv+p7nZqo0WbSWD5Er765ayjb8xE8XAH1LjSMM
Jan 17 12:23:03.890604 sshd[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:23:03.916939 systemd-logind[1441]: New session 27 of user core.
Jan 17 12:23:03.924030 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 12:23:03.951926 kubelet[2538]: E0117 12:23:03.950903 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:03.953345 containerd[1453]: time="2025-01-17T12:23:03.952415680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwl9x,Uid:52c487df-f977-4b18-a1eb-fc8c2688ff34,Namespace:kube-system,Attempt:0,}"
Jan 17 12:23:04.073091 containerd[1453]: time="2025-01-17T12:23:04.067906440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:23:04.073091 containerd[1453]: time="2025-01-17T12:23:04.068756896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:23:04.073091 containerd[1453]: time="2025-01-17T12:23:04.068779340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:23:04.073091 containerd[1453]: time="2025-01-17T12:23:04.068912454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:23:04.151631 systemd[1]: Started cri-containerd-214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0.scope - libcontainer container 214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0.
Jan 17 12:23:04.273341 containerd[1453]: time="2025-01-17T12:23:04.273270765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwl9x,Uid:52c487df-f977-4b18-a1eb-fc8c2688ff34,Namespace:kube-system,Attempt:0,} returns sandbox id \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\""
Jan 17 12:23:04.276346 kubelet[2538]: E0117 12:23:04.276300 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:04.287445 containerd[1453]: time="2025-01-17T12:23:04.287365221Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 12:23:04.317165 containerd[1453]: time="2025-01-17T12:23:04.317048357Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d72c011fa6108752bb26f30f534091dcb765865b6262c08750687d66a2ddaf67\""
Jan 17 12:23:04.320310 containerd[1453]: time="2025-01-17T12:23:04.317911001Z" level=info msg="StartContainer for \"d72c011fa6108752bb26f30f534091dcb765865b6262c08750687d66a2ddaf67\""
Jan 17 12:23:04.358334 systemd[1]: Started cri-containerd-d72c011fa6108752bb26f30f534091dcb765865b6262c08750687d66a2ddaf67.scope - libcontainer container d72c011fa6108752bb26f30f534091dcb765865b6262c08750687d66a2ddaf67.
Jan 17 12:23:04.399514 containerd[1453]: time="2025-01-17T12:23:04.399273007Z" level=info msg="StartContainer for \"d72c011fa6108752bb26f30f534091dcb765865b6262c08750687d66a2ddaf67\" returns successfully"
Jan 17 12:23:04.424600 systemd[1]: cri-containerd-d72c011fa6108752bb26f30f534091dcb765865b6262c08750687d66a2ddaf67.scope: Deactivated successfully.
Jan 17 12:23:04.485290 containerd[1453]: time="2025-01-17T12:23:04.485206808Z" level=info msg="shim disconnected" id=d72c011fa6108752bb26f30f534091dcb765865b6262c08750687d66a2ddaf67 namespace=k8s.io
Jan 17 12:23:04.485290 containerd[1453]: time="2025-01-17T12:23:04.485277560Z" level=warning msg="cleaning up after shim disconnected" id=d72c011fa6108752bb26f30f534091dcb765865b6262c08750687d66a2ddaf67 namespace=k8s.io
Jan 17 12:23:04.485290 containerd[1453]: time="2025-01-17T12:23:04.485291391Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:23:04.987736 kubelet[2538]: E0117 12:23:04.987177 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:04.997084 containerd[1453]: time="2025-01-17T12:23:04.997028140Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 12:23:05.024831 containerd[1453]: time="2025-01-17T12:23:05.024618370Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"63330cf191a549854194ddd67a71931eb7bb623972eee137b94484d398255fbf\""
Jan 17 12:23:05.025785 containerd[1453]: time="2025-01-17T12:23:05.025678277Z" level=info msg="StartContainer for \"63330cf191a549854194ddd67a71931eb7bb623972eee137b94484d398255fbf\""
Jan 17 12:23:05.075366 systemd[1]: Started cri-containerd-63330cf191a549854194ddd67a71931eb7bb623972eee137b94484d398255fbf.scope - libcontainer container 63330cf191a549854194ddd67a71931eb7bb623972eee137b94484d398255fbf.
Jan 17 12:23:05.133288 containerd[1453]: time="2025-01-17T12:23:05.133169022Z" level=info msg="StartContainer for \"63330cf191a549854194ddd67a71931eb7bb623972eee137b94484d398255fbf\" returns successfully"
Jan 17 12:23:05.145633 systemd[1]: cri-containerd-63330cf191a549854194ddd67a71931eb7bb623972eee137b94484d398255fbf.scope: Deactivated successfully.
Jan 17 12:23:05.186924 containerd[1453]: time="2025-01-17T12:23:05.186766582Z" level=info msg="shim disconnected" id=63330cf191a549854194ddd67a71931eb7bb623972eee137b94484d398255fbf namespace=k8s.io
Jan 17 12:23:05.186924 containerd[1453]: time="2025-01-17T12:23:05.186863968Z" level=warning msg="cleaning up after shim disconnected" id=63330cf191a549854194ddd67a71931eb7bb623972eee137b94484d398255fbf namespace=k8s.io
Jan 17 12:23:05.186924 containerd[1453]: time="2025-01-17T12:23:05.186879248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:23:05.770162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63330cf191a549854194ddd67a71931eb7bb623972eee137b94484d398255fbf-rootfs.mount: Deactivated successfully.
Jan 17 12:23:05.988992 kubelet[2538]: E0117 12:23:05.986773 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:05.993535 containerd[1453]: time="2025-01-17T12:23:05.992995590Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:23:06.042829 containerd[1453]: time="2025-01-17T12:23:06.039588298Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998\""
Jan 17 12:23:06.042829 containerd[1453]: time="2025-01-17T12:23:06.042134218Z" level=info msg="StartContainer for \"a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998\""
Jan 17 12:23:06.103328 systemd[1]: Started cri-containerd-a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998.scope - libcontainer container a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998.
Jan 17 12:23:06.163216 containerd[1453]: time="2025-01-17T12:23:06.161664596Z" level=info msg="StartContainer for \"a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998\" returns successfully"
Jan 17 12:23:06.173480 systemd[1]: cri-containerd-a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998.scope: Deactivated successfully.
Jan 17 12:23:06.218127 containerd[1453]: time="2025-01-17T12:23:06.218014243Z" level=info msg="shim disconnected" id=a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998 namespace=k8s.io
Jan 17 12:23:06.218127 containerd[1453]: time="2025-01-17T12:23:06.218117160Z" level=warning msg="cleaning up after shim disconnected" id=a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998 namespace=k8s.io
Jan 17 12:23:06.218832 containerd[1453]: time="2025-01-17T12:23:06.218147206Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:23:06.770664 systemd[1]: run-containerd-runc-k8s.io-a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998-runc.tbjvle.mount: Deactivated successfully.
Jan 17 12:23:06.770837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a37fb5faff311aab132c91a5fff2c830b208a16da847606de699bcd36cf63998-rootfs.mount: Deactivated successfully.
Jan 17 12:23:06.990665 kubelet[2538]: E0117 12:23:06.990597 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:06.996001 containerd[1453]: time="2025-01-17T12:23:06.995907248Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:23:07.020817 containerd[1453]: time="2025-01-17T12:23:07.020634692Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f5941b6fdab054af25433aa717912e31220a7da28a64fd459c08c31457a00f47\""
Jan 17 12:23:07.023279 containerd[1453]: time="2025-01-17T12:23:07.023214593Z" level=info msg="StartContainer for \"f5941b6fdab054af25433aa717912e31220a7da28a64fd459c08c31457a00f47\""
Jan 17 12:23:07.075336 systemd[1]: Started cri-containerd-f5941b6fdab054af25433aa717912e31220a7da28a64fd459c08c31457a00f47.scope - libcontainer container f5941b6fdab054af25433aa717912e31220a7da28a64fd459c08c31457a00f47.
Jan 17 12:23:07.114710 systemd[1]: cri-containerd-f5941b6fdab054af25433aa717912e31220a7da28a64fd459c08c31457a00f47.scope: Deactivated successfully.
Jan 17 12:23:07.117985 containerd[1453]: time="2025-01-17T12:23:07.117837334Z" level=info msg="StartContainer for \"f5941b6fdab054af25433aa717912e31220a7da28a64fd459c08c31457a00f47\" returns successfully"
Jan 17 12:23:07.154572 containerd[1453]: time="2025-01-17T12:23:07.154354642Z" level=info msg="shim disconnected" id=f5941b6fdab054af25433aa717912e31220a7da28a64fd459c08c31457a00f47 namespace=k8s.io
Jan 17 12:23:07.154572 containerd[1453]: time="2025-01-17T12:23:07.154511575Z" level=warning msg="cleaning up after shim disconnected" id=f5941b6fdab054af25433aa717912e31220a7da28a64fd459c08c31457a00f47 namespace=k8s.io
Jan 17 12:23:07.154572 containerd[1453]: time="2025-01-17T12:23:07.154528936Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:23:07.770685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5941b6fdab054af25433aa717912e31220a7da28a64fd459c08c31457a00f47-rootfs.mount: Deactivated successfully.
Jan 17 12:23:07.997780 kubelet[2538]: E0117 12:23:07.997747 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:08.003259 containerd[1453]: time="2025-01-17T12:23:08.002867244Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:23:08.029316 containerd[1453]: time="2025-01-17T12:23:08.028287217Z" level=info msg="CreateContainer within sandbox \"214edbd25084b0201cb32a94309125e59d33bef3922351592f9fcbc692ed1ee0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"10defd178ff731d44c14e12be4b72862b4d2be112943fc3330ebc643f0b36552\""
Jan 17 12:23:08.030492 containerd[1453]: time="2025-01-17T12:23:08.030420992Z" level=info msg="StartContainer for \"10defd178ff731d44c14e12be4b72862b4d2be112943fc3330ebc643f0b36552\""
Jan 17 12:23:08.072228 systemd[1]: Started cri-containerd-10defd178ff731d44c14e12be4b72862b4d2be112943fc3330ebc643f0b36552.scope - libcontainer container 10defd178ff731d44c14e12be4b72862b4d2be112943fc3330ebc643f0b36552.
Jan 17 12:23:08.128688 containerd[1453]: time="2025-01-17T12:23:08.128501166Z" level=info msg="StartContainer for \"10defd178ff731d44c14e12be4b72862b4d2be112943fc3330ebc643f0b36552\" returns successfully"
Jan 17 12:23:08.631540 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 12:23:09.004257 kubelet[2538]: E0117 12:23:09.003714 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:10.005890 kubelet[2538]: E0117 12:23:10.005857 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:10.653465 systemd[1]: Started sshd@27-137.184.44.6:22-92.255.85.189:41962.service - OpenSSH per-connection server daemon (92.255.85.189:41962).
Jan 17 12:23:10.747992 kubelet[2538]: E0117 12:23:10.747918 2538 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40986->127.0.0.1:39011: write tcp 127.0.0.1:40986->127.0.0.1:39011: write: broken pipe
Jan 17 12:23:11.882853 systemd-networkd[1372]: lxc_health: Link UP
Jan 17 12:23:11.887279 systemd-networkd[1372]: lxc_health: Gained carrier
Jan 17 12:23:11.932462 sshd[4834]: Connection closed by authenticating user root 92.255.85.189 port 41962 [preauth]
Jan 17 12:23:11.936107 systemd[1]: sshd@27-137.184.44.6:22-92.255.85.189:41962.service: Deactivated successfully.
Jan 17 12:23:11.968778 kubelet[2538]: E0117 12:23:11.967935 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:11.988547 kubelet[2538]: I0117 12:23:11.988501 2538 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pwl9x" podStartSLOduration=8.988464214 podStartE2EDuration="8.988464214s" podCreationTimestamp="2025-01-17 12:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:09.020164571 +0000 UTC m=+100.639229603" watchObservedRunningTime="2025-01-17 12:23:11.988464214 +0000 UTC m=+103.607529261"
Jan 17 12:23:12.010285 kubelet[2538]: E0117 12:23:12.010243 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:12.833586 systemd[1]: run-containerd-runc-k8s.io-10defd178ff731d44c14e12be4b72862b4d2be112943fc3330ebc643f0b36552-runc.JuNPbl.mount: Deactivated successfully.
Jan 17 12:23:13.012403 kubelet[2538]: E0117 12:23:13.012368 2538 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 17 12:23:13.145176 systemd-networkd[1372]: lxc_health: Gained IPv6LL
Jan 17 12:23:15.054365 systemd[1]: run-containerd-runc-k8s.io-10defd178ff731d44c14e12be4b72862b4d2be112943fc3330ebc643f0b36552-runc.KhB5Vn.mount: Deactivated successfully.
Jan 17 12:23:17.207864 systemd[1]: run-containerd-runc-k8s.io-10defd178ff731d44c14e12be4b72862b4d2be112943fc3330ebc643f0b36552-runc.fr5VKg.mount: Deactivated successfully.
Jan 17 12:23:17.289164 sshd[4315]: pam_unix(sshd:session): session closed for user core
Jan 17 12:23:17.295055 systemd[1]: sshd@26-137.184.44.6:22-139.178.68.195:39034.service: Deactivated successfully.
Jan 17 12:23:17.299486 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 12:23:17.303135 systemd-logind[1441]: Session 27 logged out. Waiting for processes to exit.
Jan 17 12:23:17.304668 systemd-logind[1441]: Removed session 27.