Jan 30 05:00:33.990178 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 05:00:33.990217 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:00:33.990237 kernel: BIOS-provided physical RAM map:
Jan 30 05:00:33.990248 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 05:00:33.990260 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 05:00:33.990272 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 05:00:33.990287 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 30 05:00:33.990301 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 30 05:00:33.990314 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 05:00:33.990330 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 05:00:33.990342 kernel: NX (Execute Disable) protection: active
Jan 30 05:00:33.990355 kernel: APIC: Static calls initialized
Jan 30 05:00:33.990375 kernel: SMBIOS 2.8 present.
Jan 30 05:00:33.990388 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 05:00:33.990402 kernel: Hypervisor detected: KVM
Jan 30 05:00:33.990421 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 05:00:33.990439 kernel: kvm-clock: using sched offset of 3288346682 cycles
Jan 30 05:00:33.990452 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 05:00:33.990465 kernel: tsc: Detected 2494.140 MHz processor
Jan 30 05:00:33.990478 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 05:00:33.990490 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 05:00:33.990503 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 30 05:00:33.990516 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 05:00:33.990528 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 05:00:33.990545 kernel: ACPI: Early table checksum verification disabled
Jan 30 05:00:33.990586 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 30 05:00:33.990599 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:00:33.990612 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:00:33.990624 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:00:33.990636 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 05:00:33.990648 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:00:33.990662 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:00:33.990675 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:00:33.990695 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:00:33.990708 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 05:00:33.990723 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 05:00:33.990737 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 05:00:33.990752 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 05:00:33.990766 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 05:00:33.990779 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 05:00:33.990800 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 05:00:33.990816 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 05:00:33.990829 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 05:00:33.990843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 05:00:33.990856 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 05:00:33.990876 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 30 05:00:33.990891 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 30 05:00:33.990912 kernel: Zone ranges:
Jan 30 05:00:33.990928 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 05:00:33.990944 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 30 05:00:33.990956 kernel: Normal empty
Jan 30 05:00:33.990969 kernel: Movable zone start for each node
Jan 30 05:00:33.990984 kernel: Early memory node ranges
Jan 30 05:00:33.990997 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 05:00:33.991010 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 30 05:00:33.991025 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 30 05:00:33.991046 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 05:00:33.991061 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 05:00:33.991079 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 30 05:00:33.991104 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 05:00:33.991118 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 05:00:33.991133 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 05:00:33.991149 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 05:00:33.991165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 05:00:33.991182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 05:00:33.991201 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 05:00:33.991215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 05:00:33.991230 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 05:00:33.991246 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 05:00:33.991260 kernel: TSC deadline timer available
Jan 30 05:00:33.991274 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 05:00:33.991290 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 05:00:33.991306 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 05:00:33.991327 kernel: Booting paravirtualized kernel on KVM
Jan 30 05:00:33.991340 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 05:00:33.991379 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 05:00:33.991394 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 05:00:33.991407 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 05:00:33.991419 kernel: pcpu-alloc: [0] 0 1
Jan 30 05:00:33.991432 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 05:00:33.991448 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:00:33.991462 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 05:00:33.991474 kernel: random: crng init done
Jan 30 05:00:33.991492 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 05:00:33.991507 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 05:00:33.991519 kernel: Fallback order for Node 0: 0
Jan 30 05:00:33.991534 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 30 05:00:33.991548 kernel: Policy zone: DMA32
Jan 30 05:00:33.991562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 05:00:33.991608 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Jan 30 05:00:33.991624 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 05:00:33.991645 kernel: Kernel/User page tables isolation: enabled
Jan 30 05:00:33.991660 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 05:00:33.991674 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 05:00:33.991688 kernel: Dynamic Preempt: voluntary
Jan 30 05:00:33.991702 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 05:00:33.991718 kernel: rcu: RCU event tracing is enabled.
Jan 30 05:00:33.991749 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 05:00:33.991766 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 05:00:33.991780 kernel: Rude variant of Tasks RCU enabled.
Jan 30 05:00:33.991793 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 05:00:33.991814 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 05:00:33.991827 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 05:00:33.991842 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 05:00:33.991856 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 05:00:33.991875 kernel: Console: colour VGA+ 80x25
Jan 30 05:00:33.991890 kernel: printk: console [tty0] enabled
Jan 30 05:00:33.991904 kernel: printk: console [ttyS0] enabled
Jan 30 05:00:33.991919 kernel: ACPI: Core revision 20230628
Jan 30 05:00:33.991933 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 05:00:33.991953 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 05:00:33.991970 kernel: x2apic enabled
Jan 30 05:00:33.991986 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 05:00:33.992002 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 05:00:33.992018 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 30 05:00:33.992035 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jan 30 05:00:33.992052 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 05:00:33.992068 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 05:00:33.992102 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 05:00:33.992120 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 05:00:33.992135 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 05:00:33.992154 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 05:00:33.992168 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 05:00:33.992182 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 05:00:33.992197 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 05:00:33.992213 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 05:00:33.992227 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 05:00:33.992254 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 05:00:33.992271 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 05:00:33.992287 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 05:00:33.992304 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 05:00:33.992321 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 05:00:33.992338 kernel: Freeing SMP alternatives memory: 32K
Jan 30 05:00:33.992353 kernel: pid_max: default: 32768 minimum: 301
Jan 30 05:00:33.992368 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 05:00:33.992389 kernel: landlock: Up and running.
Jan 30 05:00:33.992406 kernel: SELinux: Initializing.
Jan 30 05:00:33.992422 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:00:33.992437 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:00:33.992453 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 05:00:33.992469 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:00:33.992485 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:00:33.992504 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:00:33.992527 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 30 05:00:33.992543 kernel: signal: max sigframe size: 1776
Jan 30 05:00:33.992579 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 05:00:33.992595 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 05:00:33.992613 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 05:00:33.992628 kernel: smp: Bringing up secondary CPUs ...
Jan 30 05:00:33.992645 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 05:00:33.992661 kernel: .... node #0, CPUs: #1
Jan 30 05:00:33.992676 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 05:00:33.992697 kernel: smpboot: Max logical packages: 1
Jan 30 05:00:33.992717 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jan 30 05:00:33.992733 kernel: devtmpfs: initialized
Jan 30 05:00:33.992746 kernel: x86/mm: Memory block size: 128MB
Jan 30 05:00:33.992760 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 05:00:33.992775 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 05:00:33.992790 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 05:00:33.992805 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 05:00:33.992822 kernel: audit: initializing netlink subsys (disabled)
Jan 30 05:00:33.992838 kernel: audit: type=2000 audit(1738213233.289:1): state=initialized audit_enabled=0 res=1
Jan 30 05:00:33.992860 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 05:00:33.992875 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 05:00:33.992890 kernel: cpuidle: using governor menu
Jan 30 05:00:33.992905 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 05:00:33.992921 kernel: dca service started, version 1.12.1
Jan 30 05:00:33.992936 kernel: PCI: Using configuration type 1 for base access
Jan 30 05:00:33.992951 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 05:00:33.992966 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 05:00:33.992982 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 05:00:33.993016 kernel: ACPI: Added _OSI(Module Device)
Jan 30 05:00:33.993030 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 05:00:33.993044 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 05:00:33.993057 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 05:00:33.993070 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 05:00:33.993085 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 05:00:33.993098 kernel: ACPI: Interpreter enabled
Jan 30 05:00:33.993112 kernel: ACPI: PM: (supports S0 S5)
Jan 30 05:00:33.993126 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 05:00:33.993149 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 05:00:33.993165 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 05:00:33.993180 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 05:00:33.993195 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 05:00:33.993536 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 05:00:33.993794 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 05:00:33.993972 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 05:00:33.994005 kernel: acpiphp: Slot [3] registered
Jan 30 05:00:33.994022 kernel: acpiphp: Slot [4] registered
Jan 30 05:00:33.994053 kernel: acpiphp: Slot [5] registered
Jan 30 05:00:33.994067 kernel: acpiphp: Slot [6] registered
Jan 30 05:00:33.994081 kernel: acpiphp: Slot [7] registered
Jan 30 05:00:33.994095 kernel: acpiphp: Slot [8] registered
Jan 30 05:00:33.994110 kernel: acpiphp: Slot [9] registered
Jan 30 05:00:33.994124 kernel: acpiphp: Slot [10] registered
Jan 30 05:00:33.994140 kernel: acpiphp: Slot [11] registered
Jan 30 05:00:33.994162 kernel: acpiphp: Slot [12] registered
Jan 30 05:00:33.994177 kernel: acpiphp: Slot [13] registered
Jan 30 05:00:33.994191 kernel: acpiphp: Slot [14] registered
Jan 30 05:00:33.994207 kernel: acpiphp: Slot [15] registered
Jan 30 05:00:33.994223 kernel: acpiphp: Slot [16] registered
Jan 30 05:00:33.994237 kernel: acpiphp: Slot [17] registered
Jan 30 05:00:33.994252 kernel: acpiphp: Slot [18] registered
Jan 30 05:00:33.994268 kernel: acpiphp: Slot [19] registered
Jan 30 05:00:33.994285 kernel: acpiphp: Slot [20] registered
Jan 30 05:00:33.994301 kernel: acpiphp: Slot [21] registered
Jan 30 05:00:33.994324 kernel: acpiphp: Slot [22] registered
Jan 30 05:00:33.994340 kernel: acpiphp: Slot [23] registered
Jan 30 05:00:33.994355 kernel: acpiphp: Slot [24] registered
Jan 30 05:00:33.994372 kernel: acpiphp: Slot [25] registered
Jan 30 05:00:33.994387 kernel: acpiphp: Slot [26] registered
Jan 30 05:00:33.994404 kernel: acpiphp: Slot [27] registered
Jan 30 05:00:33.994420 kernel: acpiphp: Slot [28] registered
Jan 30 05:00:33.994438 kernel: acpiphp: Slot [29] registered
Jan 30 05:00:33.994455 kernel: acpiphp: Slot [30] registered
Jan 30 05:00:33.994478 kernel: acpiphp: Slot [31] registered
Jan 30 05:00:33.994493 kernel: PCI host bridge to bus 0000:00
Jan 30 05:00:33.994731 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 05:00:33.994901 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 05:00:33.995066 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 05:00:33.995215 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 05:00:33.995361 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 05:00:33.995486 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 05:00:33.995712 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 05:00:33.995934 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 05:00:33.996123 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 05:00:33.996294 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 05:00:33.996469 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 05:00:33.996649 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 05:00:33.996859 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 05:00:33.997050 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 05:00:33.997284 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 05:00:33.997458 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 05:00:33.997748 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 05:00:33.997930 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 05:00:33.998116 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 05:00:33.998316 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 05:00:33.998486 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 05:00:33.998714 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 05:00:33.998886 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 05:00:33.999052 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 05:00:33.999236 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 05:00:33.999437 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 05:00:33.999648 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 05:00:33.999848 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 05:00:34.000016 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 05:00:34.000225 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 05:00:34.000388 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 05:00:34.000590 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 05:00:34.000752 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 05:00:34.000943 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 05:00:34.001100 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 05:00:34.001253 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 05:00:34.001416 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 05:00:34.001621 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 05:00:34.001790 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 05:00:34.001947 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 05:00:34.002108 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 05:00:34.002293 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 05:00:34.002454 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 05:00:34.002693 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 05:00:34.002853 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 05:00:34.003071 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 05:00:34.003244 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 05:00:34.003403 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 05:00:34.003426 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 05:00:34.003445 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 05:00:34.003463 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 05:00:34.003481 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 05:00:34.003505 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 05:00:34.003523 kernel: iommu: Default domain type: Translated
Jan 30 05:00:34.003540 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 05:00:34.003568 kernel: PCI: Using ACPI for IRQ routing
Jan 30 05:00:34.003585 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 05:00:34.003603 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 05:00:34.003620 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 30 05:00:34.003811 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 05:00:34.003989 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 05:00:34.004157 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 05:00:34.004181 kernel: vgaarb: loaded
Jan 30 05:00:34.004199 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 05:00:34.004217 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 05:00:34.004235 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 05:00:34.004250 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 05:00:34.004266 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 05:00:34.004284 kernel: pnp: PnP ACPI init
Jan 30 05:00:34.004302 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 05:00:34.004326 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 05:00:34.004344 kernel: NET: Registered PF_INET protocol family
Jan 30 05:00:34.004362 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 05:00:34.004379 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 05:00:34.004398 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 05:00:34.004416 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 05:00:34.004434 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 05:00:34.004451 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 05:00:34.004469 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:00:34.004491 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:00:34.004508 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 05:00:34.004526 kernel: NET: Registered PF_XDP protocol family
Jan 30 05:00:34.004780 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 05:00:34.004927 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 05:00:34.005063 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 05:00:34.005200 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 05:00:34.005341 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 05:00:34.005519 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 05:00:34.005711 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 05:00:34.005737 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 05:00:34.005893 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 39380 usecs
Jan 30 05:00:34.005917 kernel: PCI: CLS 0 bytes, default 64
Jan 30 05:00:34.005935 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 05:00:34.005953 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 30 05:00:34.005971 kernel: Initialise system trusted keyrings
Jan 30 05:00:34.005996 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 05:00:34.006014 kernel: Key type asymmetric registered
Jan 30 05:00:34.006032 kernel: Asymmetric key parser 'x509' registered
Jan 30 05:00:34.006050 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 05:00:34.006068 kernel: io scheduler mq-deadline registered
Jan 30 05:00:34.006084 kernel: io scheduler kyber registered
Jan 30 05:00:34.006101 kernel: io scheduler bfq registered
Jan 30 05:00:34.006119 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 05:00:34.006134 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 05:00:34.006153 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 05:00:34.006170 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 05:00:34.006186 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 05:00:34.006204 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 05:00:34.006222 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 05:00:34.006236 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 05:00:34.006251 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 05:00:34.006459 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 05:00:34.006484 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 05:00:34.006713 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 05:00:34.006869 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T05:00:33 UTC (1738213233)
Jan 30 05:00:34.007014 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 05:00:34.007037 kernel: intel_pstate: CPU model not supported
Jan 30 05:00:34.007055 kernel: NET: Registered PF_INET6 protocol family
Jan 30 05:00:34.007074 kernel: Segment Routing with IPv6
Jan 30 05:00:34.007093 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 05:00:34.007110 kernel: NET: Registered PF_PACKET protocol family
Jan 30 05:00:34.007132 kernel: Key type dns_resolver registered
Jan 30 05:00:34.007148 kernel: IPI shorthand broadcast: enabled
Jan 30 05:00:34.007166 kernel: sched_clock: Marking stable (1215007390, 106179597)->(1339756806, -18569819)
Jan 30 05:00:34.007185 kernel: registered taskstats version 1
Jan 30 05:00:34.007203 kernel: Loading compiled-in X.509 certificates
Jan 30 05:00:34.007235 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 05:00:34.007253 kernel: Key type .fscrypt registered
Jan 30 05:00:34.007270 kernel: Key type fscrypt-provisioning registered
Jan 30 05:00:34.007288 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 05:00:34.007311 kernel: ima: Allocated hash algorithm: sha1
Jan 30 05:00:34.007326 kernel: ima: No architecture policies found
Jan 30 05:00:34.007344 kernel: clk: Disabling unused clocks
Jan 30 05:00:34.007363 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 05:00:34.007381 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 05:00:34.007432 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 05:00:34.007454 kernel: Run /init as init process
Jan 30 05:00:34.007474 kernel: with arguments:
Jan 30 05:00:34.007493 kernel: /init
Jan 30 05:00:34.007515 kernel: with environment:
Jan 30 05:00:34.007534 kernel: HOME=/
Jan 30 05:00:34.007552 kernel: TERM=linux
Jan 30 05:00:34.007643 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 05:00:34.007664 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 05:00:34.007685 systemd[1]: Detected virtualization kvm.
Jan 30 05:00:34.007702 systemd[1]: Detected architecture x86-64.
Jan 30 05:00:34.007718 systemd[1]: Running in initrd.
Jan 30 05:00:34.007756 systemd[1]: No hostname configured, using default hostname.
Jan 30 05:00:34.007773 systemd[1]: Hostname set to .
Jan 30 05:00:34.007791 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 05:00:34.007808 systemd[1]: Queued start job for default target initrd.target.
Jan 30 05:00:34.007825 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:00:34.007842 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:00:34.007864 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 05:00:34.007884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 05:00:34.007910 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 05:00:34.007931 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 05:00:34.007954 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 05:00:34.007975 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 05:00:34.007995 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:00:34.008016 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:00:34.008040 systemd[1]: Reached target paths.target - Path Units.
Jan 30 05:00:34.008061 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 05:00:34.008083 systemd[1]: Reached target swap.target - Swaps.
Jan 30 05:00:34.008107 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 05:00:34.008127 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 05:00:34.008148 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 05:00:34.008171 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 05:00:34.008191 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 05:00:34.008212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:00:34.008231 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:00:34.008251 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:00:34.008271 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 05:00:34.008291 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 05:00:34.008307 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 05:00:34.008330 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 05:00:34.008351 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 05:00:34.008371 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 05:00:34.008391 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 05:00:34.008412 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:00:34.008433 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 05:00:34.008498 systemd-journald[184]: Collecting audit messages is disabled.
Jan 30 05:00:34.008567 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:00:34.008593 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 05:00:34.008631 systemd-journald[184]: Journal started
Jan 30 05:00:34.008692 systemd-journald[184]: Runtime Journal (/run/log/journal/2695a9b383b14564b66fb8a88d92f938) is 4.9M, max 39.3M, 34.4M free.
Jan 30 05:00:34.013693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 05:00:34.012661 systemd-modules-load[185]: Inserted module 'overlay'
Jan 30 05:00:34.027593 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 05:00:34.043792 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 05:00:34.045770 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 05:00:34.062829 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 05:00:34.099467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 05:00:34.099509 kernel: Bridge firewalling registered
Jan 30 05:00:34.070270 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 30 05:00:34.100374 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:00:34.104045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:00:34.104958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 05:00:34.115025 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:00:34.117936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 05:00:34.119594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:00:34.139021 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:00:34.149016 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 05:00:34.150537 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:00:34.154111 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 05:00:34.181384 systemd-resolved[214]: Positive Trust Anchors:
Jan 30 05:00:34.181405 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 05:00:34.181440 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 05:00:34.184771 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jan 30 05:00:34.186046 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 05:00:34.189346 dracut-cmdline[219]: dracut-dracut-053
Jan 30 05:00:34.187377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 05:00:34.193078 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:00:34.305616 kernel: SCSI subsystem initialized
Jan 30 05:00:34.316597 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 05:00:34.329634 kernel: iscsi: registered transport (tcp)
Jan 30 05:00:34.362238 kernel: iscsi: registered transport (qla4xxx)
Jan 30 05:00:34.362367 kernel: QLogic iSCSI HBA Driver
Jan 30 05:00:34.428538 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 05:00:34.433874 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 05:00:34.468029 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 05:00:34.468122 kernel: device-mapper: uevent: version 1.0.3
Jan 30 05:00:34.469414 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 05:00:34.519705 kernel: raid6: avx2x4 gen() 14452 MB/s
Jan 30 05:00:34.536628 kernel: raid6: avx2x2 gen() 13672 MB/s
Jan 30 05:00:34.553693 kernel: raid6: avx2x1 gen() 11077 MB/s
Jan 30 05:00:34.553801 kernel: raid6: using algorithm avx2x4 gen() 14452 MB/s
Jan 30 05:00:34.571695 kernel: raid6: .... xor() 5578 MB/s, rmw enabled
Jan 30 05:00:34.571867 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 05:00:34.600600 kernel: xor: automatically using best checksumming function avx
Jan 30 05:00:34.799598 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 05:00:34.814917 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 05:00:34.824053 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:00:34.848486 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 30 05:00:34.854740 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:00:34.863674 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 05:00:34.899263 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 30 05:00:34.948426 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 05:00:34.954889 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 05:00:35.041277 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:00:35.049910 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 05:00:35.077313 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 05:00:35.087990 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 05:00:35.089802 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:00:35.091212 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 05:00:35.098908 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 05:00:35.134596 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 05:00:35.153617 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 30 05:00:35.272563 kernel: scsi host0: Virtio SCSI HBA
Jan 30 05:00:35.272894 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 05:00:35.273099 kernel: libata version 3.00 loaded.
Jan 30 05:00:35.273130 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 05:00:35.273157 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 05:00:35.273174 kernel: GPT:9289727 != 125829119
Jan 30 05:00:35.273186 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 05:00:35.273209 kernel: GPT:9289727 != 125829119
Jan 30 05:00:35.273221 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 05:00:35.273234 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:00:35.273246 kernel: ACPI: bus type USB registered
Jan 30 05:00:35.273258 kernel: usbcore: registered new interface driver usbfs
Jan 30 05:00:35.273270 kernel: usbcore: registered new interface driver hub
Jan 30 05:00:35.273283 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 05:00:35.312773 kernel: usbcore: registered new device driver usb
Jan 30 05:00:35.312808 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 30 05:00:35.317178 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 05:00:35.317205 kernel: scsi host1: ata_piix
Jan 30 05:00:35.317371 kernel: AES CTR mode by8 optimization enabled
Jan 30 05:00:35.317385 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jan 30 05:00:35.317533 kernel: scsi host2: ata_piix
Jan 30 05:00:35.318297 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 30 05:00:35.318327 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 30 05:00:35.269658 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 05:00:35.269891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:00:35.273458 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:00:35.273911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:00:35.274118 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:00:35.275104 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:00:35.285737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:00:35.342597 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 30 05:00:35.343219 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 30 05:00:35.343478 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 30 05:00:35.344939 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 30 05:00:35.345125 kernel: hub 1-0:1.0: USB hub found
Jan 30 05:00:35.345305 kernel: hub 1-0:1.0: 2 ports detected
Jan 30 05:00:35.399785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:00:35.408913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:00:35.447482 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:00:35.497871 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
Jan 30 05:00:35.512591 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (453)
Jan 30 05:00:35.524123 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 05:00:35.532099 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 05:00:35.540452 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 05:00:35.547237 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 05:00:35.548168 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 05:00:35.556989 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 05:00:35.568761 disk-uuid[547]: Primary Header is updated.
Jan 30 05:00:35.568761 disk-uuid[547]: Secondary Entries is updated.
Jan 30 05:00:35.568761 disk-uuid[547]: Secondary Header is updated.
Jan 30 05:00:35.575867 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:00:35.583603 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:00:35.597662 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:00:36.593660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:00:36.594717 disk-uuid[548]: The operation has completed successfully.
Jan 30 05:00:36.653720 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 05:00:36.654976 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 05:00:36.666927 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 05:00:36.685691 sh[563]: Success
Jan 30 05:00:36.704995 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 05:00:36.791162 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 05:00:36.804836 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 05:00:36.808165 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 05:00:36.852266 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 05:00:36.852369 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:00:36.852392 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 05:00:36.854907 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 05:00:36.855018 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 05:00:36.867537 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 05:00:36.869398 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 05:00:36.876938 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 05:00:36.884189 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 05:00:36.903635 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:00:36.903881 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:00:36.904768 kernel: BTRFS info (device vda6): using free space tree
Jan 30 05:00:36.910635 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 05:00:36.928203 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 05:00:36.929202 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:00:36.938359 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 05:00:36.948984 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 05:00:37.096410 ignition[661]: Ignition 2.19.0
Jan 30 05:00:37.096428 ignition[661]: Stage: fetch-offline
Jan 30 05:00:37.096491 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:00:37.098789 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 05:00:37.096506 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:00:37.096658 ignition[661]: parsed url from cmdline: ""
Jan 30 05:00:37.096663 ignition[661]: no config URL provided
Jan 30 05:00:37.096669 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:00:37.096678 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:00:37.096685 ignition[661]: failed to fetch config: resource requires networking
Jan 30 05:00:37.096912 ignition[661]: Ignition finished successfully
Jan 30 05:00:37.117756 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 05:00:37.126003 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 05:00:37.171100 systemd-networkd[752]: lo: Link UP
Jan 30 05:00:37.171120 systemd-networkd[752]: lo: Gained carrier
Jan 30 05:00:37.176078 systemd-networkd[752]: Enumeration completed
Jan 30 05:00:37.176838 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 05:00:37.176881 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 05:00:37.176888 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 30 05:00:37.178034 systemd[1]: Reached target network.target - Network.
Jan 30 05:00:37.178334 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:00:37.178341 systemd-networkd[752]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:00:37.179420 systemd-networkd[752]: eth0: Link UP
Jan 30 05:00:37.179428 systemd-networkd[752]: eth0: Gained carrier
Jan 30 05:00:37.179460 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 05:00:37.184455 systemd-networkd[752]: eth1: Link UP
Jan 30 05:00:37.184460 systemd-networkd[752]: eth1: Gained carrier
Jan 30 05:00:37.184477 systemd-networkd[752]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:00:37.186159 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 05:00:37.204305 systemd-networkd[752]: eth0: DHCPv4 address 146.190.174.183/20, gateway 146.190.160.1 acquired from 169.254.169.253
Jan 30 05:00:37.216766 systemd-networkd[752]: eth1: DHCPv4 address 10.124.0.8/20 acquired from 169.254.169.253
Jan 30 05:00:37.233939 ignition[755]: Ignition 2.19.0
Jan 30 05:00:37.233954 ignition[755]: Stage: fetch
Jan 30 05:00:37.234296 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:00:37.234317 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:00:37.234507 ignition[755]: parsed url from cmdline: ""
Jan 30 05:00:37.234514 ignition[755]: no config URL provided
Jan 30 05:00:37.234523 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:00:37.234539 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:00:37.234602 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 30 05:00:37.252700 ignition[755]: GET result: OK
Jan 30 05:00:37.253632 ignition[755]: parsing config with SHA512: 735f98133f8f4b27ef98cdbc9a3e059869eca00ad30e0780707d61cba480e00eb8442fdb37a34320e7d7d6cd700b88ef51b0eb6101d54e45dedb0613d1ec7245
Jan 30 05:00:37.264849 unknown[755]: fetched base config from "system"
Jan 30 05:00:37.264883 unknown[755]: fetched base config from "system"
Jan 30 05:00:37.265906 ignition[755]: fetch: fetch complete
Jan 30 05:00:37.264895 unknown[755]: fetched user config from "digitalocean"
Jan 30 05:00:37.265916 ignition[755]: fetch: fetch passed
Jan 30 05:00:37.266018 ignition[755]: Ignition finished successfully
Jan 30 05:00:37.268760 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 05:00:37.280915 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 05:00:37.310072 ignition[762]: Ignition 2.19.0
Jan 30 05:00:37.310094 ignition[762]: Stage: kargs
Jan 30 05:00:37.310457 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:00:37.310476 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:00:37.315633 ignition[762]: kargs: kargs passed
Jan 30 05:00:37.316408 ignition[762]: Ignition finished successfully
Jan 30 05:00:37.318855 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 05:00:37.325932 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 05:00:37.366532 ignition[768]: Ignition 2.19.0
Jan 30 05:00:37.366546 ignition[768]: Stage: disks
Jan 30 05:00:37.366885 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:00:37.366905 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:00:37.371277 ignition[768]: disks: disks passed
Jan 30 05:00:37.373236 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 05:00:37.371420 ignition[768]: Ignition finished successfully
Jan 30 05:00:37.375265 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 05:00:37.376325 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 05:00:37.377214 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 05:00:37.378103 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 05:00:37.379133 systemd[1]: Reached target basic.target - Basic System.
Jan 30 05:00:37.385993 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 05:00:37.417313 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 05:00:37.421472 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 05:00:37.431915 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 05:00:37.552602 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 05:00:37.553845 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 05:00:37.555034 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 05:00:37.560818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 05:00:37.570882 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 05:00:37.575507 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 30 05:00:37.583590 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784)
Jan 30 05:00:37.585941 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 05:00:37.590380 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:00:37.590416 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:00:37.590429 kernel: BTRFS info (device vda6): using free space tree
Jan 30 05:00:37.590996 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 05:00:37.591044 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 05:00:37.595575 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 05:00:37.607584 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 05:00:37.618003 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 05:00:37.626153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 05:00:37.682401 coreos-metadata[787]: Jan 30 05:00:37.681 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:00:37.694434 coreos-metadata[786]: Jan 30 05:00:37.694 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:00:37.695743 coreos-metadata[787]: Jan 30 05:00:37.695 INFO Fetch successful Jan 30 05:00:37.702694 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 05:00:37.704418 coreos-metadata[787]: Jan 30 05:00:37.703 INFO wrote hostname ci-4081.3.0-d-9062e890fd to /sysroot/etc/hostname Jan 30 05:00:37.705163 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 05:00:37.709517 coreos-metadata[786]: Jan 30 05:00:37.709 INFO Fetch successful Jan 30 05:00:37.714732 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Jan 30 05:00:37.716519 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 05:00:37.716698 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 05:00:37.724130 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 05:00:37.730094 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 05:00:37.859889 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 05:00:37.865801 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 05:00:37.867980 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 05:00:37.885001 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 05:00:37.886509 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:00:37.914467 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 05:00:37.935665 ignition[906]: INFO : Ignition 2.19.0 Jan 30 05:00:37.935665 ignition[906]: INFO : Stage: mount Jan 30 05:00:37.937459 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:00:37.937459 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 05:00:37.938654 ignition[906]: INFO : mount: mount passed Jan 30 05:00:37.938654 ignition[906]: INFO : Ignition finished successfully Jan 30 05:00:37.940740 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 05:00:37.946852 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 05:00:37.980092 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 05:00:37.993620 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918) Jan 30 05:00:37.997616 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:00:37.997734 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 05:00:37.997756 kernel: BTRFS info (device vda6): using free space tree Jan 30 05:00:38.003622 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 05:00:38.008059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
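The metadata hostname agent fetches the droplet's metadata document and persists the hostname into the target root, per the "wrote hostname ... to /sysroot/etc/hostname" line above. A hedged Python approximation (the agent's real implementation differs; the "hostname" field name follows DigitalOcean's v1 metadata schema):

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def write_hostname(sysroot: str = "/sysroot") -> str:
        with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
            meta = json.load(resp)
        hostname = meta["hostname"]
        # Written under the still-mounted /sysroot so the booted system
        # comes up with the name the platform assigned.
        with open(f"{sysroot}/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname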
Jan 30 05:00:38.045753 ignition[935]: INFO : Ignition 2.19.0 Jan 30 05:00:38.045753 ignition[935]: INFO : Stage: files Jan 30 05:00:38.047154 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:00:38.047154 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 05:00:38.048417 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Jan 30 05:00:38.049138 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 05:00:38.049138 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 05:00:38.053735 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 05:00:38.054664 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 05:00:38.054664 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 05:00:38.054330 unknown[935]: wrote ssh authorized keys file for user: core Jan 30 05:00:38.057576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 05:00:38.057576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 05:00:38.057576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 05:00:38.057576 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 05:00:38.100021 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 05:00:38.210745 systemd-networkd[752]: eth0: Gained IPv6LL Jan 30 05:00:38.275650 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 05:00:38.275650 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 05:00:38.277474 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 05:00:38.658909 systemd-networkd[752]: eth1: Gained IPv6LL Jan 30 05:00:38.757410 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 30 05:00:38.866598 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 05:00:38.866598 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 30 05:00:38.866598 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 05:00:38.866598 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 05:00:38.866598 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 05:00:38.866598 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 05:00:38.872506 ignition[935]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 05:00:38.872506 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 05:00:38.872506 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 05:00:38.872506 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 05:00:38.872506 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 05:00:38.872506 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:00:38.872506 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:00:38.872506 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:00:38.872506 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 05:00:39.315173 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 30 05:00:39.732440 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:00:39.732440 ignition[935]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 30 05:00:39.734258 ignition[935]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 05:00:39.734258 ignition[935]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 05:00:39.734258 ignition[935]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 30 05:00:39.734258 ignition[935]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 30 05:00:39.734258 ignition[935]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 05:00:39.734258 ignition[935]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 05:00:39.734258 ignition[935]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 30 05:00:39.741466 ignition[935]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 30 05:00:39.741466 ignition[935]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 05:00:39.741466 ignition[935]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 05:00:39.741466 ignition[935]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Jan 30 05:00:39.741466 ignition[935]: INFO : files: files passed Jan 30 05:00:39.741466 ignition[935]: INFO : Ignition finished successfully Jan 30 05:00:39.737002 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 05:00:39.744971 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 05:00:39.750759 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 05:00:39.755434 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 05:00:39.755644 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 05:00:39.780324 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:00:39.781809 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:00:39.783545 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:00:39.784988 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 05:00:39.786377 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 05:00:39.792024 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 05:00:39.853241 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 05:00:39.853379 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 05:00:39.854998 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 05:00:39.856091 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 05:00:39.857213 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 05:00:39.872992 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 05:00:39.891997 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 05:00:39.897892 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 05:00:39.926596 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:00:39.927429 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:00:39.928171 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 05:00:39.929067 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 05:00:39.929303 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 05:00:39.930426 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 05:00:39.931359 systemd[1]: Stopped target basic.target - Basic System. Jan 30 05:00:39.932310 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 05:00:39.933090 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 05:00:39.934089 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 05:00:39.934873 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 05:00:39.935796 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 05:00:39.936666 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 30 05:00:39.937656 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 05:00:39.938727 systemd[1]: Stopped target swap.target - Swaps. Jan 30 05:00:39.939544 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 05:00:39.939847 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 05:00:39.940948 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:00:39.941898 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:00:39.942753 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 05:00:39.942900 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 05:00:39.943652 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 05:00:39.943866 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 05:00:39.945016 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 05:00:39.945289 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 05:00:39.946226 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 05:00:39.946433 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 05:00:39.947600 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 05:00:39.947776 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 05:00:39.960871 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 05:00:39.961463 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 05:00:39.961836 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:00:39.966048 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 05:00:39.967254 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 05:00:39.967535 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:00:39.976601 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 05:00:39.976802 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 05:00:39.984702 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 05:00:39.984873 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 05:00:40.005388 ignition[987]: INFO : Ignition 2.19.0 Jan 30 05:00:40.005388 ignition[987]: INFO : Stage: umount Jan 30 05:00:40.010352 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:00:40.010352 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 05:00:40.012629 ignition[987]: INFO : umount: umount passed Jan 30 05:00:40.012629 ignition[987]: INFO : Ignition finished successfully Jan 30 05:00:40.012592 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 05:00:40.012778 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 05:00:40.023084 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 05:00:40.024407 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 05:00:40.024622 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 05:00:40.026013 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 05:00:40.026126 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jan 30 05:00:40.036242 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 05:00:40.036360 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 05:00:40.037002 systemd[1]: Stopped target network.target - Network. Jan 30 05:00:40.037448 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 05:00:40.037573 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 05:00:40.059275 systemd[1]: Stopped target paths.target - Path Units. Jan 30 05:00:40.059626 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 05:00:40.060783 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 05:00:40.061851 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 05:00:40.062287 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 05:00:40.063456 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 05:00:40.063524 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 05:00:40.064744 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 05:00:40.064828 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 05:00:40.068344 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 05:00:40.068793 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 05:00:40.069348 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 05:00:40.069422 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 05:00:40.074466 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 05:00:40.075175 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 05:00:40.078779 systemd-networkd[752]: eth1: DHCPv6 lease lost Jan 30 05:00:40.079930 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 05:00:40.080141 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 05:00:40.082386 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 05:00:40.082571 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 05:00:40.086961 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 30 05:00:40.089884 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 05:00:40.090130 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 05:00:40.092454 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 05:00:40.092561 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:00:40.093258 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 05:00:40.093332 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 05:00:40.102998 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 05:00:40.104330 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 05:00:40.104476 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 05:00:40.105404 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 05:00:40.105496 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:00:40.109322 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 05:00:40.109416 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 30 05:00:40.111011 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 05:00:40.111128 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:00:40.112905 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:00:40.135140 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 05:00:40.136145 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:00:40.138779 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 05:00:40.138975 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 05:00:40.141001 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 05:00:40.141119 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 05:00:40.142425 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 05:00:40.142497 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 05:00:40.143470 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 05:00:40.143590 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 05:00:40.145153 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 05:00:40.145271 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 05:00:40.146203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 05:00:40.146296 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 05:00:40.154192 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 05:00:40.155878 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 05:00:40.156833 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 05:00:40.159068 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:00:40.159194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:00:40.168438 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 05:00:40.169874 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 05:00:40.172466 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 05:00:40.181999 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 05:00:40.199689 systemd[1]: Switching root. Jan 30 05:00:40.232956 systemd-journald[184]: Journal stopped Jan 30 05:00:41.867287 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 30 05:00:41.867463 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 05:00:41.867487 kernel: SELinux: policy capability open_perms=1 Jan 30 05:00:41.867515 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 05:00:41.871609 kernel: SELinux: policy capability always_check_network=0 Jan 30 05:00:41.871653 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 05:00:41.871672 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 05:00:41.871700 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 05:00:41.871715 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 05:00:41.871758 kernel: audit: type=1403 audit(1738213240.483:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 05:00:41.871796 systemd[1]: Successfully loaded SELinux policy in 54.815ms. 
Jan 30 05:00:41.871835 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.176ms. Jan 30 05:00:41.871860 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 05:00:41.871874 systemd[1]: Detected virtualization kvm. Jan 30 05:00:41.871890 systemd[1]: Detected architecture x86-64. Jan 30 05:00:41.871907 systemd[1]: Detected first boot. Jan 30 05:00:41.871923 systemd[1]: Hostname set to <ci-4081.3.0-d-9062e890fd>. Jan 30 05:00:41.871940 systemd[1]: Initializing machine ID from VM UUID. Jan 30 05:00:41.871955 zram_generator::config[1055]: No configuration found. Jan 30 05:00:41.871971 systemd[1]: Populated /etc with preset unit settings. Jan 30 05:00:41.871992 systemd[1]: Queued start job for default target multi-user.target. Jan 30 05:00:41.872006 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 05:00:41.872021 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 05:00:41.872039 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 05:00:41.872059 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 05:00:41.872074 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 05:00:41.872087 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 05:00:41.872102 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 05:00:41.872118 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 05:00:41.872136 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 05:00:41.872150 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 05:00:41.872163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 05:00:41.872179 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 05:00:41.872194 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 05:00:41.872207 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 05:00:41.872240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 05:00:41.872261 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 05:00:41.872277 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:00:41.872302 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 05:00:41.872320 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:00:41.872345 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 05:00:41.872364 systemd[1]: Reached target slices.target - Slice Units. Jan 30 05:00:41.872387 systemd[1]: Reached target swap.target - Swaps. Jan 30 05:00:41.872404 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 05:00:41.872425 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
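"Initializing machine ID from VM UUID" above refers to deriving a stable /etc/machine-id from the UUID the hypervisor assigns the instance. A rough sketch of the idea, assuming the DMI path available on KVM guests (this is not systemd's actual code path; machine_id_from_vm_uuid is an invented helper):

    import pathlib
    import uuid

    def machine_id_from_vm_uuid() -> str:
        # On KVM the hypervisor-assigned instance UUID is exposed via DMI;
        # dropping the dashes yields the 32-character machine-id form.
        raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
        return uuid.UUID(raw).hex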
Jan 30 05:00:41.872452 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 05:00:41.872468 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 05:00:41.872480 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:00:41.872493 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 05:00:41.872506 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 05:00:41.872519 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 05:00:41.872532 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 05:00:41.872544 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 05:00:41.872577 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 05:00:41.872597 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:00:41.872611 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 05:00:41.872624 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 05:00:41.872639 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 05:00:41.872655 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 05:00:41.872667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:00:41.872680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 05:00:41.872693 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 05:00:41.872714 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:00:41.872727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:00:41.872740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:00:41.872753 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 05:00:41.872766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:00:41.872782 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 05:00:41.872797 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 05:00:41.872813 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 05:00:41.872835 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 05:00:41.872851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 05:00:41.872866 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 05:00:41.872879 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 05:00:41.872895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 05:00:41.872911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:00:41.872923 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 30 05:00:41.872939 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 05:00:41.872960 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 05:00:41.872977 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 05:00:41.872990 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 05:00:41.873003 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 05:00:41.873020 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:00:41.873033 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 05:00:41.873045 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 05:00:41.873061 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 05:00:41.873078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:00:41.873141 systemd-journald[1146]: Collecting audit messages is disabled. Jan 30 05:00:41.873190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:00:41.873203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:00:41.873216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:00:41.873234 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 05:00:41.873250 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 05:00:41.873265 systemd-journald[1146]: Journal started Jan 30 05:00:41.873292 systemd-journald[1146]: Runtime Journal (/run/log/journal/2695a9b383b14564b66fb8a88d92f938) is 4.9M, max 39.3M, 34.4M free. Jan 30 05:00:41.875631 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 05:00:41.882080 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 05:00:41.891582 kernel: fuse: init (API version 7.39) Jan 30 05:00:41.900254 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 05:00:41.900525 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 05:00:41.907513 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 05:00:41.910626 kernel: loop: module loaded Jan 30 05:00:41.919992 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 05:00:41.927747 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 05:00:41.928310 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 05:00:41.946782 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 05:00:41.964012 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 05:00:41.964716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:00:41.968405 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 05:00:41.982798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:00:41.991875 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 30 05:00:41.995795 systemd-journald[1146]: Time spent on flushing to /var/log/journal/2695a9b383b14564b66fb8a88d92f938 is 72.388ms for 971 entries. Jan 30 05:00:41.995795 systemd-journald[1146]: System Journal (/var/log/journal/2695a9b383b14564b66fb8a88d92f938) is 8.0M, max 195.6M, 187.6M free. Jan 30 05:00:42.098079 systemd-journald[1146]: Received client request to flush runtime journal. Jan 30 05:00:42.098138 kernel: ACPI: bus type drm_connector registered Jan 30 05:00:41.997381 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:00:41.997675 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:00:42.000624 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 05:00:42.006760 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 05:00:42.009288 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 05:00:42.011971 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:00:42.033004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:00:42.054147 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 05:00:42.054862 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 05:00:42.104190 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:00:42.107390 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 05:00:42.111770 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 30 05:00:42.111794 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jan 30 05:00:42.118961 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 05:00:42.131840 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 05:00:42.150899 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 05:00:42.154760 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:00:42.165342 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 05:00:42.210169 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 05:00:42.219767 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 05:00:42.243423 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jan 30 05:00:42.243445 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jan 30 05:00:42.253037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 05:00:42.924311 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 05:00:42.943085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:00:42.977157 systemd-udevd[1223]: Using default interface naming scheme 'v255'. Jan 30 05:00:43.012814 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:00:43.028076 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 05:00:43.054821 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 05:00:43.142620 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Jan 30 05:00:43.171592 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1226) Jan 30 05:00:43.199182 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:00:43.200969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:00:43.208853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:00:43.217942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:00:43.237782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:00:43.242444 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 05:00:43.242545 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 05:00:43.245768 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:00:43.246425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:00:43.252293 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:00:43.283923 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 05:00:43.289518 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:00:43.290166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:00:43.294723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:00:43.297082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:00:43.301009 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:00:43.320632 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 05:00:43.330786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:00:43.344599 kernel: ACPI: button: Power Button [PWRF] Jan 30 05:00:43.347591 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 05:00:43.408644 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 05:00:43.430853 systemd-networkd[1227]: lo: Link UP Jan 30 05:00:43.431381 systemd-networkd[1227]: lo: Gained carrier Jan 30 05:00:43.435873 systemd-networkd[1227]: Enumeration completed Jan 30 05:00:43.436352 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 05:00:43.439162 systemd-networkd[1227]: eth0: Configuring with /run/systemd/network/10-3e:c3:63:b8:bf:e5.network. Jan 30 05:00:43.445160 systemd-networkd[1227]: eth1: Configuring with /run/systemd/network/10-8e:c1:ef:bb:90:2e.network. Jan 30 05:00:43.445787 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
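Unlike the initrd's generic yy-digitalocean/zz-default units earlier in the log, the interfaces are now matched against generated units named after their MAC addresses, e.g. /run/systemd/network/10-3e:c3:63:b8:bf:e5.network. A sketch of rendering such a unit; the [Match]/[Network] keys shown are an assumption about what the generator writes, not a dump of the real file:

    def render_network_unit(mac: str) -> str:
        # Minimal MAC-matched networkd unit, DHCPv4 only.
        return (
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            "DHCP=ipv4\n"
        )

    print(render_network_unit("3e:c3:63:b8:bf:e5"))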
Jan 30 05:00:43.448778 systemd-networkd[1227]: eth0: Link UP Jan 30 05:00:43.448794 systemd-networkd[1227]: eth0: Gained carrier Jan 30 05:00:43.455070 systemd-networkd[1227]: eth1: Link UP Jan 30 05:00:43.455085 systemd-networkd[1227]: eth1: Gained carrier Jan 30 05:00:43.487584 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 05:00:43.526971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:00:43.548688 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 05:00:43.548163 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 05:00:43.551593 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 05:00:43.557579 kernel: Console: switching to colour dummy device 80x25 Jan 30 05:00:43.564979 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 05:00:43.565088 kernel: [drm] features: -context_init Jan 30 05:00:43.566574 kernel: [drm] number of scanouts: 1 Jan 30 05:00:43.566661 kernel: [drm] number of cap sets: 0 Jan 30 05:00:43.578606 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 05:00:43.586444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:00:43.586957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:00:43.609534 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 05:00:43.609735 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 05:00:43.614168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:00:43.638054 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 05:00:43.683257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:00:43.683977 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:00:43.699912 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:00:43.735686 kernel: EDAC MC: Ver: 3.0.0 Jan 30 05:00:43.772064 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 05:00:43.783038 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 05:00:43.809595 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 05:00:43.825781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:00:43.838947 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 05:00:43.840495 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:00:43.856223 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 05:00:43.864644 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 05:00:43.899386 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 05:00:43.903363 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 05:00:43.911078 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 05:00:43.914321 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 05:00:43.914675 systemd[1]: Reached target machines.target - Containers. 
Jan 30 05:00:43.922816 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 05:00:43.944001 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 05:00:43.948927 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 05:00:43.950820 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 05:00:43.955359 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 05:00:43.964913 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 05:00:43.971968 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 05:00:43.974071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:00:43.981953 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 05:00:43.998325 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 05:00:44.013108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 05:00:44.017058 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 05:00:44.035957 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 05:00:44.042512 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 05:00:44.052840 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 05:00:44.089627 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 05:00:44.119628 kernel: loop1: detected capacity change from 0 to 8 Jan 30 05:00:44.146115 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 05:00:44.183725 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 05:00:44.233931 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 05:00:44.290421 kernel: loop5: detected capacity change from 0 to 8 Jan 30 05:00:44.294948 kernel: loop6: detected capacity change from 0 to 210664 Jan 30 05:00:44.313874 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 05:00:44.352081 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 05:00:44.354030 (sd-merge)[1318]: Merged extensions into '/usr'. Jan 30 05:00:44.365239 systemd[1]: Reloading requested from client PID 1307 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 05:00:44.365267 systemd[1]: Reloading... Jan 30 05:00:44.522963 zram_generator::config[1345]: No configuration found. Jan 30 05:00:44.612250 systemd-networkd[1227]: eth0: Gained IPv6LL Jan 30 05:00:44.675840 systemd-networkd[1227]: eth1: Gained IPv6LL Jan 30 05:00:44.742133 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 05:00:44.756668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:00:44.853374 systemd[1]: Reloading finished in 487 ms. Jan 30 05:00:44.874180 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 05:00:44.880402 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
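The "(sd-merge)" lines show systemd-sysext discovering the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean images (the /etc/extensions symlink Ignition wrote earlier points the kubernetes one at /opt/extensions) and merging them into /usr. A sketch of the discovery half only, assuming the standard extension hierarchies (the directory list may be incomplete, and list_extension_images is an invented helper):

    import pathlib

    EXTENSION_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def list_extension_images() -> list[str]:
        # Collect candidate *.raw images or extension directories from the
        # hierarchies systemd-sysext scans before overlaying them onto /usr.
        found: list[str] = []
        for d in EXTENSION_DIRS:
            p = pathlib.Path(d)
            if p.is_dir():
                found.extend(sorted(str(e) for e in p.iterdir()))
        return found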
Jan 30 05:00:44.883233 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 05:00:44.894852 systemd[1]: Starting ensure-sysext.service... Jan 30 05:00:44.908120 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 05:00:44.920615 systemd[1]: Reloading requested from client PID 1398 ('systemctl') (unit ensure-sysext.service)... Jan 30 05:00:44.920643 systemd[1]: Reloading... Jan 30 05:00:44.961017 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 05:00:44.963907 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 05:00:44.965816 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 05:00:44.966327 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. Jan 30 05:00:44.966425 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. Jan 30 05:00:44.974068 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:00:44.974296 systemd-tmpfiles[1399]: Skipping /boot Jan 30 05:00:44.996696 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:00:44.996947 systemd-tmpfiles[1399]: Skipping /boot Jan 30 05:00:45.069255 zram_generator::config[1427]: No configuration found. Jan 30 05:00:45.226028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:00:45.305212 systemd[1]: Reloading finished in 384 ms. Jan 30 05:00:45.325208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:00:45.348947 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:00:45.360823 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 05:00:45.367722 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 05:00:45.380906 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 05:00:45.392863 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 05:00:45.411409 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:00:45.411688 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:00:45.421033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:00:45.435505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:00:45.450898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:00:45.452895 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:00:45.453061 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:00:45.461309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:00:45.470532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 30 05:00:45.471901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:00:45.472125 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:00:45.488238 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:00:45.488617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:00:45.510227 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 05:00:45.516696 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 05:00:45.531123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:00:45.531434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:00:45.540950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:00:45.551578 augenrules[1513]: No rules Jan 30 05:00:45.556012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:00:45.565908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:00:45.580709 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:00:45.582653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:00:45.597913 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 05:00:45.601081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:00:45.603867 systemd-resolved[1488]: Positive Trust Anchors: Jan 30 05:00:45.603880 systemd-resolved[1488]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 05:00:45.603917 systemd-resolved[1488]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 05:00:45.605196 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 05:00:45.611274 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 05:00:45.612596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:00:45.612926 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:00:45.617939 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 05:00:45.618152 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:00:45.622529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:00:45.622820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:00:45.625067 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:00:45.625312 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
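The positive trust anchor systemd-resolved prints above is the root zone's DS record (the 2017 root KSK). Its fields read off per RFC 4034: owner, class, type, key tag, algorithm (8 = RSA/SHA-256), digest type (2 = SHA-256), digest. A small parser sketch (parse_ds is an invented helper):

    def parse_ds(record: str) -> dict:
        owner, _cls, _type, key_tag, alg, digest_type, digest = record.split()
        return {
            "owner": owner,
            "key_tag": int(key_tag),          # 20326 for the 2017 root KSK
            "algorithm": int(alg),            # 8 = RSA/SHA-256
            "digest_type": int(digest_type),  # 2 = SHA-256
            "digest": digest,
        }

    parse_ds(". IN DS 20326 8 2 "
             "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")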
Jan 30 05:00:45.633375 systemd-resolved[1488]: Using system hostname 'ci-4081.3.0-d-9062e890fd'. Jan 30 05:00:45.638207 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 05:00:45.642286 systemd[1]: Finished ensure-sysext.service. Jan 30 05:00:45.648181 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 05:00:45.653890 systemd[1]: Reached target network.target - Network. Jan 30 05:00:45.655662 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 05:00:45.656498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:00:45.658230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:00:45.658342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:00:45.664922 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 05:00:45.665667 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 05:00:45.743073 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 05:00:45.745416 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 05:00:45.746161 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 05:00:45.747576 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 05:00:45.748186 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 05:00:45.748750 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 05:00:45.748793 systemd[1]: Reached target paths.target - Path Units. Jan 30 05:00:45.749283 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 05:00:45.750522 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 05:00:45.751350 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 05:00:45.752088 systemd[1]: Reached target timers.target - Timer Units. Jan 30 05:00:45.754119 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 05:00:45.760482 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 05:00:45.765899 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 05:00:45.768807 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 05:00:45.771000 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 05:00:45.772527 systemd[1]: Reached target basic.target - Basic System. Jan 30 05:00:45.774871 systemd[1]: System is tainted: cgroupsv1 Jan 30 05:00:45.774971 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:00:45.775008 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:00:45.780761 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 05:00:45.797947 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jan 30 05:00:45.805390 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 05:00:45.822720 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 05:00:45.827218 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 05:00:45.829961 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 05:00:45.841848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:00:45.845720 jq[1550]: false Jan 30 05:00:45.851757 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 05:00:45.867901 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 05:00:45.885814 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 05:00:45.892522 dbus-daemon[1547]: [system] SELinux support is enabled Jan 30 05:00:45.900846 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 05:00:45.917341 coreos-metadata[1545]: Jan 30 05:00:45.917 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:00:45.923926 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 05:00:45.934432 coreos-metadata[1545]: Jan 30 05:00:45.933 INFO Fetch successful Jan 30 05:00:46.933627 systemd-resolved[1488]: Clock change detected. Flushing caches. Jan 30 05:00:46.934223 systemd-timesyncd[1540]: Contacted time server 168.61.215.74:123 (0.flatcar.pool.ntp.org). Jan 30 05:00:46.934305 systemd-timesyncd[1540]: Initial clock synchronization to Thu 2025-01-30 05:00:46.933542 UTC. Jan 30 05:00:46.950733 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 05:00:46.954328 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 05:00:46.970976 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 05:00:46.983874 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 05:00:46.992558 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 05:00:46.999790 extend-filesystems[1551]: Found loop4 Jan 30 05:00:46.999790 extend-filesystems[1551]: Found loop5 Jan 30 05:00:46.999790 extend-filesystems[1551]: Found loop6 Jan 30 05:00:46.999790 extend-filesystems[1551]: Found loop7 Jan 30 05:00:46.999790 extend-filesystems[1551]: Found vda Jan 30 05:00:46.999790 extend-filesystems[1551]: Found vda1 Jan 30 05:00:46.999790 extend-filesystems[1551]: Found vda2 Jan 30 05:00:46.999790 extend-filesystems[1551]: Found vda3 Jan 30 05:00:46.999790 extend-filesystems[1551]: Found usr Jan 30 05:00:46.999790 extend-filesystems[1551]: Found vda4 Jan 30 05:00:46.999790 extend-filesystems[1551]: Found vda6 Jan 30 05:00:46.999790 extend-filesystems[1551]: Found vda7 Jan 30 05:00:47.087207 extend-filesystems[1551]: Found vda9 Jan 30 05:00:47.087207 extend-filesystems[1551]: Checking size of /dev/vda9 Jan 30 05:00:47.087207 extend-filesystems[1551]: Resized partition /dev/vda9 Jan 30 05:00:47.139204 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 05:00:47.027806 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 30 05:00:47.139363 extend-filesystems[1592]: resize2fs 1.47.1 (20-May-2024) Jan 30 05:00:47.028190 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 05:00:47.153350 jq[1580]: true Jan 30 05:00:47.050486 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 05:00:47.153707 update_engine[1578]: I20250130 05:00:47.150703 1578 main.cc:92] Flatcar Update Engine starting Jan 30 05:00:47.056096 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 05:00:47.171546 update_engine[1578]: I20250130 05:00:47.164005 1578 update_check_scheduler.cc:74] Next update check in 2m19s Jan 30 05:00:47.094786 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 05:00:47.099501 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 05:00:47.101632 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 05:00:47.172355 jq[1595]: true Jan 30 05:00:47.179310 (ntainerd)[1596]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 05:00:47.205906 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 05:00:47.236502 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 05:00:47.237124 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 05:00:47.237176 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 05:00:47.239559 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 05:00:47.240593 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 05:00:47.240646 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 05:00:47.241425 systemd[1]: Started update-engine.service - Update Engine. Jan 30 05:00:47.249312 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 05:00:47.253317 systemd-logind[1574]: New seat seat0. Jan 30 05:00:47.256291 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 05:00:47.266227 tar[1593]: linux-amd64/helm Jan 30 05:00:47.264565 systemd-logind[1574]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 05:00:47.264592 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 05:00:47.278061 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 30 05:00:47.346311 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1615) Jan 30 05:00:47.365073 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 05:00:47.388041 extend-filesystems[1592]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 05:00:47.388041 extend-filesystems[1592]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 05:00:47.388041 extend-filesystems[1592]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 05:00:47.387496 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 05:00:47.405637 extend-filesystems[1551]: Resized filesystem in /dev/vda9 Jan 30 05:00:47.405637 extend-filesystems[1551]: Found vdb Jan 30 05:00:47.387859 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 05:00:47.447736 bash[1640]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:00:47.452390 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 05:00:47.486596 systemd[1]: Starting sshkeys.service... Jan 30 05:00:47.539235 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 05:00:47.543859 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 05:00:47.558959 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 05:00:47.723277 coreos-metadata[1650]: Jan 30 05:00:47.722 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:00:47.739498 coreos-metadata[1650]: Jan 30 05:00:47.738 INFO Fetch successful Jan 30 05:00:47.764026 unknown[1650]: wrote ssh authorized keys file for user: core Jan 30 05:00:47.770447 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 05:00:47.790608 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 05:00:47.809499 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 05:00:47.827448 containerd[1596]: time="2025-01-30T05:00:47.827223372Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 05:00:47.839741 update-ssh-keys[1671]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:00:47.837452 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 05:00:47.843199 systemd[1]: Finished sshkeys.service. Jan 30 05:00:47.869190 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 05:00:47.869613 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 05:00:47.888917 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 05:00:47.926076 containerd[1596]: time="2025-01-30T05:00:47.925985699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.933563361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.933632543Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.933663264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.934022012Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.934065354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.934168864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.934191048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.934610343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.934640147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 05:00:47.934883 containerd[1596]: time="2025-01-30T05:00:47.934660184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:00:47.938311 containerd[1596]: time="2025-01-30T05:00:47.934673536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 05:00:47.938311 containerd[1596]: time="2025-01-30T05:00:47.938183403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:00:47.942133 containerd[1596]: time="2025-01-30T05:00:47.941858359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:00:47.942319 containerd[1596]: time="2025-01-30T05:00:47.942270891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:00:47.942366 containerd[1596]: time="2025-01-30T05:00:47.942318949Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 05:00:47.942554 containerd[1596]: time="2025-01-30T05:00:47.942525348Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 05:00:47.942640 containerd[1596]: time="2025-01-30T05:00:47.942619267Z" level=info msg="metadata content store policy set" policy=shared Jan 30 05:00:47.955363 containerd[1596]: time="2025-01-30T05:00:47.954217539Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 05:00:47.955363 containerd[1596]: time="2025-01-30T05:00:47.954313800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 30 05:00:47.955363 containerd[1596]: time="2025-01-30T05:00:47.954341905Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 05:00:47.955363 containerd[1596]: time="2025-01-30T05:00:47.954370171Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 05:00:47.955363 containerd[1596]: time="2025-01-30T05:00:47.954409807Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 05:00:47.955363 containerd[1596]: time="2025-01-30T05:00:47.954658772Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 05:00:47.955363 containerd[1596]: time="2025-01-30T05:00:47.955147989Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955428638Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955459123Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955477541Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955498678Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955533980Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955557753Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955576734Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955597673Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955615701Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955631939Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 05:00:47.956020 containerd[1596]: time="2025-01-30T05:00:47.955656243Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 05:00:47.957073 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.963854953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.963920755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.963945351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.963973017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.963994402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964017064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964036721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964084117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964114523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964140868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964158531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964178951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964202298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964226261Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 05:00:47.965937 containerd[1596]: time="2025-01-30T05:00:47.964266984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964319950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964340496Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964407430Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964435679Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964456755Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964474811Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964488451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964507757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964522861Z" level=info msg="NRI interface is disabled by configuration." Jan 30 05:00:47.966545 containerd[1596]: time="2025-01-30T05:00:47.964558730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 05:00:47.967657 containerd[1596]: time="2025-01-30T05:00:47.967095330Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 05:00:47.970090 containerd[1596]: time="2025-01-30T05:00:47.969757259Z" level=info msg="Connect containerd service" Jan 30 05:00:47.970090 containerd[1596]: time="2025-01-30T05:00:47.969889383Z" level=info msg="using 
legacy CRI server" Jan 30 05:00:47.970090 containerd[1596]: time="2025-01-30T05:00:47.969905296Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 05:00:47.970090 containerd[1596]: time="2025-01-30T05:00:47.970072391Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 05:00:47.977738 containerd[1596]: time="2025-01-30T05:00:47.974221876Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 05:00:47.977738 containerd[1596]: time="2025-01-30T05:00:47.977050495Z" level=info msg="Start subscribing containerd event" Jan 30 05:00:47.977738 containerd[1596]: time="2025-01-30T05:00:47.977145877Z" level=info msg="Start recovering state" Jan 30 05:00:47.977738 containerd[1596]: time="2025-01-30T05:00:47.977242950Z" level=info msg="Start event monitor" Jan 30 05:00:47.977738 containerd[1596]: time="2025-01-30T05:00:47.977260443Z" level=info msg="Start snapshots syncer" Jan 30 05:00:47.977738 containerd[1596]: time="2025-01-30T05:00:47.977270737Z" level=info msg="Start cni network conf syncer for default" Jan 30 05:00:47.977738 containerd[1596]: time="2025-01-30T05:00:47.977278414Z" level=info msg="Start streaming server" Jan 30 05:00:47.974534 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 05:00:47.978430 containerd[1596]: time="2025-01-30T05:00:47.978009854Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 05:00:47.978430 containerd[1596]: time="2025-01-30T05:00:47.978069305Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 05:00:47.978430 containerd[1596]: time="2025-01-30T05:00:47.978125623Z" level=info msg="containerd successfully booted in 0.153708s" Jan 30 05:00:47.991389 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 05:00:47.997933 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 05:00:48.002857 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 05:00:48.457918 tar[1593]: linux-amd64/LICENSE Jan 30 05:00:48.460348 tar[1593]: linux-amd64/README.md Jan 30 05:00:48.479335 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 05:00:48.820014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:00:48.825615 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 05:00:48.832721 systemd[1]: Startup finished in 8.117s (kernel) + 7.404s (userspace) = 15.521s. Jan 30 05:00:48.837276 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:00:49.713297 kubelet[1707]: E0130 05:00:49.713205 1707 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:00:49.715503 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:00:49.715828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:00:55.114603 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 30 05:00:55.124093 systemd[1]: Started sshd@0-146.190.174.183:22-147.75.109.163:34442.service - OpenSSH per-connection server daemon (147.75.109.163:34442). Jan 30 05:00:55.200168 sshd[1720]: Accepted publickey for core from 147.75.109.163 port 34442 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:00:55.204141 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:00:55.216194 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 05:00:55.225155 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 05:00:55.228493 systemd-logind[1574]: New session 1 of user core. Jan 30 05:00:55.246038 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 05:00:55.255220 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 05:00:55.263351 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 05:00:55.391320 systemd[1726]: Queued start job for default target default.target. Jan 30 05:00:55.391870 systemd[1726]: Created slice app.slice - User Application Slice. Jan 30 05:00:55.391896 systemd[1726]: Reached target paths.target - Paths. Jan 30 05:00:55.391916 systemd[1726]: Reached target timers.target - Timers. Jan 30 05:00:55.403950 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 05:00:55.419169 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 05:00:55.419477 systemd[1726]: Reached target sockets.target - Sockets. Jan 30 05:00:55.419595 systemd[1726]: Reached target basic.target - Basic System. Jan 30 05:00:55.420101 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 05:00:55.420265 systemd[1726]: Reached target default.target - Main User Target. Jan 30 05:00:55.420330 systemd[1726]: Startup finished in 147ms. Jan 30 05:00:55.426133 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 05:00:55.494926 systemd[1]: Started sshd@1-146.190.174.183:22-147.75.109.163:34458.service - OpenSSH per-connection server daemon (147.75.109.163:34458). Jan 30 05:00:55.550543 sshd[1738]: Accepted publickey for core from 147.75.109.163 port 34458 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:00:55.552945 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:00:55.559766 systemd-logind[1574]: New session 2 of user core. Jan 30 05:00:55.568318 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 05:00:55.640980 sshd[1738]: pam_unix(sshd:session): session closed for user core Jan 30 05:00:55.647250 systemd[1]: Started sshd@2-146.190.174.183:22-147.75.109.163:34462.service - OpenSSH per-connection server daemon (147.75.109.163:34462). Jan 30 05:00:55.647993 systemd[1]: sshd@1-146.190.174.183:22-147.75.109.163:34458.service: Deactivated successfully. Jan 30 05:00:55.656617 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 05:00:55.657492 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Jan 30 05:00:55.660563 systemd-logind[1574]: Removed session 2. 
Jan 30 05:00:55.702400 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 34462 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:00:55.705476 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:00:55.714140 systemd-logind[1574]: New session 3 of user core. Jan 30 05:00:55.721331 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 05:00:55.781722 sshd[1743]: pam_unix(sshd:session): session closed for user core Jan 30 05:00:55.799242 systemd[1]: Started sshd@3-146.190.174.183:22-147.75.109.163:34472.service - OpenSSH per-connection server daemon (147.75.109.163:34472). Jan 30 05:00:55.800035 systemd[1]: sshd@2-146.190.174.183:22-147.75.109.163:34462.service: Deactivated successfully. Jan 30 05:00:55.807622 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 05:00:55.810241 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Jan 30 05:00:55.812004 systemd-logind[1574]: Removed session 3. Jan 30 05:00:55.851866 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 34472 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:00:55.855208 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:00:55.863547 systemd-logind[1574]: New session 4 of user core. Jan 30 05:00:55.871265 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 05:00:55.943340 sshd[1751]: pam_unix(sshd:session): session closed for user core Jan 30 05:00:55.954236 systemd[1]: Started sshd@4-146.190.174.183:22-147.75.109.163:34474.service - OpenSSH per-connection server daemon (147.75.109.163:34474). Jan 30 05:00:55.955120 systemd[1]: sshd@3-146.190.174.183:22-147.75.109.163:34472.service: Deactivated successfully. Jan 30 05:00:55.962991 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 05:00:55.963098 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Jan 30 05:00:55.971191 systemd-logind[1574]: Removed session 4. Jan 30 05:00:56.008457 sshd[1759]: Accepted publickey for core from 147.75.109.163 port 34474 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:00:56.011568 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:00:56.019375 systemd-logind[1574]: New session 5 of user core. Jan 30 05:00:56.026316 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 05:00:56.101717 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 05:00:56.102292 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:00:56.119943 sudo[1766]: pam_unix(sudo:session): session closed for user root Jan 30 05:00:56.125055 sshd[1759]: pam_unix(sshd:session): session closed for user core Jan 30 05:00:56.135124 systemd[1]: Started sshd@5-146.190.174.183:22-147.75.109.163:34476.service - OpenSSH per-connection server daemon (147.75.109.163:34476). Jan 30 05:00:56.136864 systemd[1]: sshd@4-146.190.174.183:22-147.75.109.163:34474.service: Deactivated successfully. Jan 30 05:00:56.139533 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 05:00:56.144852 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Jan 30 05:00:56.146984 systemd-logind[1574]: Removed session 5. 
Jan 30 05:00:56.188733 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 34476 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:00:56.190804 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:00:56.197001 systemd-logind[1574]: New session 6 of user core. Jan 30 05:00:56.205199 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 05:00:56.270368 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 05:00:56.270945 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:00:56.276763 sudo[1776]: pam_unix(sudo:session): session closed for user root Jan 30 05:00:56.285944 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 05:00:56.286860 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:00:56.305142 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 05:00:56.321614 auditctl[1779]: No rules Jan 30 05:00:56.322575 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 05:00:56.322963 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 05:00:56.340457 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:00:56.380842 augenrules[1798]: No rules Jan 30 05:00:56.384499 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 05:00:56.388281 sudo[1775]: pam_unix(sudo:session): session closed for user root Jan 30 05:00:56.394948 sshd[1768]: pam_unix(sshd:session): session closed for user core Jan 30 05:00:56.415103 systemd[1]: Started sshd@6-146.190.174.183:22-147.75.109.163:34490.service - OpenSSH per-connection server daemon (147.75.109.163:34490). Jan 30 05:00:56.415836 systemd[1]: sshd@5-146.190.174.183:22-147.75.109.163:34476.service: Deactivated successfully. Jan 30 05:00:56.424335 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 05:00:56.428122 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Jan 30 05:00:56.430655 systemd-logind[1574]: Removed session 6. Jan 30 05:00:56.462936 sshd[1804]: Accepted publickey for core from 147.75.109.163 port 34490 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:00:56.465782 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:00:56.473400 systemd-logind[1574]: New session 7 of user core. Jan 30 05:00:56.484251 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 05:00:56.549143 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 05:00:56.549619 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:00:57.131127 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 05:00:57.140490 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 05:00:57.676523 dockerd[1828]: time="2025-01-30T05:00:57.676432368Z" level=info msg="Starting up" Jan 30 05:00:57.891227 dockerd[1828]: time="2025-01-30T05:00:57.890952307Z" level=info msg="Loading containers: start." 
Jan 30 05:00:58.048727 kernel: Initializing XFRM netlink socket Jan 30 05:00:58.164982 systemd-networkd[1227]: docker0: Link UP Jan 30 05:00:58.184154 dockerd[1828]: time="2025-01-30T05:00:58.184094685Z" level=info msg="Loading containers: done." Jan 30 05:00:58.205704 dockerd[1828]: time="2025-01-30T05:00:58.205179677Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 05:00:58.205704 dockerd[1828]: time="2025-01-30T05:00:58.205374897Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 05:00:58.205704 dockerd[1828]: time="2025-01-30T05:00:58.205585491Z" level=info msg="Daemon has completed initialization" Jan 30 05:00:58.258651 dockerd[1828]: time="2025-01-30T05:00:58.258569672Z" level=info msg="API listen on /run/docker.sock" Jan 30 05:00:58.259290 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 05:00:59.434699 containerd[1596]: time="2025-01-30T05:00:59.434620738Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 05:00:59.772307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 05:00:59.781115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:00:59.946301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:00:59.960415 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:01:00.044231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694716431.mount: Deactivated successfully. Jan 30 05:01:00.096469 kubelet[1992]: E0130 05:01:00.096328 1992 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:01:00.103003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:01:00.103308 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 05:01:02.123398 containerd[1596]: time="2025-01-30T05:01:02.123219752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:02.126157 containerd[1596]: time="2025-01-30T05:01:02.126074001Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 05:01:02.128353 containerd[1596]: time="2025-01-30T05:01:02.128259401Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:02.135141 containerd[1596]: time="2025-01-30T05:01:02.135073734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:02.137627 containerd[1596]: time="2025-01-30T05:01:02.137055010Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.702364172s" Jan 30 05:01:02.137627 containerd[1596]: time="2025-01-30T05:01:02.137137060Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 05:01:02.192882 containerd[1596]: time="2025-01-30T05:01:02.192817819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 05:01:04.084844 containerd[1596]: time="2025-01-30T05:01:04.084765853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:04.086906 containerd[1596]: time="2025-01-30T05:01:04.086727123Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 05:01:04.087861 containerd[1596]: time="2025-01-30T05:01:04.087731081Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:04.092911 containerd[1596]: time="2025-01-30T05:01:04.092795699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:04.096758 containerd[1596]: time="2025-01-30T05:01:04.095782446Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.902891853s" Jan 30 05:01:04.096758 containerd[1596]: time="2025-01-30T05:01:04.095872327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 05:01:04.139075 
containerd[1596]: time="2025-01-30T05:01:04.138983546Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 05:01:05.266622 containerd[1596]: time="2025-01-30T05:01:05.266535845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:05.268418 containerd[1596]: time="2025-01-30T05:01:05.267953807Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 05:01:05.269295 containerd[1596]: time="2025-01-30T05:01:05.269237502Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:05.274610 containerd[1596]: time="2025-01-30T05:01:05.274506877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:05.277090 containerd[1596]: time="2025-01-30T05:01:05.276470536Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.137113365s" Jan 30 05:01:05.277090 containerd[1596]: time="2025-01-30T05:01:05.276544604Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 05:01:05.318716 containerd[1596]: time="2025-01-30T05:01:05.318450090Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 05:01:05.661857 systemd-resolved[1488]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 05:01:06.427004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2472555533.mount: Deactivated successfully. 
Jan 30 05:01:07.021887 containerd[1596]: time="2025-01-30T05:01:07.021830590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:07.023635 containerd[1596]: time="2025-01-30T05:01:07.023572240Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 05:01:07.024465 containerd[1596]: time="2025-01-30T05:01:07.024408206Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:07.027731 containerd[1596]: time="2025-01-30T05:01:07.027116503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:07.028526 containerd[1596]: time="2025-01-30T05:01:07.028232540Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.709729527s" Jan 30 05:01:07.028526 containerd[1596]: time="2025-01-30T05:01:07.028300400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 05:01:07.060589 containerd[1596]: time="2025-01-30T05:01:07.060537509Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 05:01:07.592173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580455531.mount: Deactivated successfully. 
Jan 30 05:01:08.510786 containerd[1596]: time="2025-01-30T05:01:08.510706951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:08.512601 containerd[1596]: time="2025-01-30T05:01:08.512531379Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 05:01:08.513214 containerd[1596]: time="2025-01-30T05:01:08.513166741Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:08.516928 containerd[1596]: time="2025-01-30T05:01:08.516815439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:08.518483 containerd[1596]: time="2025-01-30T05:01:08.518322635Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.457275876s" Jan 30 05:01:08.518483 containerd[1596]: time="2025-01-30T05:01:08.518370255Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 05:01:08.547033 containerd[1596]: time="2025-01-30T05:01:08.546985447Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 05:01:08.772957 systemd-resolved[1488]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 30 05:01:08.999428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313673333.mount: Deactivated successfully. 
Jan 30 05:01:09.005828 containerd[1596]: time="2025-01-30T05:01:09.005728138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:09.007947 containerd[1596]: time="2025-01-30T05:01:09.007855337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 05:01:09.009084 containerd[1596]: time="2025-01-30T05:01:09.009018168Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:09.014716 containerd[1596]: time="2025-01-30T05:01:09.013435587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:09.019176 containerd[1596]: time="2025-01-30T05:01:09.019101170Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 472.01788ms" Jan 30 05:01:09.019176 containerd[1596]: time="2025-01-30T05:01:09.019177410Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 05:01:09.054333 containerd[1596]: time="2025-01-30T05:01:09.054185987Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 05:01:09.581582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019135294.mount: Deactivated successfully. Jan 30 05:01:10.272026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 05:01:10.283141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:01:10.463477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:01:10.475266 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:01:10.564045 kubelet[2201]: E0130 05:01:10.563874 2201 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:01:10.566609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:01:10.566861 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 05:01:11.621781 containerd[1596]: time="2025-01-30T05:01:11.621697813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:11.627429 containerd[1596]: time="2025-01-30T05:01:11.627320319Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 05:01:11.631659 containerd[1596]: time="2025-01-30T05:01:11.631576309Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:11.635358 containerd[1596]: time="2025-01-30T05:01:11.635262193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:11.637657 containerd[1596]: time="2025-01-30T05:01:11.637385303Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.583135333s" Jan 30 05:01:11.637657 containerd[1596]: time="2025-01-30T05:01:11.637462869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 05:01:15.199092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:01:15.210164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:01:15.242996 systemd[1]: Reloading requested from client PID 2276 ('systemctl') (unit session-7.scope)... Jan 30 05:01:15.243021 systemd[1]: Reloading... Jan 30 05:01:15.395761 zram_generator::config[2316]: No configuration found. Jan 30 05:01:15.566928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:01:15.654146 systemd[1]: Reloading finished in 410 ms. Jan 30 05:01:15.720908 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 05:01:15.721030 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 05:01:15.721459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:01:15.736629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:01:15.870013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:01:15.883325 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:01:15.939309 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:01:15.939309 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 05:01:15.939309 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:01:15.940966 kubelet[2382]: I0130 05:01:15.940807 2382 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:01:16.822753 kubelet[2382]: I0130 05:01:16.822465 2382 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 05:01:16.822753 kubelet[2382]: I0130 05:01:16.822507 2382 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:01:16.823026 kubelet[2382]: I0130 05:01:16.822848 2382 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 05:01:16.846137 kubelet[2382]: I0130 05:01:16.846085 2382 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:01:16.847354 kubelet[2382]: E0130 05:01:16.847250 2382 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.174.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:16.861648 kubelet[2382]: I0130 05:01:16.861592 2382 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 05:01:16.863752 kubelet[2382]: I0130 05:01:16.863640 2382 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:01:16.864260 kubelet[2382]: I0130 05:01:16.863937 2382 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-d-9062e890fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 05:01:16.865206 kubelet[2382]: I0130 05:01:16.865085 2382 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 05:01:16.865716 kubelet[2382]: I0130 05:01:16.865341 2382 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 05:01:16.865716 kubelet[2382]: I0130 05:01:16.865533 2382 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:01:16.867719 kubelet[2382]: W0130 05:01:16.867158 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.174.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-9062e890fd&limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:16.867719 kubelet[2382]: E0130 05:01:16.867246 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.174.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-9062e890fd&limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:16.868058 kubelet[2382]: I0130 05:01:16.868030 2382 kubelet.go:400] "Attempting to sync node with API server" Jan 30 05:01:16.868095 kubelet[2382]: I0130 05:01:16.868066 2382 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 05:01:16.868123 kubelet[2382]: I0130 05:01:16.868102 2382 kubelet.go:312] "Adding apiserver pod source" Jan 30 05:01:16.868123 kubelet[2382]: I0130 05:01:16.868118 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:01:16.871593 kubelet[2382]: W0130 05:01:16.871311 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.174.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:16.871593 kubelet[2382]: E0130 05:01:16.871378 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.174.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:16.873626 kubelet[2382]: I0130 05:01:16.873351 2382 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 05:01:16.875714 kubelet[2382]: I0130 05:01:16.875165 2382 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:01:16.875714 kubelet[2382]: W0130 05:01:16.875273 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 05:01:16.876311 kubelet[2382]: I0130 05:01:16.876290 2382 server.go:1264] "Started kubelet" Jan 30 05:01:16.879142 kubelet[2382]: I0130 05:01:16.879078 2382 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:01:16.880432 kubelet[2382]: I0130 05:01:16.880370 2382 server.go:455] "Adding debug handlers to kubelet server" Jan 30 05:01:16.884910 kubelet[2382]: I0130 05:01:16.884215 2382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:01:16.884910 kubelet[2382]: I0130 05:01:16.884551 2382 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:01:16.884910 kubelet[2382]: E0130 05:01:16.884779 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.174.183:6443/api/v1/namespaces/default/events\": dial tcp 146.190.174.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-d-9062e890fd.181f5fc41013c61c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-d-9062e890fd,UID:ci-4081.3.0-d-9062e890fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-d-9062e890fd,},FirstTimestamp:2025-01-30 05:01:16.876260892 +0000 UTC m=+0.985992601,LastTimestamp:2025-01-30 05:01:16.876260892 +0000 UTC m=+0.985992601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-d-9062e890fd,}" Jan 30 05:01:16.888452 kubelet[2382]: I0130 05:01:16.887196 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:01:16.899215 kubelet[2382]: I0130 05:01:16.898536 2382 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 05:01:16.900787 kubelet[2382]: E0130 05:01:16.900444 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.174.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-9062e890fd?timeout=10s\": dial tcp 146.190.174.183:6443: connect: connection refused" interval="200ms" Jan 30 05:01:16.900787 kubelet[2382]: I0130 05:01:16.900634 2382 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:01:16.901043 kubelet[2382]: I0130 05:01:16.901019 2382 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:01:16.902923 kubelet[2382]: I0130 05:01:16.902896 2382 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:01:16.903354 kubelet[2382]: I0130 05:01:16.903328 2382 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:01:16.903417 kubelet[2382]: I0130 05:01:16.903399 2382 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:01:16.920486 kubelet[2382]: E0130 05:01:16.920452 2382 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:01:16.920636 kubelet[2382]: I0130 05:01:16.920535 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:01:16.922294 kubelet[2382]: I0130 05:01:16.922262 2382 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 05:01:16.922294 kubelet[2382]: I0130 05:01:16.922300 2382 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:01:16.922452 kubelet[2382]: I0130 05:01:16.922321 2382 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:01:16.922452 kubelet[2382]: E0130 05:01:16.922405 2382 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:01:16.922822 kubelet[2382]: W0130 05:01:16.922707 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.174.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:16.922822 kubelet[2382]: E0130 05:01:16.922764 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.174.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:16.929387 kubelet[2382]: W0130 05:01:16.929317 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.174.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:16.929717 kubelet[2382]: E0130 05:01:16.929595 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.174.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:16.938933 kubelet[2382]: I0130 05:01:16.938810 2382 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 05:01:16.938933 kubelet[2382]: I0130 05:01:16.938836 2382 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 05:01:16.939435 kubelet[2382]: I0130 05:01:16.939231 2382 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:01:16.944504 kubelet[2382]: I0130 05:01:16.944460 2382 policy_none.go:49] "None policy: Start" Jan 30 05:01:16.945674 kubelet[2382]: I0130 05:01:16.945643 2382 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:01:16.945836 kubelet[2382]: I0130 05:01:16.945746 2382 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:01:16.951966 kubelet[2382]: I0130 05:01:16.951927 2382 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:01:16.952209 kubelet[2382]: I0130 05:01:16.952170 2382 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:01:16.952402 kubelet[2382]: I0130 05:01:16.952304 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:01:16.957593 kubelet[2382]: E0130 05:01:16.956306 2382 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-d-9062e890fd\" not found" Jan 30 05:01:17.000754 kubelet[2382]: I0130 05:01:17.000669 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.001207 kubelet[2382]: E0130 05:01:17.001177 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://146.190.174.183:6443/api/v1/nodes\": dial tcp 146.190.174.183:6443: connect: connection refused" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.020846 kubelet[2382]: E0130 05:01:17.020640 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.174.183:6443/api/v1/namespaces/default/events\": dial tcp 146.190.174.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-d-9062e890fd.181f5fc41013c61c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-d-9062e890fd,UID:ci-4081.3.0-d-9062e890fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-d-9062e890fd,},FirstTimestamp:2025-01-30 05:01:16.876260892 +0000 UTC m=+0.985992601,LastTimestamp:2025-01-30 05:01:16.876260892 +0000 UTC m=+0.985992601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-d-9062e890fd,}" Jan 30 05:01:17.022851 kubelet[2382]: I0130 05:01:17.022790 2382 topology_manager.go:215] "Topology Admit Handler" podUID="af25ee8176027a494d9f0ca70800e768" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.024422 kubelet[2382]: I0130 05:01:17.024231 2382 topology_manager.go:215] "Topology Admit Handler" podUID="bccfda94d5e0641407ba92575ecd8cdc" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.026716 kubelet[2382]: I0130 05:01:17.026420 2382 topology_manager.go:215] "Topology Admit Handler" podUID="2acd30be7dfac548312b8863d9ffd74a" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.103303 kubelet[2382]: E0130 05:01:17.101562 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.174.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-9062e890fd?timeout=10s\": dial tcp 146.190.174.183:6443: connect: connection refused" interval="400ms" Jan 30 05:01:17.202817 kubelet[2382]: I0130 05:01:17.202741 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.203252 kubelet[2382]: E0130 05:01:17.203192 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.174.183:6443/api/v1/nodes\": dial tcp 146.190.174.183:6443: connect: connection refused" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.204608 kubelet[2382]: I0130 05:01:17.204544 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2acd30be7dfac548312b8863d9ffd74a-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-d-9062e890fd\" (UID: \"2acd30be7dfac548312b8863d9ffd74a\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.205010 kubelet[2382]: I0130 05:01:17.204798 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2acd30be7dfac548312b8863d9ffd74a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-d-9062e890fd\" (UID: \"2acd30be7dfac548312b8863d9ffd74a\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.205010 kubelet[2382]: I0130 05:01:17.204862 2382 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.205010 kubelet[2382]: I0130 05:01:17.204888 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bccfda94d5e0641407ba92575ecd8cdc-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-d-9062e890fd\" (UID: \"bccfda94d5e0641407ba92575ecd8cdc\") " pod="kube-system/kube-scheduler-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.205010 kubelet[2382]: I0130 05:01:17.204914 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.205010 kubelet[2382]: I0130 05:01:17.204948 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.205226 kubelet[2382]: I0130 05:01:17.204972 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.205380 kubelet[2382]: I0130 05:01:17.204996 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.205380 kubelet[2382]: I0130 05:01:17.205325 2382 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2acd30be7dfac548312b8863d9ffd74a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-d-9062e890fd\" (UID: \"2acd30be7dfac548312b8863d9ffd74a\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.329855 kubelet[2382]: E0130 05:01:17.329570 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:17.330660 kubelet[2382]: E0130 05:01:17.330410 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:17.330903 containerd[1596]: time="2025-01-30T05:01:17.330826107Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-d-9062e890fd,Uid:af25ee8176027a494d9f0ca70800e768,Namespace:kube-system,Attempt:0,}" Jan 30 05:01:17.331870 containerd[1596]: time="2025-01-30T05:01:17.331572406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-d-9062e890fd,Uid:bccfda94d5e0641407ba92575ecd8cdc,Namespace:kube-system,Attempt:0,}" Jan 30 05:01:17.333599 systemd-resolved[1488]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 30 05:01:17.334742 kubelet[2382]: E0130 05:01:17.334424 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:17.336013 containerd[1596]: time="2025-01-30T05:01:17.335962673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-d-9062e890fd,Uid:2acd30be7dfac548312b8863d9ffd74a,Namespace:kube-system,Attempt:0,}" Jan 30 05:01:17.502517 kubelet[2382]: E0130 05:01:17.502383 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.174.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-9062e890fd?timeout=10s\": dial tcp 146.190.174.183:6443: connect: connection refused" interval="800ms" Jan 30 05:01:17.605367 kubelet[2382]: I0130 05:01:17.605264 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.606221 kubelet[2382]: E0130 05:01:17.606159 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.174.183:6443/api/v1/nodes\": dial tcp 146.190.174.183:6443: connect: connection refused" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:17.782911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067863823.mount: Deactivated successfully. 
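Having read the manifests from /etc/kubernetes/manifests, the kubelet asks the runtime for one sandbox per static pod; the `RunPodSandbox` entries above are containerd's view of those CRI calls. Roughly what such a call looks like issued by hand over gRPC, as a sketch (socket path assumed; metadata copied from the kube-scheduler entry above):

```go
// Sketch: the CRI call behind the "RunPodSandbox" containerd entries,
// issued directly over the containerd socket.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-ci-4081.3.0-d-9062e890fd",
				Uid:       "bccfda94d5e0641407ba92575ecd8cdc",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// containerd answers with the sandbox id that the later
	// "returns sandbox id" lines echo back.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```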
Jan 30 05:01:17.787655 containerd[1596]: time="2025-01-30T05:01:17.787580322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:01:17.789092 containerd[1596]: time="2025-01-30T05:01:17.789003828Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 05:01:17.790257 containerd[1596]: time="2025-01-30T05:01:17.790063200Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:01:17.794432 containerd[1596]: time="2025-01-30T05:01:17.794362345Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:01:17.796453 containerd[1596]: time="2025-01-30T05:01:17.795334513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:01:17.796453 containerd[1596]: time="2025-01-30T05:01:17.796362486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:01:17.801785 containerd[1596]: time="2025-01-30T05:01:17.801730794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.792488ms" Jan 30 05:01:17.802305 containerd[1596]: time="2025-01-30T05:01:17.802261086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:01:17.805188 containerd[1596]: time="2025-01-30T05:01:17.805143964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:01:17.806089 kubelet[2382]: W0130 05:01:17.806016 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.174.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:17.806297 kubelet[2382]: E0130 05:01:17.806282 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.174.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:17.808454 containerd[1596]: time="2025-01-30T05:01:17.808337002Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 476.679774ms" Jan 30 05:01:17.815219 containerd[1596]: time="2025-01-30T05:01:17.815162416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" 
with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 479.067421ms" Jan 30 05:01:17.960725 containerd[1596]: time="2025-01-30T05:01:17.960150967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:01:17.960725 containerd[1596]: time="2025-01-30T05:01:17.960214696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:01:17.960725 containerd[1596]: time="2025-01-30T05:01:17.960230349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:17.960725 containerd[1596]: time="2025-01-30T05:01:17.960337994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:17.978900 containerd[1596]: time="2025-01-30T05:01:17.976701949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:01:17.979243 containerd[1596]: time="2025-01-30T05:01:17.978729159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:01:17.979574 containerd[1596]: time="2025-01-30T05:01:17.979225066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:17.981614 containerd[1596]: time="2025-01-30T05:01:17.980215555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:18.009612 containerd[1596]: time="2025-01-30T05:01:18.008985988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:01:18.009842 containerd[1596]: time="2025-01-30T05:01:18.009520859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:01:18.009842 containerd[1596]: time="2025-01-30T05:01:18.009613799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:18.010412 containerd[1596]: time="2025-01-30T05:01:18.009828872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:18.128627 containerd[1596]: time="2025-01-30T05:01:18.127538913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-d-9062e890fd,Uid:af25ee8176027a494d9f0ca70800e768,Namespace:kube-system,Attempt:0,} returns sandbox id \"949f2879fe109736d6fc65982a765d126fb0966ff93bbdaa098df1c51ed7389e\"" Jan 30 05:01:18.134018 kubelet[2382]: E0130 05:01:18.133975 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:18.136052 containerd[1596]: time="2025-01-30T05:01:18.134854470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-d-9062e890fd,Uid:bccfda94d5e0641407ba92575ecd8cdc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1a716876e4efbc31bbce1612a9e05dc9fc0de82b5028e5f4f3daefd602d37a\"" Jan 30 05:01:18.137058 kubelet[2382]: E0130 05:01:18.136931 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:18.148167 containerd[1596]: time="2025-01-30T05:01:18.147950042Z" level=info msg="CreateContainer within sandbox \"949f2879fe109736d6fc65982a765d126fb0966ff93bbdaa098df1c51ed7389e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 05:01:18.148543 containerd[1596]: time="2025-01-30T05:01:18.148495500Z" level=info msg="CreateContainer within sandbox \"4d1a716876e4efbc31bbce1612a9e05dc9fc0de82b5028e5f4f3daefd602d37a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 05:01:18.155815 containerd[1596]: time="2025-01-30T05:01:18.155721039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-d-9062e890fd,Uid:2acd30be7dfac548312b8863d9ffd74a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d63eb6bdc65b4226efbee8e19415b07d53c2731a4308626e8447b5a5442e127\"" Jan 30 05:01:18.158344 kubelet[2382]: E0130 05:01:18.157531 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:18.162493 containerd[1596]: time="2025-01-30T05:01:18.162410733Z" level=info msg="CreateContainer within sandbox \"0d63eb6bdc65b4226efbee8e19415b07d53c2731a4308626e8447b5a5442e127\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 05:01:18.173648 containerd[1596]: time="2025-01-30T05:01:18.173586383Z" level=info msg="CreateContainer within sandbox \"949f2879fe109736d6fc65982a765d126fb0966ff93bbdaa098df1c51ed7389e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c5bb838a4f334ec3f6951dbf19eb89792d84e8f40b839ed4f544cfaae5a49364\"" Jan 30 05:01:18.174982 containerd[1596]: time="2025-01-30T05:01:18.174936161Z" level=info msg="StartContainer for \"c5bb838a4f334ec3f6951dbf19eb89792d84e8f40b839ed4f544cfaae5a49364\"" Jan 30 05:01:18.181489 containerd[1596]: time="2025-01-30T05:01:18.181328528Z" level=info msg="CreateContainer within sandbox \"4d1a716876e4efbc31bbce1612a9e05dc9fc0de82b5028e5f4f3daefd602d37a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"14395929c836c9ba4b2b804450f905071013c030eac5cdc3fa1f7b147e272713\"" Jan 30 05:01:18.182113 containerd[1596]: 
time="2025-01-30T05:01:18.182078610Z" level=info msg="StartContainer for \"14395929c836c9ba4b2b804450f905071013c030eac5cdc3fa1f7b147e272713\"" Jan 30 05:01:18.184336 containerd[1596]: time="2025-01-30T05:01:18.184184546Z" level=info msg="CreateContainer within sandbox \"0d63eb6bdc65b4226efbee8e19415b07d53c2731a4308626e8447b5a5442e127\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6379a2ea5f9765d9bb1dc8939c4b58bb552bf6e3eb4192684e8b8b8707eaec37\"" Jan 30 05:01:18.185717 containerd[1596]: time="2025-01-30T05:01:18.185107654Z" level=info msg="StartContainer for \"6379a2ea5f9765d9bb1dc8939c4b58bb552bf6e3eb4192684e8b8b8707eaec37\"" Jan 30 05:01:18.277940 kubelet[2382]: W0130 05:01:18.277848 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.174.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:18.277940 kubelet[2382]: E0130 05:01:18.277940 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.174.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:18.304122 kubelet[2382]: E0130 05:01:18.304025 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.174.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-9062e890fd?timeout=10s\": dial tcp 146.190.174.183:6443: connect: connection refused" interval="1.6s" Jan 30 05:01:18.324936 kubelet[2382]: W0130 05:01:18.323765 2382 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.174.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-9062e890fd&limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:18.324936 kubelet[2382]: E0130 05:01:18.323833 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.174.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-9062e890fd&limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:18.340789 containerd[1596]: time="2025-01-30T05:01:18.340466298Z" level=info msg="StartContainer for \"6379a2ea5f9765d9bb1dc8939c4b58bb552bf6e3eb4192684e8b8b8707eaec37\" returns successfully" Jan 30 05:01:18.370781 containerd[1596]: time="2025-01-30T05:01:18.370006642Z" level=info msg="StartContainer for \"c5bb838a4f334ec3f6951dbf19eb89792d84e8f40b839ed4f544cfaae5a49364\" returns successfully" Jan 30 05:01:18.387602 containerd[1596]: time="2025-01-30T05:01:18.386248311Z" level=info msg="StartContainer for \"14395929c836c9ba4b2b804450f905071013c030eac5cdc3fa1f7b147e272713\" returns successfully" Jan 30 05:01:18.410818 kubelet[2382]: I0130 05:01:18.410770 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:18.412158 kubelet[2382]: E0130 05:01:18.412098 2382 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.174.183:6443/api/v1/nodes\": dial tcp 146.190.174.183:6443: connect: connection refused" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:18.465136 kubelet[2382]: W0130 05:01:18.465029 2382 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.174.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:18.465136 kubelet[2382]: E0130 05:01:18.465127 2382 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.174.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.174.183:6443: connect: connection refused Jan 30 05:01:18.966651 kubelet[2382]: E0130 05:01:18.965048 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:18.971656 kubelet[2382]: E0130 05:01:18.971602 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:18.980774 kubelet[2382]: E0130 05:01:18.979540 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:19.982889 kubelet[2382]: E0130 05:01:19.982830 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:20.013819 kubelet[2382]: I0130 05:01:20.013737 2382 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:20.027250 kubelet[2382]: E0130 05:01:20.027215 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:20.873264 kubelet[2382]: I0130 05:01:20.873199 2382 apiserver.go:52] "Watching apiserver" Jan 30 05:01:20.903892 kubelet[2382]: I0130 05:01:20.903837 2382 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:01:20.933853 kubelet[2382]: E0130 05:01:20.933801 2382 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-d-9062e890fd\" not found" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:20.963832 kubelet[2382]: I0130 05:01:20.963775 2382 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:23.849151 systemd[1]: Reloading requested from client PID 2655 ('systemctl') (unit session-7.scope)... Jan 30 05:01:23.849179 systemd[1]: Reloading... Jan 30 05:01:24.060753 zram_generator::config[2697]: No configuration found. Jan 30 05:01:24.275499 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
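The recurring `Nameserver limits exceeded` errors come from the kubelet clipping the host's resolv.conf before handing DNS settings to pods: only three nameservers are kept (the long-standing glibc limit) and the applied line is logged. A minimal illustration of that clipping, with the caveat that the real dns.go logic handles more cases and this reduction is an assumption:

```go
// Sketch: why "Nameserver limits exceeded" appears. Parse resolv.conf
// and keep at most three nameservers, mirroring (approximately) the
// check the kubelet performs before building pod DNS config.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	const maxNameservers = 3 // glibc MAXNS, the limit the log refers to
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Analogue of: "some nameservers have been omitted, the applied
		// nameserver line is: ..."
		fmt.Printf("nameserver limits exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("within limits: %s\n", strings.Join(servers, " "))
}
```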
Jan 30 05:01:24.375004 kubelet[2382]: W0130 05:01:24.374030 2382 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:01:24.375004 kubelet[2382]: E0130 05:01:24.374654 2382 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:24.422108 systemd[1]: Reloading finished in 572 ms. Jan 30 05:01:24.467902 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:01:24.468937 kubelet[2382]: E0130 05:01:24.468561 2382 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.0-d-9062e890fd.181f5fc41013c61c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-d-9062e890fd,UID:ci-4081.3.0-d-9062e890fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-d-9062e890fd,},FirstTimestamp:2025-01-30 05:01:16.876260892 +0000 UTC m=+0.985992601,LastTimestamp:2025-01-30 05:01:16.876260892 +0000 UTC m=+0.985992601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-d-9062e890fd,}" Jan 30 05:01:24.468937 kubelet[2382]: I0130 05:01:24.468857 2382 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:01:24.482494 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 05:01:24.482857 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:01:24.492202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:01:24.677057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:01:24.689153 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:01:24.812462 kubelet[2755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:01:24.812462 kubelet[2755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 05:01:24.812462 kubelet[2755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
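The `broadcaster is shut down` error above shows the restarting kubelet discarding a queued `Starting` event it never managed to deliver while the apiserver was unreachable. The payload is an ordinary core/v1 Event; a sketch of creating the equivalent object with client-go (kubeconfig path assumed, fields copied from the dump above):

```go
// Sketch: the Event the kubelet's broadcaster was trying to deliver,
// created by hand. Kubeconfig path is an assumption.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node := "ci-4081.3.0-d-9062e890fd"
	now := metav1.Now()
	_, err = cs.CoreV1().Events("default").Create(context.Background(), &corev1.Event{
		// The kubelet derives a unique name suffix; GenerateName stands in here.
		ObjectMeta:     metav1.ObjectMeta{GenerateName: node + "."},
		InvolvedObject: corev1.ObjectReference{Kind: "Node", Name: node, UID: types.UID(node)},
		Reason:         "Starting",
		Message:        "Starting kubelet.",
		Type:           corev1.EventTypeNormal,
		Source:         corev1.EventSource{Component: "kubelet", Host: node},
		FirstTimestamp: now,
		LastTimestamp:  now,
		Count:          1,
	}, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
}
```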
Jan 30 05:01:24.813049 kubelet[2755]: I0130 05:01:24.812541 2755 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:01:24.824498 kubelet[2755]: I0130 05:01:24.824421 2755 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 05:01:24.824835 kubelet[2755]: I0130 05:01:24.824605 2755 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:01:24.826011 kubelet[2755]: I0130 05:01:24.825978 2755 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 05:01:24.828415 kubelet[2755]: I0130 05:01:24.828109 2755 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 05:01:24.834213 kubelet[2755]: I0130 05:01:24.834154 2755 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:01:24.864571 kubelet[2755]: I0130 05:01:24.857347 2755 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 05:01:24.864571 kubelet[2755]: I0130 05:01:24.858144 2755 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:01:24.864571 kubelet[2755]: I0130 05:01:24.858184 2755 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-d-9062e890fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 05:01:24.864571 kubelet[2755]: I0130 05:01:24.858462 2755 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 05:01:24.865420 kubelet[2755]: I0130 05:01:24.858480 2755 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 05:01:24.865420 kubelet[2755]: I0130 05:01:24.858538 2755 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:01:24.865420 kubelet[2755]: I0130 05:01:24.858670 2755 kubelet.go:400] "Attempting to sync node with API server" Jan 30 05:01:24.865420 kubelet[2755]: I0130 05:01:24.858735 2755 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jan 30 05:01:24.865420 kubelet[2755]: I0130 05:01:24.858768 2755 kubelet.go:312] "Adding apiserver pod source" Jan 30 05:01:24.865420 kubelet[2755]: I0130 05:01:24.860765 2755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:01:24.865420 kubelet[2755]: I0130 05:01:24.862153 2755 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 05:01:24.866366 kubelet[2755]: I0130 05:01:24.866317 2755 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:01:24.869932 kubelet[2755]: I0130 05:01:24.869066 2755 server.go:1264] "Started kubelet" Jan 30 05:01:24.871822 sudo[2769]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 05:01:24.874516 sudo[2769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 05:01:24.879723 kubelet[2755]: I0130 05:01:24.878524 2755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:01:24.879876 kubelet[2755]: I0130 05:01:24.879781 2755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:01:24.884660 kubelet[2755]: I0130 05:01:24.883929 2755 server.go:455] "Adding debug handlers to kubelet server" Jan 30 05:01:24.891972 kubelet[2755]: I0130 05:01:24.891097 2755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:01:24.891972 kubelet[2755]: I0130 05:01:24.891435 2755 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:01:24.898441 kubelet[2755]: I0130 05:01:24.898211 2755 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 05:01:24.902146 kubelet[2755]: I0130 05:01:24.902102 2755 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:01:24.902372 kubelet[2755]: I0130 05:01:24.902358 2755 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:01:24.912421 kubelet[2755]: I0130 05:01:24.912344 2755 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:01:24.912625 kubelet[2755]: I0130 05:01:24.912516 2755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:01:24.922765 kubelet[2755]: E0130 05:01:24.922719 2755 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:01:24.926966 kubelet[2755]: I0130 05:01:24.925724 2755 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:01:24.954593 kubelet[2755]: I0130 05:01:24.954147 2755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:01:24.972293 kubelet[2755]: I0130 05:01:24.971862 2755 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 05:01:24.972293 kubelet[2755]: I0130 05:01:24.971936 2755 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:01:24.972293 kubelet[2755]: I0130 05:01:24.971992 2755 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:01:24.972293 kubelet[2755]: E0130 05:01:24.972075 2755 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:01:25.007968 kubelet[2755]: I0130 05:01:25.005933 2755 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.037177 kubelet[2755]: I0130 05:01:25.037020 2755 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.037177 kubelet[2755]: I0130 05:01:25.037138 2755 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.072536 kubelet[2755]: E0130 05:01:25.072465 2755 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 05:01:25.142515 kubelet[2755]: I0130 05:01:25.142240 2755 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 05:01:25.142515 kubelet[2755]: I0130 05:01:25.142266 2755 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 05:01:25.142515 kubelet[2755]: I0130 05:01:25.142343 2755 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:01:25.143661 kubelet[2755]: I0130 05:01:25.143127 2755 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 05:01:25.143661 kubelet[2755]: I0130 05:01:25.143146 2755 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 05:01:25.143661 kubelet[2755]: I0130 05:01:25.143174 2755 policy_none.go:49] "None policy: Start" Jan 30 05:01:25.144756 kubelet[2755]: I0130 05:01:25.144501 2755 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:01:25.144756 kubelet[2755]: I0130 05:01:25.144537 2755 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:01:25.145185 kubelet[2755]: I0130 05:01:25.144956 2755 state_mem.go:75] "Updated machine memory state" Jan 30 05:01:25.148618 kubelet[2755]: I0130 05:01:25.148575 2755 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:01:25.149467 kubelet[2755]: I0130 05:01:25.148955 2755 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:01:25.154227 kubelet[2755]: I0130 05:01:25.152102 2755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:01:25.273581 kubelet[2755]: I0130 05:01:25.273519 2755 topology_manager.go:215] "Topology Admit Handler" podUID="af25ee8176027a494d9f0ca70800e768" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.273786 kubelet[2755]: I0130 05:01:25.273760 2755 topology_manager.go:215] "Topology Admit Handler" podUID="bccfda94d5e0641407ba92575ecd8cdc" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.273918 kubelet[2755]: I0130 05:01:25.273894 2755 topology_manager.go:215] "Topology Admit Handler" podUID="2acd30be7dfac548312b8863d9ffd74a" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.292615 kubelet[2755]: W0130 05:01:25.292538 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:01:25.296832 kubelet[2755]: W0130 05:01:25.296395 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:01:25.298642 kubelet[2755]: W0130 05:01:25.298362 2755 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:01:25.298642 kubelet[2755]: E0130 05:01:25.298494 2755 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-d-9062e890fd\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.305734 kubelet[2755]: I0130 05:01:25.303953 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.305734 kubelet[2755]: I0130 05:01:25.304009 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.305734 kubelet[2755]: I0130 05:01:25.304039 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.305734 kubelet[2755]: I0130 05:01:25.304066 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.305734 kubelet[2755]: I0130 05:01:25.304094 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bccfda94d5e0641407ba92575ecd8cdc-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-d-9062e890fd\" (UID: \"bccfda94d5e0641407ba92575ecd8cdc\") " pod="kube-system/kube-scheduler-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.306110 kubelet[2755]: I0130 05:01:25.304119 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2acd30be7dfac548312b8863d9ffd74a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-d-9062e890fd\" (UID: \"2acd30be7dfac548312b8863d9ffd74a\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.306110 kubelet[2755]: I0130 05:01:25.304223 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/2acd30be7dfac548312b8863d9ffd74a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-d-9062e890fd\" (UID: \"2acd30be7dfac548312b8863d9ffd74a\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.306110 kubelet[2755]: I0130 05:01:25.304272 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af25ee8176027a494d9f0ca70800e768-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-d-9062e890fd\" (UID: \"af25ee8176027a494d9f0ca70800e768\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.306110 kubelet[2755]: I0130 05:01:25.304300 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2acd30be7dfac548312b8863d9ffd74a-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-d-9062e890fd\" (UID: \"2acd30be7dfac548312b8863d9ffd74a\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-9062e890fd" Jan 30 05:01:25.599161 kubelet[2755]: E0130 05:01:25.595741 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:25.599161 kubelet[2755]: E0130 05:01:25.599100 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:25.600765 kubelet[2755]: E0130 05:01:25.600717 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:25.823264 sudo[2769]: pam_unix(sudo:session): session closed for user root Jan 30 05:01:25.862812 kubelet[2755]: I0130 05:01:25.861810 2755 apiserver.go:52] "Watching apiserver" Jan 30 05:01:25.902745 kubelet[2755]: I0130 05:01:25.902662 2755 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:01:26.044295 kubelet[2755]: E0130 05:01:26.042369 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:26.044295 kubelet[2755]: E0130 05:01:26.043444 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:26.044295 kubelet[2755]: E0130 05:01:26.043996 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:26.087434 kubelet[2755]: I0130 05:01:26.086044 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-d-9062e890fd" podStartSLOduration=2.086012429 podStartE2EDuration="2.086012429s" podCreationTimestamp="2025-01-30 05:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:01:26.084385278 +0000 UTC m=+1.383816888" watchObservedRunningTime="2025-01-30 05:01:26.086012429 +0000 UTC m=+1.385444038" Jan 30 05:01:26.120566 kubelet[2755]: I0130 
05:01:26.119724 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-d-9062e890fd" podStartSLOduration=1.119697084 podStartE2EDuration="1.119697084s" podCreationTimestamp="2025-01-30 05:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:01:26.105076667 +0000 UTC m=+1.404508294" watchObservedRunningTime="2025-01-30 05:01:26.119697084 +0000 UTC m=+1.419128688" Jan 30 05:01:26.140922 kubelet[2755]: I0130 05:01:26.140823 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-d-9062e890fd" podStartSLOduration=1.140800314 podStartE2EDuration="1.140800314s" podCreationTimestamp="2025-01-30 05:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:01:26.120133847 +0000 UTC m=+1.419565455" watchObservedRunningTime="2025-01-30 05:01:26.140800314 +0000 UTC m=+1.440231919" Jan 30 05:01:27.044061 kubelet[2755]: E0130 05:01:27.043607 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:27.768737 sudo[1811]: pam_unix(sudo:session): session closed for user root Jan 30 05:01:27.773278 sshd[1804]: pam_unix(sshd:session): session closed for user core Jan 30 05:01:27.778339 systemd[1]: sshd@6-146.190.174.183:22-147.75.109.163:34490.service: Deactivated successfully. Jan 30 05:01:27.782887 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Jan 30 05:01:27.784072 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 05:01:27.785715 systemd-logind[1574]: Removed session 7. 
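The repeated `metadata.name ... must not contain dots` warnings fire because static pod names embed the node name `ci-4081.3.0-d-9062e890fd`, which is a valid pod name (a DNS subdomain) but not a valid DNS label, so it cannot be used verbatim as the pod's hostname. The check behind the warning is, presumably, the standard RFC 1123 label validation; a short sketch:

```go
// Sketch: reproduce the DNS-label check that triggers the warnings.go
// lines above. The second name is a hypothetical valid counterexample.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	for _, name := range []string{
		"kube-apiserver-ci-4081.3.0-d-9062e890fd", // from the log: contains dots
		"kube-apiserver-node1",                    // a valid DNS label
	} {
		if errs := validation.IsDNS1123Label(name); len(errs) > 0 {
			fmt.Printf("%q: %v\n", name, errs)
		} else {
			fmt.Printf("%q: ok\n", name)
		}
	}
}
```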
Jan 30 05:01:29.351061 kubelet[2755]: E0130 05:01:29.351007 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:30.050145 kubelet[2755]: E0130 05:01:30.050041 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:30.836718 kubelet[2755]: E0130 05:01:30.836551 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:31.052005 kubelet[2755]: E0130 05:01:31.051965 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:31.231070 kubelet[2755]: E0130 05:01:31.229359 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:32.053766 kubelet[2755]: E0130 05:01:32.053730 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:32.056828 kubelet[2755]: E0130 05:01:32.056781 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:32.316013 update_engine[1578]: I20250130 05:01:32.315083 1578 update_attempter.cc:509] Updating boot flags... Jan 30 05:01:32.361779 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2829) Jan 30 05:01:32.439802 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2827) Jan 30 05:01:32.525746 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2827) Jan 30 05:01:37.148193 kubelet[2755]: I0130 05:01:37.148000 2755 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 05:01:37.148653 containerd[1596]: time="2025-01-30T05:01:37.148580902Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
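The two entries just above show the kubelet handing the node's pod CIDR to the container runtime ("Updating runtime config through cri with podcidr", CIDR 192.168.0.0/24) and containerd noting it will keep waiting for a CNI config, which Cilium drops in later in this log. Over CRI this update travels as the UpdateRuntimeConfig RPC; below is a minimal sketch using the published cri-api types, assuming the default containerd socket path (an illustration of the call, not kubelet's implementation):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the containerd CRI socket (default path on this image).
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// The call behind "Updating runtime config through cri with podcidr".
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```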
Jan 30 05:01:37.148918 kubelet[2755]: I0130 05:01:37.148872 2755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 05:01:38.123893 kubelet[2755]: I0130 05:01:38.123835 2755 topology_manager.go:215] "Topology Admit Handler" podUID="d41b7f7e-d604-4b4a-95a1-f98cfb21202b" podNamespace="kube-system" podName="kube-proxy-qbszz" Jan 30 05:01:38.130966 kubelet[2755]: I0130 05:01:38.130925 2755 topology_manager.go:215] "Topology Admit Handler" podUID="51133790-9284-44a2-b5e7-702b91c05960" podNamespace="kube-system" podName="cilium-dsq6s" Jan 30 05:01:38.183537 kubelet[2755]: I0130 05:01:38.183461 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbv8d\" (UniqueName: \"kubernetes.io/projected/d41b7f7e-d604-4b4a-95a1-f98cfb21202b-kube-api-access-xbv8d\") pod \"kube-proxy-qbszz\" (UID: \"d41b7f7e-d604-4b4a-95a1-f98cfb21202b\") " pod="kube-system/kube-proxy-qbszz" Jan 30 05:01:38.184074 kubelet[2755]: I0130 05:01:38.183562 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cilium-run\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.184074 kubelet[2755]: I0130 05:01:38.183591 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cilium-cgroup\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.184074 kubelet[2755]: I0130 05:01:38.183614 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51133790-9284-44a2-b5e7-702b91c05960-hubble-tls\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.184074 kubelet[2755]: I0130 05:01:38.183638 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51133790-9284-44a2-b5e7-702b91c05960-clustermesh-secrets\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.184074 kubelet[2755]: I0130 05:01:38.183660 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-host-proc-sys-net\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.184831 kubelet[2755]: I0130 05:01:38.184783 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d41b7f7e-d604-4b4a-95a1-f98cfb21202b-lib-modules\") pod \"kube-proxy-qbszz\" (UID: \"d41b7f7e-d604-4b4a-95a1-f98cfb21202b\") " pod="kube-system/kube-proxy-qbszz" Jan 30 05:01:38.185609 kubelet[2755]: I0130 05:01:38.185029 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d41b7f7e-d604-4b4a-95a1-f98cfb21202b-kube-proxy\") pod \"kube-proxy-qbszz\" (UID: 
\"d41b7f7e-d604-4b4a-95a1-f98cfb21202b\") " pod="kube-system/kube-proxy-qbszz" Jan 30 05:01:38.185609 kubelet[2755]: I0130 05:01:38.185153 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-xtables-lock\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.185609 kubelet[2755]: I0130 05:01:38.185214 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjgsn\" (UniqueName: \"kubernetes.io/projected/51133790-9284-44a2-b5e7-702b91c05960-kube-api-access-rjgsn\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.185609 kubelet[2755]: I0130 05:01:38.185300 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-hostproc\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.185609 kubelet[2755]: I0130 05:01:38.185336 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cni-path\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.186707 kubelet[2755]: I0130 05:01:38.186215 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d41b7f7e-d604-4b4a-95a1-f98cfb21202b-xtables-lock\") pod \"kube-proxy-qbszz\" (UID: \"d41b7f7e-d604-4b4a-95a1-f98cfb21202b\") " pod="kube-system/kube-proxy-qbszz" Jan 30 05:01:38.186707 kubelet[2755]: I0130 05:01:38.186369 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-host-proc-sys-kernel\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.186707 kubelet[2755]: I0130 05:01:38.186411 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-etc-cni-netd\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.186707 kubelet[2755]: I0130 05:01:38.186490 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-lib-modules\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.186707 kubelet[2755]: I0130 05:01:38.186546 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51133790-9284-44a2-b5e7-702b91c05960-cilium-config-path\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.186707 kubelet[2755]: I0130 05:01:38.186563 2755 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-bpf-maps\") pod \"cilium-dsq6s\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") " pod="kube-system/cilium-dsq6s" Jan 30 05:01:38.253337 kubelet[2755]: I0130 05:01:38.253273 2755 topology_manager.go:215] "Topology Admit Handler" podUID="602a7ae4-0558-41a6-9042-1ffabd97b3fd" podNamespace="kube-system" podName="cilium-operator-599987898-hmmrx" Jan 30 05:01:38.289015 kubelet[2755]: I0130 05:01:38.288972 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2pjw\" (UniqueName: \"kubernetes.io/projected/602a7ae4-0558-41a6-9042-1ffabd97b3fd-kube-api-access-q2pjw\") pod \"cilium-operator-599987898-hmmrx\" (UID: \"602a7ae4-0558-41a6-9042-1ffabd97b3fd\") " pod="kube-system/cilium-operator-599987898-hmmrx" Jan 30 05:01:38.289917 kubelet[2755]: I0130 05:01:38.289269 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/602a7ae4-0558-41a6-9042-1ffabd97b3fd-cilium-config-path\") pod \"cilium-operator-599987898-hmmrx\" (UID: \"602a7ae4-0558-41a6-9042-1ffabd97b3fd\") " pod="kube-system/cilium-operator-599987898-hmmrx" Jan 30 05:01:38.443186 kubelet[2755]: E0130 05:01:38.441924 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:38.444060 containerd[1596]: time="2025-01-30T05:01:38.443972416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbszz,Uid:d41b7f7e-d604-4b4a-95a1-f98cfb21202b,Namespace:kube-system,Attempt:0,}" Jan 30 05:01:38.446512 kubelet[2755]: E0130 05:01:38.446008 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:38.447321 containerd[1596]: time="2025-01-30T05:01:38.446996944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dsq6s,Uid:51133790-9284-44a2-b5e7-702b91c05960,Namespace:kube-system,Attempt:0,}" Jan 30 05:01:38.494759 containerd[1596]: time="2025-01-30T05:01:38.494576504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:01:38.495500 containerd[1596]: time="2025-01-30T05:01:38.495146337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:01:38.495500 containerd[1596]: time="2025-01-30T05:01:38.495270950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:38.495500 containerd[1596]: time="2025-01-30T05:01:38.495407674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:38.498241 containerd[1596]: time="2025-01-30T05:01:38.496963211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:01:38.498241 containerd[1596]: time="2025-01-30T05:01:38.497083327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:01:38.498241 containerd[1596]: time="2025-01-30T05:01:38.497134674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:38.498241 containerd[1596]: time="2025-01-30T05:01:38.498171437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:38.564971 kubelet[2755]: E0130 05:01:38.563503 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:38.567063 containerd[1596]: time="2025-01-30T05:01:38.567013073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hmmrx,Uid:602a7ae4-0558-41a6-9042-1ffabd97b3fd,Namespace:kube-system,Attempt:0,}" Jan 30 05:01:38.570079 containerd[1596]: time="2025-01-30T05:01:38.569909988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dsq6s,Uid:51133790-9284-44a2-b5e7-702b91c05960,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\"" Jan 30 05:01:38.571545 kubelet[2755]: E0130 05:01:38.571499 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:38.577202 containerd[1596]: time="2025-01-30T05:01:38.576946302Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 05:01:38.585575 containerd[1596]: time="2025-01-30T05:01:38.585409902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbszz,Uid:d41b7f7e-d604-4b4a-95a1-f98cfb21202b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f8e673e8b42e3e3cd44e19f048ddd8ddc8824e39d3788fe006c23f67acd4576\"" Jan 30 05:01:38.588880 kubelet[2755]: E0130 05:01:38.588708 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:38.594829 containerd[1596]: time="2025-01-30T05:01:38.593764599Z" level=info msg="CreateContainer within sandbox \"6f8e673e8b42e3e3cd44e19f048ddd8ddc8824e39d3788fe006c23f67acd4576\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 05:01:38.621486 containerd[1596]: time="2025-01-30T05:01:38.621278125Z" level=info msg="CreateContainer within sandbox \"6f8e673e8b42e3e3cd44e19f048ddd8ddc8824e39d3788fe006c23f67acd4576\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"72c163f3f08a1346d9b20492649480accde7a5c4ba92beacca30a045f4bffdc1\"" Jan 30 05:01:38.623140 containerd[1596]: time="2025-01-30T05:01:38.623078086Z" level=info msg="StartContainer for \"72c163f3f08a1346d9b20492649480accde7a5c4ba92beacca30a045f4bffdc1\"" Jan 30 05:01:38.632652 containerd[1596]: time="2025-01-30T05:01:38.632254521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:01:38.632652 containerd[1596]: time="2025-01-30T05:01:38.632351123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:01:38.632652 containerd[1596]: time="2025-01-30T05:01:38.632376128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:38.632652 containerd[1596]: time="2025-01-30T05:01:38.632501754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:01:38.725793 containerd[1596]: time="2025-01-30T05:01:38.725261191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hmmrx,Uid:602a7ae4-0558-41a6-9042-1ffabd97b3fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914\"" Jan 30 05:01:38.730045 kubelet[2755]: E0130 05:01:38.729982 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:38.742351 containerd[1596]: time="2025-01-30T05:01:38.742111086Z" level=info msg="StartContainer for \"72c163f3f08a1346d9b20492649480accde7a5c4ba92beacca30a045f4bffdc1\" returns successfully" Jan 30 05:01:39.076051 kubelet[2755]: E0130 05:01:39.075948 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:43.673221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2940678108.mount: Deactivated successfully. Jan 30 05:01:45.026581 kubelet[2755]: I0130 05:01:45.026421 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbszz" podStartSLOduration=7.026395237 podStartE2EDuration="7.026395237s" podCreationTimestamp="2025-01-30 05:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:01:39.090830683 +0000 UTC m=+14.390262291" watchObservedRunningTime="2025-01-30 05:01:45.026395237 +0000 UTC m=+20.325826841" Jan 30 05:01:46.353852 containerd[1596]: time="2025-01-30T05:01:46.353772398Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:46.356039 containerd[1596]: time="2025-01-30T05:01:46.355911495Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 05:01:46.357823 containerd[1596]: time="2025-01-30T05:01:46.356417735Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:46.359342 containerd[1596]: time="2025-01-30T05:01:46.359162101Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.782154791s" Jan 30 05:01:46.359342 containerd[1596]: time="2025-01-30T05:01:46.359230637Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 05:01:46.361126 containerd[1596]: time="2025-01-30T05:01:46.361075684Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 05:01:46.363923 containerd[1596]: time="2025-01-30T05:01:46.363706459Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 05:01:46.467347 containerd[1596]: time="2025-01-30T05:01:46.467170452Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\"" Jan 30 05:01:46.468286 containerd[1596]: time="2025-01-30T05:01:46.468026996Z" level=info msg="StartContainer for \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\"" Jan 30 05:01:46.607024 systemd[1]: run-containerd-runc-k8s.io-6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905-runc.f9sESm.mount: Deactivated successfully. Jan 30 05:01:46.665878 containerd[1596]: time="2025-01-30T05:01:46.663877876Z" level=info msg="StartContainer for \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\" returns successfully" Jan 30 05:01:46.861032 containerd[1596]: time="2025-01-30T05:01:46.842645977Z" level=info msg="shim disconnected" id=6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905 namespace=k8s.io Jan 30 05:01:46.861032 containerd[1596]: time="2025-01-30T05:01:46.860803665Z" level=warning msg="cleaning up after shim disconnected" id=6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905 namespace=k8s.io Jan 30 05:01:46.861032 containerd[1596]: time="2025-01-30T05:01:46.860822286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:01:46.881105 containerd[1596]: time="2025-01-30T05:01:46.881014851Z" level=warning msg="cleanup warnings time=\"2025-01-30T05:01:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 05:01:47.104865 kubelet[2755]: E0130 05:01:47.104827 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:47.119795 containerd[1596]: time="2025-01-30T05:01:47.116546934Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 05:01:47.145969 containerd[1596]: time="2025-01-30T05:01:47.145850208Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\"" Jan 30 05:01:47.147584 containerd[1596]: time="2025-01-30T05:01:47.146886933Z" level=info msg="StartContainer for \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\"" Jan 30 05:01:47.216522 containerd[1596]: 
time="2025-01-30T05:01:47.216388575Z" level=info msg="StartContainer for \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\" returns successfully" Jan 30 05:01:47.230614 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 05:01:47.231289 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:01:47.231482 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:01:47.245143 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:01:47.270623 containerd[1596]: time="2025-01-30T05:01:47.270535218Z" level=info msg="shim disconnected" id=09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee namespace=k8s.io Jan 30 05:01:47.271336 containerd[1596]: time="2025-01-30T05:01:47.271032615Z" level=warning msg="cleaning up after shim disconnected" id=09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee namespace=k8s.io Jan 30 05:01:47.271336 containerd[1596]: time="2025-01-30T05:01:47.271266917Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:01:47.284744 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:01:47.459278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905-rootfs.mount: Deactivated successfully. Jan 30 05:01:48.109939 kubelet[2755]: E0130 05:01:48.109897 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:48.127912 containerd[1596]: time="2025-01-30T05:01:48.127703205Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 05:01:48.166395 containerd[1596]: time="2025-01-30T05:01:48.166319677Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\"" Jan 30 05:01:48.167673 containerd[1596]: time="2025-01-30T05:01:48.167620020Z" level=info msg="StartContainer for \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\"" Jan 30 05:01:48.314729 containerd[1596]: time="2025-01-30T05:01:48.314499547Z" level=info msg="StartContainer for \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\" returns successfully" Jan 30 05:01:48.380058 containerd[1596]: time="2025-01-30T05:01:48.379443162Z" level=info msg="shim disconnected" id=6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60 namespace=k8s.io Jan 30 05:01:48.380058 containerd[1596]: time="2025-01-30T05:01:48.379573484Z" level=warning msg="cleaning up after shim disconnected" id=6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60 namespace=k8s.io Jan 30 05:01:48.380058 containerd[1596]: time="2025-01-30T05:01:48.379589114Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:01:48.458485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60-rootfs.mount: Deactivated successfully. 
Jan 30 05:01:49.060390 containerd[1596]: time="2025-01-30T05:01:49.060311967Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:49.061217 containerd[1596]: time="2025-01-30T05:01:49.061154827Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 05:01:49.062190 containerd[1596]: time="2025-01-30T05:01:49.062143384Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:01:49.064610 containerd[1596]: time="2025-01-30T05:01:49.064534534Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.703403917s" Jan 30 05:01:49.064860 containerd[1596]: time="2025-01-30T05:01:49.064830584Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 05:01:49.081701 containerd[1596]: time="2025-01-30T05:01:49.081585008Z" level=info msg="CreateContainer within sandbox \"028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 05:01:49.099604 containerd[1596]: time="2025-01-30T05:01:49.099528234Z" level=info msg="CreateContainer within sandbox \"028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\"" Jan 30 05:01:49.102185 containerd[1596]: time="2025-01-30T05:01:49.100358478Z" level=info msg="StartContainer for \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\"" Jan 30 05:01:49.119849 kubelet[2755]: E0130 05:01:49.117256 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:49.128098 containerd[1596]: time="2025-01-30T05:01:49.127881218Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 05:01:49.154718 containerd[1596]: time="2025-01-30T05:01:49.152937226Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\"" Jan 30 05:01:49.158905 containerd[1596]: time="2025-01-30T05:01:49.158217721Z" level=info msg="StartContainer for \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\"" Jan 30 05:01:49.287956 containerd[1596]: time="2025-01-30T05:01:49.286534683Z" level=info msg="StartContainer for 
\"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\" returns successfully" Jan 30 05:01:49.304289 containerd[1596]: time="2025-01-30T05:01:49.304218823Z" level=info msg="StartContainer for \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\" returns successfully" Jan 30 05:01:49.386156 containerd[1596]: time="2025-01-30T05:01:49.385773370Z" level=info msg="shim disconnected" id=e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb namespace=k8s.io Jan 30 05:01:49.386156 containerd[1596]: time="2025-01-30T05:01:49.385856174Z" level=warning msg="cleaning up after shim disconnected" id=e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb namespace=k8s.io Jan 30 05:01:49.386156 containerd[1596]: time="2025-01-30T05:01:49.385869935Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:01:50.138624 kubelet[2755]: E0130 05:01:50.137939 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:50.148451 kubelet[2755]: E0130 05:01:50.146172 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:50.152777 containerd[1596]: time="2025-01-30T05:01:50.152617736Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 05:01:50.203593 containerd[1596]: time="2025-01-30T05:01:50.203487782Z" level=info msg="CreateContainer within sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\"" Jan 30 05:01:50.207797 containerd[1596]: time="2025-01-30T05:01:50.206703753Z" level=info msg="StartContainer for \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\"" Jan 30 05:01:50.470198 containerd[1596]: time="2025-01-30T05:01:50.470038343Z" level=info msg="StartContainer for \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\" returns successfully" Jan 30 05:01:50.781834 kubelet[2755]: I0130 05:01:50.780049 2755 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 05:01:50.909773 kubelet[2755]: I0130 05:01:50.909436 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-hmmrx" podStartSLOduration=2.577140983 podStartE2EDuration="12.909400933s" podCreationTimestamp="2025-01-30 05:01:38 +0000 UTC" firstStartedPulling="2025-01-30 05:01:38.733993899 +0000 UTC m=+14.033425483" lastFinishedPulling="2025-01-30 05:01:49.066253846 +0000 UTC m=+24.365685433" observedRunningTime="2025-01-30 05:01:50.399382265 +0000 UTC m=+25.698813881" watchObservedRunningTime="2025-01-30 05:01:50.909400933 +0000 UTC m=+26.208832541" Jan 30 05:01:50.911022 kubelet[2755]: I0130 05:01:50.910676 2755 topology_manager.go:215] "Topology Admit Handler" podUID="ec5e7277-fb85-4648-b7e0-c5d7ca726f33" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rhf8b" Jan 30 05:01:50.923255 kubelet[2755]: I0130 05:01:50.923186 2755 topology_manager.go:215] "Topology Admit Handler" podUID="6e42bfc6-a5b5-4e9a-b11b-24b639dd35c6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bnjwq" Jan 30 
05:01:51.012175 kubelet[2755]: I0130 05:01:51.011446 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6zs\" (UniqueName: \"kubernetes.io/projected/ec5e7277-fb85-4648-b7e0-c5d7ca726f33-kube-api-access-xv6zs\") pod \"coredns-7db6d8ff4d-rhf8b\" (UID: \"ec5e7277-fb85-4648-b7e0-c5d7ca726f33\") " pod="kube-system/coredns-7db6d8ff4d-rhf8b" Jan 30 05:01:51.012175 kubelet[2755]: I0130 05:01:51.012048 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec5e7277-fb85-4648-b7e0-c5d7ca726f33-config-volume\") pod \"coredns-7db6d8ff4d-rhf8b\" (UID: \"ec5e7277-fb85-4648-b7e0-c5d7ca726f33\") " pod="kube-system/coredns-7db6d8ff4d-rhf8b" Jan 30 05:01:51.116015 kubelet[2755]: I0130 05:01:51.114957 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e42bfc6-a5b5-4e9a-b11b-24b639dd35c6-config-volume\") pod \"coredns-7db6d8ff4d-bnjwq\" (UID: \"6e42bfc6-a5b5-4e9a-b11b-24b639dd35c6\") " pod="kube-system/coredns-7db6d8ff4d-bnjwq" Jan 30 05:01:51.116015 kubelet[2755]: I0130 05:01:51.115263 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29tmc\" (UniqueName: \"kubernetes.io/projected/6e42bfc6-a5b5-4e9a-b11b-24b639dd35c6-kube-api-access-29tmc\") pod \"coredns-7db6d8ff4d-bnjwq\" (UID: \"6e42bfc6-a5b5-4e9a-b11b-24b639dd35c6\") " pod="kube-system/coredns-7db6d8ff4d-bnjwq" Jan 30 05:01:51.174721 kubelet[2755]: E0130 05:01:51.172812 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:51.174721 kubelet[2755]: E0130 05:01:51.172971 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:51.218791 kubelet[2755]: E0130 05:01:51.218134 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:51.224502 containerd[1596]: time="2025-01-30T05:01:51.224415497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rhf8b,Uid:ec5e7277-fb85-4648-b7e0-c5d7ca726f33,Namespace:kube-system,Attempt:0,}" Jan 30 05:01:51.533204 kubelet[2755]: E0130 05:01:51.531651 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:51.535914 containerd[1596]: time="2025-01-30T05:01:51.535219360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bnjwq,Uid:6e42bfc6-a5b5-4e9a-b11b-24b639dd35c6,Namespace:kube-system,Attempt:0,}" Jan 30 05:01:52.171007 kubelet[2755]: E0130 05:01:52.170954 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:53.173476 kubelet[2755]: E0130 05:01:53.173342 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:53.197278 systemd-networkd[1227]: cilium_host: Link UP Jan 30 05:01:53.199092 systemd-networkd[1227]: cilium_net: Link UP Jan 30 05:01:53.201643 systemd-networkd[1227]: cilium_net: Gained carrier Jan 30 05:01:53.202469 systemd-networkd[1227]: cilium_host: Gained carrier Jan 30 05:01:53.350532 systemd-networkd[1227]: cilium_vxlan: Link UP Jan 30 05:01:53.350544 systemd-networkd[1227]: cilium_vxlan: Gained carrier Jan 30 05:01:53.452922 systemd-networkd[1227]: cilium_net: Gained IPv6LL Jan 30 05:01:53.800943 kernel: NET: Registered PF_ALG protocol family Jan 30 05:01:54.150722 systemd-networkd[1227]: cilium_host: Gained IPv6LL Jan 30 05:01:54.177356 kubelet[2755]: E0130 05:01:54.177310 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:54.840784 systemd-networkd[1227]: lxc_health: Link UP Jan 30 05:01:54.850488 systemd-networkd[1227]: lxc_health: Gained carrier Jan 30 05:01:54.980936 systemd-networkd[1227]: cilium_vxlan: Gained IPv6LL Jan 30 05:01:55.102378 systemd-networkd[1227]: lxc5340ed6da5e4: Link UP Jan 30 05:01:55.110009 kernel: eth0: renamed from tmpfaf7b Jan 30 05:01:55.118262 systemd-networkd[1227]: lxc5340ed6da5e4: Gained carrier Jan 30 05:01:55.460734 kernel: eth0: renamed from tmpe9ff9 Jan 30 05:01:55.458281 systemd-networkd[1227]: lxcd607fca3c744: Link UP Jan 30 05:01:55.471413 systemd-networkd[1227]: lxcd607fca3c744: Gained carrier Jan 30 05:01:56.004864 systemd-networkd[1227]: lxc_health: Gained IPv6LL Jan 30 05:01:56.448850 kubelet[2755]: E0130 05:01:56.448809 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:01:56.477555 kubelet[2755]: I0130 05:01:56.477477 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dsq6s" podStartSLOduration=10.690154024 podStartE2EDuration="18.477453536s" podCreationTimestamp="2025-01-30 05:01:38 +0000 UTC" firstStartedPulling="2025-01-30 05:01:38.573372297 +0000 UTC m=+13.872803883" lastFinishedPulling="2025-01-30 05:01:46.360671812 +0000 UTC m=+21.660103395" observedRunningTime="2025-01-30 05:01:51.21189454 +0000 UTC m=+26.511326145" watchObservedRunningTime="2025-01-30 05:01:56.477453536 +0000 UTC m=+31.776885141" Jan 30 05:01:56.901166 systemd-networkd[1227]: lxcd607fca3c744: Gained IPv6LL Jan 30 05:01:56.966049 systemd-networkd[1227]: lxc5340ed6da5e4: Gained IPv6LL Jan 30 05:01:57.189569 kubelet[2755]: E0130 05:01:57.189320 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:00.837845 containerd[1596]: time="2025-01-30T05:02:00.836869451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:00.844748 containerd[1596]: time="2025-01-30T05:02:00.837648766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:00.844748 containerd[1596]: time="2025-01-30T05:02:00.842630491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:00.845548 containerd[1596]: time="2025-01-30T05:02:00.844949035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:00.927724 containerd[1596]: time="2025-01-30T05:02:00.927162326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:00.927724 containerd[1596]: time="2025-01-30T05:02:00.927250624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:00.927724 containerd[1596]: time="2025-01-30T05:02:00.927262913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:00.929830 containerd[1596]: time="2025-01-30T05:02:00.929625120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:01.067718 containerd[1596]: time="2025-01-30T05:02:01.067592610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bnjwq,Uid:6e42bfc6-a5b5-4e9a-b11b-24b639dd35c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"faf7b1fc38b1bec174704d80bf2923d53260da122ebb67dd7e00e15150f3f33d\"" Jan 30 05:02:01.074333 kubelet[2755]: E0130 05:02:01.073378 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:01.093480 containerd[1596]: time="2025-01-30T05:02:01.090171838Z" level=info msg="CreateContainer within sandbox \"faf7b1fc38b1bec174704d80bf2923d53260da122ebb67dd7e00e15150f3f33d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:02:01.130384 containerd[1596]: time="2025-01-30T05:02:01.130286553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rhf8b,Uid:ec5e7277-fb85-4648-b7e0-c5d7ca726f33,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9ff9716493852b8782d4185f4976b2be4fda5cc74a76daadaaaeb85d2d472b5\"" Jan 30 05:02:01.135308 kubelet[2755]: E0130 05:02:01.135046 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:01.149564 containerd[1596]: time="2025-01-30T05:02:01.147882090Z" level=info msg="CreateContainer within sandbox \"e9ff9716493852b8782d4185f4976b2be4fda5cc74a76daadaaaeb85d2d472b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:02:01.164748 containerd[1596]: time="2025-01-30T05:02:01.164664240Z" level=info msg="CreateContainer within sandbox \"faf7b1fc38b1bec174704d80bf2923d53260da122ebb67dd7e00e15150f3f33d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41cf2ad0d8a14a9147f2a918c0499310f17832fc90c284b7f02a4a3fb4c9bb3e\"" Jan 30 05:02:01.166883 containerd[1596]: time="2025-01-30T05:02:01.166837892Z" level=info msg="StartContainer for \"41cf2ad0d8a14a9147f2a918c0499310f17832fc90c284b7f02a4a3fb4c9bb3e\"" Jan 30 05:02:01.176255 containerd[1596]: time="2025-01-30T05:02:01.175382830Z" level=info msg="CreateContainer within sandbox \"e9ff9716493852b8782d4185f4976b2be4fda5cc74a76daadaaaeb85d2d472b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"70124fe7dcaaac57f65efbe45c7d203b36f5c8904f13038b655bf989f5c66cb3\"" Jan 30 05:02:01.179956 containerd[1596]: time="2025-01-30T05:02:01.179904355Z" level=info msg="StartContainer for \"70124fe7dcaaac57f65efbe45c7d203b36f5c8904f13038b655bf989f5c66cb3\"" Jan 30 05:02:01.286658 containerd[1596]: time="2025-01-30T05:02:01.286584838Z" level=info msg="StartContainer for \"41cf2ad0d8a14a9147f2a918c0499310f17832fc90c284b7f02a4a3fb4c9bb3e\" returns successfully" Jan 30 05:02:01.322787 containerd[1596]: time="2025-01-30T05:02:01.322632589Z" level=info msg="StartContainer for \"70124fe7dcaaac57f65efbe45c7d203b36f5c8904f13038b655bf989f5c66cb3\" returns successfully" Jan 30 05:02:01.855864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494497163.mount: Deactivated successfully. Jan 30 05:02:02.222495 kubelet[2755]: E0130 05:02:02.219775 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:02.222495 kubelet[2755]: E0130 05:02:02.221627 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:02.262849 kubelet[2755]: I0130 05:02:02.260376 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bnjwq" podStartSLOduration=24.260347977 podStartE2EDuration="24.260347977s" podCreationTimestamp="2025-01-30 05:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:02.255332189 +0000 UTC m=+37.554763842" watchObservedRunningTime="2025-01-30 05:02:02.260347977 +0000 UTC m=+37.559779589" Jan 30 05:02:02.298754 kubelet[2755]: I0130 05:02:02.296431 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rhf8b" podStartSLOduration=24.296402979 podStartE2EDuration="24.296402979s" podCreationTimestamp="2025-01-30 05:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:02.292788919 +0000 UTC m=+37.592220527" watchObservedRunningTime="2025-01-30 05:02:02.296402979 +0000 UTC m=+37.595834589" Jan 30 05:02:02.701512 systemd[1]: Started sshd@7-146.190.174.183:22-147.75.109.163:37706.service - OpenSSH per-connection server daemon (147.75.109.163:37706). Jan 30 05:02:02.800968 sshd[4133]: Accepted publickey for core from 147.75.109.163 port 37706 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:02.803996 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:02.815171 systemd-logind[1574]: New session 8 of user core. Jan 30 05:02:02.823193 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 05:02:03.226002 kubelet[2755]: E0130 05:02:03.225762 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:03.226002 kubelet[2755]: E0130 05:02:03.225866 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:03.498396 sshd[4133]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:03.503767 systemd[1]: sshd@7-146.190.174.183:22-147.75.109.163:37706.service: Deactivated successfully. Jan 30 05:02:03.515663 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 05:02:03.516865 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Jan 30 05:02:03.518588 systemd-logind[1574]: Removed session 8. Jan 30 05:02:04.227738 kubelet[2755]: E0130 05:02:04.227324 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:04.230008 kubelet[2755]: E0130 05:02:04.229900 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:05.229055 kubelet[2755]: E0130 05:02:05.229013 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:08.511752 systemd[1]: Started sshd@8-146.190.174.183:22-147.75.109.163:56076.service - OpenSSH per-connection server daemon (147.75.109.163:56076). Jan 30 05:02:08.564056 sshd[4153]: Accepted publickey for core from 147.75.109.163 port 56076 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:08.566586 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:08.574085 systemd-logind[1574]: New session 9 of user core. Jan 30 05:02:08.577150 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 05:02:08.749791 sshd[4153]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:08.755545 systemd[1]: sshd@8-146.190.174.183:22-147.75.109.163:56076.service: Deactivated successfully. Jan 30 05:02:08.763915 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 05:02:08.766447 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Jan 30 05:02:08.768313 systemd-logind[1574]: Removed session 9. Jan 30 05:02:13.761079 systemd[1]: Started sshd@9-146.190.174.183:22-147.75.109.163:56092.service - OpenSSH per-connection server daemon (147.75.109.163:56092). Jan 30 05:02:13.816499 sshd[4170]: Accepted publickey for core from 147.75.109.163 port 56092 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:13.818802 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:13.826718 systemd-logind[1574]: New session 10 of user core. Jan 30 05:02:13.832761 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 05:02:13.999529 sshd[4170]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:14.005900 systemd[1]: sshd@9-146.190.174.183:22-147.75.109.163:56092.service: Deactivated successfully. 
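The dns.go "Nameserver limits exceeded" warnings that recur throughout this log are the kubelet clamping pod resolv.conf content to the classic three-nameserver resolver limit (glibc's MAXNS). The applied line it settles on, "67.207.67.2 67.207.67.3 67.207.67.2", even carries a duplicate, so the droplet's /etc/resolv.conf presumably lists more than three entries. A minimal, illustrative clamp in Go (not kubelet's actual implementation):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the glibc MAXNS limit of 3 that kubelet's
// dns.go warns about when it has to drop entries.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" entry, duplicates included.
	var servers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	// Truncate past the limit, warning like the log entries above.
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, omitting: %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```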
Jan 30 05:02:14.007143 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. Jan 30 05:02:14.014343 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 05:02:14.016137 systemd-logind[1574]: Removed session 10. Jan 30 05:02:19.013086 systemd[1]: Started sshd@10-146.190.174.183:22-147.75.109.163:40810.service - OpenSSH per-connection server daemon (147.75.109.163:40810). Jan 30 05:02:19.057856 sshd[4184]: Accepted publickey for core from 147.75.109.163 port 40810 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:19.060191 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:19.066960 systemd-logind[1574]: New session 11 of user core. Jan 30 05:02:19.072404 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 05:02:19.226052 sshd[4184]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:19.237299 systemd[1]: Started sshd@11-146.190.174.183:22-147.75.109.163:40812.service - OpenSSH per-connection server daemon (147.75.109.163:40812). Jan 30 05:02:19.238008 systemd[1]: sshd@10-146.190.174.183:22-147.75.109.163:40810.service: Deactivated successfully. Jan 30 05:02:19.251040 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 05:02:19.252557 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. Jan 30 05:02:19.254146 systemd-logind[1574]: Removed session 11. Jan 30 05:02:19.292754 sshd[4196]: Accepted publickey for core from 147.75.109.163 port 40812 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:19.295072 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:19.301038 systemd-logind[1574]: New session 12 of user core. Jan 30 05:02:19.315357 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 05:02:19.557034 sshd[4196]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:19.576715 systemd[1]: Started sshd@12-146.190.174.183:22-147.75.109.163:40828.service - OpenSSH per-connection server daemon (147.75.109.163:40828). Jan 30 05:02:19.578841 systemd[1]: sshd@11-146.190.174.183:22-147.75.109.163:40812.service: Deactivated successfully. Jan 30 05:02:19.593612 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 05:02:19.610370 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. Jan 30 05:02:19.623133 systemd-logind[1574]: Removed session 12. Jan 30 05:02:19.683635 sshd[4208]: Accepted publickey for core from 147.75.109.163 port 40828 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:19.684518 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:19.692790 systemd-logind[1574]: New session 13 of user core. Jan 30 05:02:19.699145 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 05:02:19.869020 sshd[4208]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:19.875229 systemd[1]: sshd@12-146.190.174.183:22-147.75.109.163:40828.service: Deactivated successfully. Jan 30 05:02:19.882385 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 05:02:19.883621 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. Jan 30 05:02:19.885565 systemd-logind[1574]: Removed session 13. Jan 30 05:02:24.881130 systemd[1]: Started sshd@13-146.190.174.183:22-147.75.109.163:40834.service - OpenSSH per-connection server daemon (147.75.109.163:40834). 
Jan 30 05:02:24.971395 sshd[4224]: Accepted publickey for core from 147.75.109.163 port 40834 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:24.977978 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:25.009827 systemd-logind[1574]: New session 14 of user core. Jan 30 05:02:25.012928 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 05:02:25.180186 sshd[4224]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:25.186223 systemd[1]: sshd@13-146.190.174.183:22-147.75.109.163:40834.service: Deactivated successfully. Jan 30 05:02:25.192276 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 05:02:25.196194 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit. Jan 30 05:02:25.198258 systemd-logind[1574]: Removed session 14. Jan 30 05:02:30.194314 systemd[1]: Started sshd@14-146.190.174.183:22-147.75.109.163:36680.service - OpenSSH per-connection server daemon (147.75.109.163:36680). Jan 30 05:02:30.262133 sshd[4240]: Accepted publickey for core from 147.75.109.163 port 36680 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:30.264779 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:30.272340 systemd-logind[1574]: New session 15 of user core. Jan 30 05:02:30.281449 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 05:02:30.441104 sshd[4240]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:30.453440 systemd[1]: Started sshd@15-146.190.174.183:22-147.75.109.163:36684.service - OpenSSH per-connection server daemon (147.75.109.163:36684). Jan 30 05:02:30.455075 systemd[1]: sshd@14-146.190.174.183:22-147.75.109.163:36680.service: Deactivated successfully. Jan 30 05:02:30.465834 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 05:02:30.469603 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Jan 30 05:02:30.472795 systemd-logind[1574]: Removed session 15. Jan 30 05:02:30.509955 sshd[4251]: Accepted publickey for core from 147.75.109.163 port 36684 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:30.512136 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:30.519638 systemd-logind[1574]: New session 16 of user core. Jan 30 05:02:30.526432 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 05:02:30.877110 sshd[4251]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:30.886305 systemd[1]: Started sshd@16-146.190.174.183:22-147.75.109.163:36688.service - OpenSSH per-connection server daemon (147.75.109.163:36688). Jan 30 05:02:30.887079 systemd[1]: sshd@15-146.190.174.183:22-147.75.109.163:36684.service: Deactivated successfully. Jan 30 05:02:30.894508 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit. Jan 30 05:02:30.897762 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 05:02:30.899325 systemd-logind[1574]: Removed session 16. Jan 30 05:02:30.956858 sshd[4262]: Accepted publickey for core from 147.75.109.163 port 36688 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:30.959202 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:30.968489 systemd-logind[1574]: New session 17 of user core. Jan 30 05:02:30.974878 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 30 05:02:33.152392 sshd[4262]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:33.176117 systemd[1]: Started sshd@17-146.190.174.183:22-147.75.109.163:36696.service - OpenSSH per-connection server daemon (147.75.109.163:36696).
Jan 30 05:02:33.176735 systemd[1]: sshd@16-146.190.174.183:22-147.75.109.163:36688.service: Deactivated successfully.
Jan 30 05:02:33.200587 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 05:02:33.200910 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit.
Jan 30 05:02:33.205864 systemd-logind[1574]: Removed session 17.
Jan 30 05:02:33.262197 sshd[4281]: Accepted publickey for core from 147.75.109.163 port 36696 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:33.264394 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:33.270321 systemd-logind[1574]: New session 18 of user core.
Jan 30 05:02:33.276248 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 05:02:33.655907 sshd[4281]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:33.669752 systemd[1]: Started sshd@18-146.190.174.183:22-147.75.109.163:36706.service - OpenSSH per-connection server daemon (147.75.109.163:36706).
Jan 30 05:02:33.673578 systemd[1]: sshd@17-146.190.174.183:22-147.75.109.163:36696.service: Deactivated successfully.
Jan 30 05:02:33.684294 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 05:02:33.686367 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit.
Jan 30 05:02:33.690601 systemd-logind[1574]: Removed session 18.
Jan 30 05:02:33.723979 sshd[4293]: Accepted publickey for core from 147.75.109.163 port 36706 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:33.727145 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:33.735011 systemd-logind[1574]: New session 19 of user core.
Jan 30 05:02:33.744255 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 05:02:33.904071 sshd[4293]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:33.909597 systemd[1]: sshd@18-146.190.174.183:22-147.75.109.163:36706.service: Deactivated successfully.
Jan 30 05:02:33.916046 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 05:02:33.917512 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit.
Jan 30 05:02:33.919023 systemd-logind[1574]: Removed session 19.
Jan 30 05:02:33.973892 kubelet[2755]: E0130 05:02:33.973709 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:02:34.973615 kubelet[2755]: E0130 05:02:34.973055 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:02:38.913053 systemd[1]: Started sshd@19-146.190.174.183:22-147.75.109.163:36972.service - OpenSSH per-connection server daemon (147.75.109.163:36972).
Jan 30 05:02:38.962257 sshd[4313]: Accepted publickey for core from 147.75.109.163 port 36972 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:38.964468 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:38.971794 systemd-logind[1574]: New session 20 of user core.
Jan 30 05:02:38.975505 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 05:02:39.120426 sshd[4313]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:39.125995 systemd[1]: sshd@19-146.190.174.183:22-147.75.109.163:36972.service: Deactivated successfully.
Jan 30 05:02:39.131052 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit.
Jan 30 05:02:39.131709 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 05:02:39.133473 systemd-logind[1574]: Removed session 20.
Jan 30 05:02:44.135168 systemd[1]: Started sshd@20-146.190.174.183:22-147.75.109.163:36986.service - OpenSSH per-connection server daemon (147.75.109.163:36986).
Jan 30 05:02:44.183716 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 36986 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:44.185443 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:44.194932 systemd-logind[1574]: New session 21 of user core.
Jan 30 05:02:44.204253 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 05:02:44.362966 sshd[4329]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:44.367784 systemd[1]: sshd@20-146.190.174.183:22-147.75.109.163:36986.service: Deactivated successfully.
Jan 30 05:02:44.376019 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 05:02:44.377531 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit.
Jan 30 05:02:44.379185 systemd-logind[1574]: Removed session 21.
Jan 30 05:02:48.974792 kubelet[2755]: E0130 05:02:48.974670 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:02:49.374250 systemd[1]: Started sshd@21-146.190.174.183:22-147.75.109.163:52514.service - OpenSSH per-connection server daemon (147.75.109.163:52514).
Jan 30 05:02:49.424726 sshd[4343]: Accepted publickey for core from 147.75.109.163 port 52514 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:49.427981 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:49.435378 systemd-logind[1574]: New session 22 of user core.
Jan 30 05:02:49.441236 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 05:02:49.590093 sshd[4343]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:49.597731 systemd[1]: sshd@21-146.190.174.183:22-147.75.109.163:52514.service: Deactivated successfully.
Jan 30 05:02:49.605798 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit.
Jan 30 05:02:49.606539 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 05:02:49.610244 systemd-logind[1574]: Removed session 22.
Jan 30 05:02:49.973791 kubelet[2755]: E0130 05:02:49.973668 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:02:54.606361 systemd[1]: Started sshd@22-146.190.174.183:22-147.75.109.163:52530.service - OpenSSH per-connection server daemon (147.75.109.163:52530).
Jan 30 05:02:54.652657 sshd[4357]: Accepted publickey for core from 147.75.109.163 port 52530 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:54.655065 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:54.662268 systemd-logind[1574]: New session 23 of user core.
Jan 30 05:02:54.666292 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 05:02:54.818551 sshd[4357]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:54.831211 systemd[1]: Started sshd@23-146.190.174.183:22-147.75.109.163:52546.service - OpenSSH per-connection server daemon (147.75.109.163:52546).
Jan 30 05:02:54.835080 systemd[1]: sshd@22-146.190.174.183:22-147.75.109.163:52530.service: Deactivated successfully.
Jan 30 05:02:54.843260 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 05:02:54.846348 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit.
Jan 30 05:02:54.848068 systemd-logind[1574]: Removed session 23.
Jan 30 05:02:54.885364 sshd[4368]: Accepted publickey for core from 147.75.109.163 port 52546 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:54.887908 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:54.895915 systemd-logind[1574]: New session 24 of user core.
Jan 30 05:02:54.906311 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 05:02:56.531847 systemd[1]: run-containerd-runc-k8s.io-12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365-runc.E0ZaoM.mount: Deactivated successfully.
Jan 30 05:02:56.557310 containerd[1596]: time="2025-01-30T05:02:56.556994234Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 05:02:56.594005 containerd[1596]: time="2025-01-30T05:02:56.593886591Z" level=info msg="StopContainer for \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\" with timeout 2 (s)"
Jan 30 05:02:56.595532 containerd[1596]: time="2025-01-30T05:02:56.593890273Z" level=info msg="StopContainer for \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\" with timeout 30 (s)"
Jan 30 05:02:56.596193 containerd[1596]: time="2025-01-30T05:02:56.596163576Z" level=info msg="Stop container \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\" with signal terminated"
Jan 30 05:02:56.597007 containerd[1596]: time="2025-01-30T05:02:56.596979690Z" level=info msg="Stop container \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\" with signal terminated"
Jan 30 05:02:56.611904 systemd-networkd[1227]: lxc_health: Link DOWN
Jan 30 05:02:56.613319 systemd-networkd[1227]: lxc_health: Lost carrier
Jan 30 05:02:56.675686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a-rootfs.mount: Deactivated successfully.
Jan 30 05:02:56.684085 containerd[1596]: time="2025-01-30T05:02:56.683944997Z" level=info msg="shim disconnected" id=3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a namespace=k8s.io
Jan 30 05:02:56.685417 containerd[1596]: time="2025-01-30T05:02:56.685338263Z" level=warning msg="cleaning up after shim disconnected" id=3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a namespace=k8s.io
Jan 30 05:02:56.685417 containerd[1596]: time="2025-01-30T05:02:56.685412102Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:02:56.689485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365-rootfs.mount: Deactivated successfully.
Jan 30 05:02:56.695709 containerd[1596]: time="2025-01-30T05:02:56.695597684Z" level=info msg="shim disconnected" id=12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365 namespace=k8s.io
Jan 30 05:02:56.695709 containerd[1596]: time="2025-01-30T05:02:56.695675930Z" level=warning msg="cleaning up after shim disconnected" id=12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365 namespace=k8s.io
Jan 30 05:02:56.695709 containerd[1596]: time="2025-01-30T05:02:56.695708018Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:02:56.726474 containerd[1596]: time="2025-01-30T05:02:56.726417381Z" level=info msg="StopContainer for \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\" returns successfully"
Jan 30 05:02:56.727279 containerd[1596]: time="2025-01-30T05:02:56.727242197Z" level=info msg="StopContainer for \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\" returns successfully"
Jan 30 05:02:56.727976 containerd[1596]: time="2025-01-30T05:02:56.727842991Z" level=info msg="StopPodSandbox for \"028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914\""
Jan 30 05:02:56.727976 containerd[1596]: time="2025-01-30T05:02:56.727881337Z" level=info msg="Container to stop \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:02:56.728287 containerd[1596]: time="2025-01-30T05:02:56.728165417Z" level=info msg="StopPodSandbox for \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\""
Jan 30 05:02:56.728287 containerd[1596]: time="2025-01-30T05:02:56.728192060Z" level=info msg="Container to stop \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:02:56.728287 containerd[1596]: time="2025-01-30T05:02:56.728206104Z" level=info msg="Container to stop \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:02:56.728287 containerd[1596]: time="2025-01-30T05:02:56.728215402Z" level=info msg="Container to stop \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:02:56.728287 containerd[1596]: time="2025-01-30T05:02:56.728224332Z" level=info msg="Container to stop \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:02:56.728287 containerd[1596]: time="2025-01-30T05:02:56.728233456Z" level=info msg="Container to stop \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:02:56.733388 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914-shm.mount: Deactivated successfully.
Jan 30 05:02:56.733884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c-shm.mount: Deactivated successfully.
Jan 30 05:02:56.782621 containerd[1596]: time="2025-01-30T05:02:56.782480496Z" level=info msg="shim disconnected" id=c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c namespace=k8s.io
Jan 30 05:02:56.782621 containerd[1596]: time="2025-01-30T05:02:56.782535078Z" level=warning msg="cleaning up after shim disconnected" id=c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c namespace=k8s.io
Jan 30 05:02:56.782621 containerd[1596]: time="2025-01-30T05:02:56.782543555Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:02:56.790660 containerd[1596]: time="2025-01-30T05:02:56.790485486Z" level=info msg="shim disconnected" id=028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914 namespace=k8s.io
Jan 30 05:02:56.790660 containerd[1596]: time="2025-01-30T05:02:56.790547966Z" level=warning msg="cleaning up after shim disconnected" id=028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914 namespace=k8s.io
Jan 30 05:02:56.790660 containerd[1596]: time="2025-01-30T05:02:56.790558031Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:02:56.807015 containerd[1596]: time="2025-01-30T05:02:56.805394113Z" level=info msg="TearDown network for sandbox \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" successfully"
Jan 30 05:02:56.807015 containerd[1596]: time="2025-01-30T05:02:56.805431586Z" level=info msg="StopPodSandbox for \"c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c\" returns successfully"
Jan 30 05:02:56.825397 containerd[1596]: time="2025-01-30T05:02:56.824276306Z" level=warning msg="cleanup warnings time=\"2025-01-30T05:02:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 05:02:56.830812 containerd[1596]: time="2025-01-30T05:02:56.830765998Z" level=info msg="TearDown network for sandbox \"028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914\" successfully"
Jan 30 05:02:56.830812 containerd[1596]: time="2025-01-30T05:02:56.830800878Z" level=info msg="StopPodSandbox for \"028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914\" returns successfully"
Jan 30 05:02:56.856745 kubelet[2755]: I0130 05:02:56.856304 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-lib-modules\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.856745 kubelet[2755]: I0130 05:02:56.856385 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjgsn\" (UniqueName: \"kubernetes.io/projected/51133790-9284-44a2-b5e7-702b91c05960-kube-api-access-rjgsn\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.856745 kubelet[2755]: I0130 05:02:56.856410 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51133790-9284-44a2-b5e7-702b91c05960-cilium-config-path\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.856745 kubelet[2755]: I0130 05:02:56.856431 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51133790-9284-44a2-b5e7-702b91c05960-hubble-tls\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.856745 kubelet[2755]: I0130 05:02:56.856419 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.856745 kubelet[2755]: I0130 05:02:56.856453 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-bpf-maps\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.857590 kubelet[2755]: I0130 05:02:56.856467 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-xtables-lock\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.857590 kubelet[2755]: I0130 05:02:56.856484 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-etc-cni-netd\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.857590 kubelet[2755]: I0130 05:02:56.856501 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-host-proc-sys-net\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.857590 kubelet[2755]: I0130 05:02:56.856520 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-host-proc-sys-kernel\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.857590 kubelet[2755]: I0130 05:02:56.856534 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cilium-cgroup\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.857590 kubelet[2755]: I0130 05:02:56.856548 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cni-path\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.858744 kubelet[2755]: I0130 05:02:56.856565 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cilium-run\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.858744 kubelet[2755]: I0130 05:02:56.856584 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51133790-9284-44a2-b5e7-702b91c05960-clustermesh-secrets\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.858744 kubelet[2755]: I0130 05:02:56.856597 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-hostproc\") pod \"51133790-9284-44a2-b5e7-702b91c05960\" (UID: \"51133790-9284-44a2-b5e7-702b91c05960\") "
Jan 30 05:02:56.858744 kubelet[2755]: I0130 05:02:56.856641 2755 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-lib-modules\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.858744 kubelet[2755]: I0130 05:02:56.857910 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-hostproc" (OuterVolumeSpecName: "hostproc") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.858744 kubelet[2755]: I0130 05:02:56.858009 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.858917 kubelet[2755]: I0130 05:02:56.858036 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.858917 kubelet[2755]: I0130 05:02:56.858060 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.858917 kubelet[2755]: I0130 05:02:56.858079 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.858917 kubelet[2755]: I0130 05:02:56.858100 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.858917 kubelet[2755]: I0130 05:02:56.858123 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.859070 kubelet[2755]: I0130 05:02:56.858145 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cni-path" (OuterVolumeSpecName: "cni-path") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.859070 kubelet[2755]: I0130 05:02:56.858163 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:02:56.861756 kubelet[2755]: I0130 05:02:56.861285 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51133790-9284-44a2-b5e7-702b91c05960-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:02:56.862811 kubelet[2755]: I0130 05:02:56.862771 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51133790-9284-44a2-b5e7-702b91c05960-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:02:56.864397 kubelet[2755]: I0130 05:02:56.864353 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51133790-9284-44a2-b5e7-702b91c05960-kube-api-access-rjgsn" (OuterVolumeSpecName: "kube-api-access-rjgsn") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "kube-api-access-rjgsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:02:56.865521 kubelet[2755]: I0130 05:02:56.865464 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51133790-9284-44a2-b5e7-702b91c05960-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "51133790-9284-44a2-b5e7-702b91c05960" (UID: "51133790-9284-44a2-b5e7-702b91c05960"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:02:56.956945 kubelet[2755]: I0130 05:02:56.956886 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2pjw\" (UniqueName: \"kubernetes.io/projected/602a7ae4-0558-41a6-9042-1ffabd97b3fd-kube-api-access-q2pjw\") pod \"602a7ae4-0558-41a6-9042-1ffabd97b3fd\" (UID: \"602a7ae4-0558-41a6-9042-1ffabd97b3fd\") "
Jan 30 05:02:56.958429 kubelet[2755]: I0130 05:02:56.957242 2755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/602a7ae4-0558-41a6-9042-1ffabd97b3fd-cilium-config-path\") pod \"602a7ae4-0558-41a6-9042-1ffabd97b3fd\" (UID: \"602a7ae4-0558-41a6-9042-1ffabd97b3fd\") "
Jan 30 05:02:56.958429 kubelet[2755]: I0130 05:02:56.957354 2755 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51133790-9284-44a2-b5e7-702b91c05960-cilium-config-path\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958429 kubelet[2755]: I0130 05:02:56.957371 2755 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51133790-9284-44a2-b5e7-702b91c05960-hubble-tls\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958429 kubelet[2755]: I0130 05:02:56.957387 2755 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-bpf-maps\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958429 kubelet[2755]: I0130 05:02:56.957400 2755 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-xtables-lock\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958429 kubelet[2755]: I0130 05:02:56.957413 2755 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-etc-cni-netd\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958429 kubelet[2755]: I0130 05:02:56.957426 2755 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-host-proc-sys-net\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958909 kubelet[2755]: I0130 05:02:56.957441 2755 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-host-proc-sys-kernel\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958909 kubelet[2755]: I0130 05:02:56.957456 2755 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cilium-cgroup\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958909 kubelet[2755]: I0130 05:02:56.957470 2755 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cni-path\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958909 kubelet[2755]: I0130 05:02:56.957482 2755 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51133790-9284-44a2-b5e7-702b91c05960-clustermesh-secrets\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958909 kubelet[2755]: I0130 05:02:56.957495 2755 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-hostproc\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958909 kubelet[2755]: I0130 05:02:56.957509 2755 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51133790-9284-44a2-b5e7-702b91c05960-cilium-run\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.958909 kubelet[2755]: I0130 05:02:56.957524 2755 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rjgsn\" (UniqueName: \"kubernetes.io/projected/51133790-9284-44a2-b5e7-702b91c05960-kube-api-access-rjgsn\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:56.960730 kubelet[2755]: I0130 05:02:56.960666 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602a7ae4-0558-41a6-9042-1ffabd97b3fd-kube-api-access-q2pjw" (OuterVolumeSpecName: "kube-api-access-q2pjw") pod "602a7ae4-0558-41a6-9042-1ffabd97b3fd" (UID: "602a7ae4-0558-41a6-9042-1ffabd97b3fd"). InnerVolumeSpecName "kube-api-access-q2pjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:02:56.962069 kubelet[2755]: I0130 05:02:56.962013 2755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602a7ae4-0558-41a6-9042-1ffabd97b3fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "602a7ae4-0558-41a6-9042-1ffabd97b3fd" (UID: "602a7ae4-0558-41a6-9042-1ffabd97b3fd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:02:57.057904 kubelet[2755]: I0130 05:02:57.057719 2755 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q2pjw\" (UniqueName: \"kubernetes.io/projected/602a7ae4-0558-41a6-9042-1ffabd97b3fd-kube-api-access-q2pjw\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:57.057904 kubelet[2755]: I0130 05:02:57.057769 2755 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/602a7ae4-0558-41a6-9042-1ffabd97b3fd-cilium-config-path\") on node \"ci-4081.3.0-d-9062e890fd\" DevicePath \"\""
Jan 30 05:02:57.387902 kubelet[2755]: I0130 05:02:57.387717 2755 scope.go:117] "RemoveContainer" containerID="3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a"
Jan 30 05:02:57.397182 containerd[1596]: time="2025-01-30T05:02:57.396558167Z" level=info msg="RemoveContainer for \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\""
Jan 30 05:02:57.402370 containerd[1596]: time="2025-01-30T05:02:57.402242813Z" level=info msg="RemoveContainer for \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\" returns successfully"
Jan 30 05:02:57.403760 kubelet[2755]: I0130 05:02:57.403352 2755 scope.go:117] "RemoveContainer" containerID="3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a"
Jan 30 05:02:57.419447 containerd[1596]: time="2025-01-30T05:02:57.406356336Z" level=error msg="ContainerStatus for \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\": not found"
Jan 30 05:02:57.426955 kubelet[2755]: E0130 05:02:57.426905 2755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\": not found" containerID="3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a"
Jan 30 05:02:57.431627 kubelet[2755]: I0130 05:02:57.426954 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a"} err="failed to get container status \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c8797821997be745315383c1c34c9f0a876e6ac758073f8edb0299c2ba2347a\": not found"
Jan 30 05:02:57.431627 kubelet[2755]: I0130 05:02:57.431322 2755 scope.go:117] "RemoveContainer" containerID="12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365"
Jan 30 05:02:57.434595 containerd[1596]: time="2025-01-30T05:02:57.433664552Z" level=info msg="RemoveContainer for \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\""
Jan 30 05:02:57.438074 containerd[1596]: time="2025-01-30T05:02:57.438023814Z" level=info msg="RemoveContainer for \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\" returns successfully"
Jan 30 05:02:57.438710 kubelet[2755]: I0130 05:02:57.438653 2755 scope.go:117] "RemoveContainer" containerID="e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb"
Jan 30 05:02:57.443525 containerd[1596]: time="2025-01-30T05:02:57.443448337Z" level=info msg="RemoveContainer for \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\""
Jan 30 05:02:57.448935 containerd[1596]: time="2025-01-30T05:02:57.448891066Z" level=info msg="RemoveContainer for \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\" returns successfully"
Jan 30 05:02:57.449430 kubelet[2755]: I0130 05:02:57.449192 2755 scope.go:117] "RemoveContainer" containerID="6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60"
Jan 30 05:02:57.450437 containerd[1596]: time="2025-01-30T05:02:57.450398595Z" level=info msg="RemoveContainer for \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\""
Jan 30 05:02:57.452530 containerd[1596]: time="2025-01-30T05:02:57.452489789Z" level=info msg="RemoveContainer for \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\" returns successfully"
Jan 30 05:02:57.453132 kubelet[2755]: I0130 05:02:57.452862 2755 scope.go:117] "RemoveContainer" containerID="09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee"
Jan 30 05:02:57.455303 containerd[1596]: time="2025-01-30T05:02:57.454959816Z" level=info msg="RemoveContainer for \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\""
Jan 30 05:02:57.457466 containerd[1596]: time="2025-01-30T05:02:57.457423578Z" level=info msg="RemoveContainer for \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\" returns successfully"
Jan 30 05:02:57.457953 kubelet[2755]: I0130 05:02:57.457930 2755 scope.go:117] "RemoveContainer" containerID="6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905"
Jan 30 05:02:57.459731 containerd[1596]: time="2025-01-30T05:02:57.459566052Z" level=info msg="RemoveContainer for \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\""
Jan 30 05:02:57.462113 containerd[1596]: time="2025-01-30T05:02:57.462055767Z" level=info msg="RemoveContainer for \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\" returns successfully"
Jan 30 05:02:57.462399 kubelet[2755]: I0130 05:02:57.462335 2755 scope.go:117] "RemoveContainer" containerID="12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365"
Jan 30 05:02:57.462699 containerd[1596]: time="2025-01-30T05:02:57.462646509Z" level=error msg="ContainerStatus for \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\": not found"
Jan 30 05:02:57.462994 kubelet[2755]: E0130 05:02:57.462966 2755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\": not found" containerID="12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365"
Jan 30 05:02:57.463061 kubelet[2755]: I0130 05:02:57.463005 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365"} err="failed to get container status \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\": rpc error: code = NotFound desc = an error occurred when try to find container \"12bb6ebc291313e8c89c8139f82ff7d5030b3e4774fb38bf20ff5fd45199a365\": not found"
Jan 30 05:02:57.463061 kubelet[2755]: I0130 05:02:57.463035 2755 scope.go:117] "RemoveContainer" containerID="e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb"
Jan 30 05:02:57.463300 containerd[1596]: time="2025-01-30T05:02:57.463262175Z" level=error msg="ContainerStatus for \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\": not found"
Jan 30 05:02:57.463470 kubelet[2755]: E0130 05:02:57.463439 2755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\": not found" containerID="e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb"
Jan 30 05:02:57.463517 kubelet[2755]: I0130 05:02:57.463475 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb"} err="failed to get container status \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e87fb890c70e9e76ec55bfcce070032df58529d3802438bef55e70eb55057bfb\": not found"
Jan 30 05:02:57.463517 kubelet[2755]: I0130 05:02:57.463499 2755 scope.go:117] "RemoveContainer" containerID="6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60"
Jan 30 05:02:57.463897 containerd[1596]: time="2025-01-30T05:02:57.463830398Z" level=error msg="ContainerStatus for \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\": not found"
Jan 30 05:02:57.464055 kubelet[2755]: E0130 05:02:57.464035 2755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\": not found" containerID="6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60"
Jan 30 05:02:57.464130 kubelet[2755]: I0130 05:02:57.464060 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60"} err="failed to get container status \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\": rpc error: code = NotFound desc = an error occurred when try to find container \"6906ba90b2db1429092d9f2904b1b181a89563d5856b99bb97cb560c7002cc60\": not found"
Jan 30 05:02:57.464130 kubelet[2755]: I0130 05:02:57.464076 2755 scope.go:117] "RemoveContainer" containerID="09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee"
Jan 30 05:02:57.464302 containerd[1596]: time="2025-01-30T05:02:57.464251627Z" level=error msg="ContainerStatus for \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\": not found"
Jan 30 05:02:57.464403 kubelet[2755]: E0130 05:02:57.464385 2755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\": not found" containerID="09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee"
Jan 30 05:02:57.464452 kubelet[2755]: I0130 05:02:57.464407 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee"} err="failed to get container status \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\": rpc error: code = NotFound desc = an error occurred when try to find container \"09b71924b5173787153809d6152cb4b2f5f5446c5017891206e324a1ac44dbee\": not found"
Jan 30 05:02:57.464452 kubelet[2755]: I0130 05:02:57.464422 2755 scope.go:117] "RemoveContainer" containerID="6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905"
Jan 30 05:02:57.464898 containerd[1596]: time="2025-01-30T05:02:57.464705881Z" level=error msg="ContainerStatus for \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\": not found"
Jan 30 05:02:57.465119 kubelet[2755]: E0130 05:02:57.464946 2755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\": not found" containerID="6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905"
Jan 30 05:02:57.465119 kubelet[2755]: I0130 05:02:57.464964 2755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905"} err="failed to get container status \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\": rpc error: code = NotFound desc = an error occurred when try to find container \"6402b981c865304d80ecbbec56469ee3792ef5f94ef0b90bf488405df31d9905\": not found"
Jan 30 05:02:57.527192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-028b7f72601bf30746f8f120aaa1b415d5cb0cb2cc4ae09415e491fa7fc27914-rootfs.mount: Deactivated successfully.
Jan 30 05:02:57.527476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8944033a507d241f1831374a812b3ba74f41067d4658966513ce243dc991a3c-rootfs.mount: Deactivated successfully.
Jan 30 05:02:57.527664 systemd[1]: var-lib-kubelet-pods-602a7ae4\x2d0558\x2d41a6\x2d9042\x2d1ffabd97b3fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq2pjw.mount: Deactivated successfully.
Jan 30 05:02:57.527984 systemd[1]: var-lib-kubelet-pods-51133790\x2d9284\x2d44a2\x2db5e7\x2d702b91c05960-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjgsn.mount: Deactivated successfully.
Jan 30 05:02:57.528399 systemd[1]: var-lib-kubelet-pods-51133790\x2d9284\x2d44a2\x2db5e7\x2d702b91c05960-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 05:02:57.528550 systemd[1]: var-lib-kubelet-pods-51133790\x2d9284\x2d44a2\x2db5e7\x2d702b91c05960-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 05:02:58.428051 sshd[4368]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:58.436691 systemd[1]: Started sshd@24-146.190.174.183:22-147.75.109.163:56094.service - OpenSSH per-connection server daemon (147.75.109.163:56094).
Jan 30 05:02:58.437356 systemd[1]: sshd@23-146.190.174.183:22-147.75.109.163:52546.service: Deactivated successfully.
Jan 30 05:02:58.443301 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 05:02:58.447796 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit.
Jan 30 05:02:58.453954 systemd-logind[1574]: Removed session 24.
Jan 30 05:02:58.494751 sshd[4535]: Accepted publickey for core from 147.75.109.163 port 56094 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:58.496800 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:58.503383 systemd-logind[1574]: New session 25 of user core.
Jan 30 05:02:58.511979 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 05:02:58.976782 kubelet[2755]: I0130 05:02:58.976730 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51133790-9284-44a2-b5e7-702b91c05960" path="/var/lib/kubelet/pods/51133790-9284-44a2-b5e7-702b91c05960/volumes"
Jan 30 05:02:58.978119 kubelet[2755]: I0130 05:02:58.978007 2755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602a7ae4-0558-41a6-9042-1ffabd97b3fd" path="/var/lib/kubelet/pods/602a7ae4-0558-41a6-9042-1ffabd97b3fd/volumes"
Jan 30 05:02:59.216924 sshd[4535]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:59.234809 systemd[1]: Started sshd@25-146.190.174.183:22-147.75.109.163:56106.service - OpenSSH per-connection server daemon (147.75.109.163:56106).
Jan 30 05:02:59.240994 systemd[1]: sshd@24-146.190.174.183:22-147.75.109.163:56094.service: Deactivated successfully.
Jan 30 05:02:59.255466 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 05:02:59.276384 systemd-logind[1574]: Session 25 logged out. Waiting for processes to exit.
Jan 30 05:02:59.282582 systemd-logind[1574]: Removed session 25.
Jan 30 05:02:59.288410 kubelet[2755]: I0130 05:02:59.285447 2755 topology_manager.go:215] "Topology Admit Handler" podUID="194c17f7-42ba-43bc-8ce6-00d8077b4b3c" podNamespace="kube-system" podName="cilium-dcldm"
Jan 30 05:02:59.300984 kubelet[2755]: E0130 05:02:59.299472 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51133790-9284-44a2-b5e7-702b91c05960" containerName="mount-cgroup"
Jan 30 05:02:59.303721 kubelet[2755]: E0130 05:02:59.301918 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51133790-9284-44a2-b5e7-702b91c05960" containerName="mount-bpf-fs"
Jan 30 05:02:59.303721 kubelet[2755]: E0130 05:02:59.301952 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="602a7ae4-0558-41a6-9042-1ffabd97b3fd" containerName="cilium-operator"
Jan 30 05:02:59.303721 kubelet[2755]: E0130 05:02:59.301962 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51133790-9284-44a2-b5e7-702b91c05960" containerName="apply-sysctl-overwrites"
Jan 30 05:02:59.303721 kubelet[2755]: E0130 05:02:59.301969 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51133790-9284-44a2-b5e7-702b91c05960" containerName="clean-cilium-state"
Jan 30 05:02:59.303721 kubelet[2755]: E0130 05:02:59.301976 2755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="51133790-9284-44a2-b5e7-702b91c05960" containerName="cilium-agent"
Jan 30 05:02:59.303721 kubelet[2755]: I0130 05:02:59.302033 2755 memory_manager.go:354] "RemoveStaleState removing state" podUID="51133790-9284-44a2-b5e7-702b91c05960" containerName="cilium-agent"
Jan 30 05:02:59.303721 kubelet[2755]: I0130 05:02:59.302041 2755 memory_manager.go:354] "RemoveStaleState removing state" podUID="602a7ae4-0558-41a6-9042-1ffabd97b3fd" containerName="cilium-operator"
Jan 30 05:02:59.353707 sshd[4547]: Accepted publickey for core from 147.75.109.163 port 56106 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:59.359033 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:59.372806 kubelet[2755]: I0130 05:02:59.371665 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-clustermesh-secrets\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.372806 kubelet[2755]: I0130 05:02:59.371742 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-host-proc-sys-kernel\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.372806 kubelet[2755]: I0130 05:02:59.371774 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9glm\" (UniqueName: \"kubernetes.io/projected/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-kube-api-access-q9glm\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.372806 kubelet[2755]: I0130 05:02:59.371819 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-cilium-run\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.372806 kubelet[2755]: I0130 05:02:59.371845 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-host-proc-sys-net\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373142 kubelet[2755]: I0130 05:02:59.371871 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-cilium-config-path\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373142 kubelet[2755]: I0130 05:02:59.371897 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-hubble-tls\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373142 kubelet[2755]: I0130 05:02:59.371925 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-etc-cni-netd\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373142 kubelet[2755]: I0130 05:02:59.371956 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-bpf-maps\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373142 kubelet[2755]: I0130 05:02:59.371980 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-cni-path\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373142 kubelet[2755]: I0130 05:02:59.372011 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-cilium-cgroup\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373297 kubelet[2755]: I0130 05:02:59.372038 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-lib-modules\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373297 kubelet[2755]: I0130 05:02:59.372066 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-cilium-ipsec-secrets\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373297 kubelet[2755]: I0130 05:02:59.372091 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-hostproc\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.373297 kubelet[2755]: I0130 05:02:59.372116 2755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/194c17f7-42ba-43bc-8ce6-00d8077b4b3c-xtables-lock\") pod \"cilium-dcldm\" (UID: \"194c17f7-42ba-43bc-8ce6-00d8077b4b3c\") " pod="kube-system/cilium-dcldm"
Jan 30 05:02:59.380028 systemd-logind[1574]: New session 26 of user core.
Jan 30 05:02:59.388334 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 05:02:59.455401 sshd[4547]: pam_unix(sshd:session): session closed for user core
Jan 30 05:02:59.464538 systemd[1]: Started sshd@26-146.190.174.183:22-147.75.109.163:56110.service - OpenSSH per-connection server daemon (147.75.109.163:56110).
Jan 30 05:02:59.466023 systemd[1]: sshd@25-146.190.174.183:22-147.75.109.163:56106.service: Deactivated successfully.
Jan 30 05:02:59.470394 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 05:02:59.475395 systemd-logind[1574]: Session 26 logged out. Waiting for processes to exit.
Jan 30 05:02:59.497837 systemd-logind[1574]: Removed session 26.
Jan 30 05:02:59.555835 sshd[4555]: Accepted publickey for core from 147.75.109.163 port 56110 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:02:59.557257 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:02:59.563570 systemd-logind[1574]: New session 27 of user core.
Jan 30 05:02:59.569328 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 05:02:59.662510 kubelet[2755]: E0130 05:02:59.662456 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:59.665211 containerd[1596]: time="2025-01-30T05:02:59.663631080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcldm,Uid:194c17f7-42ba-43bc-8ce6-00d8077b4b3c,Namespace:kube-system,Attempt:0,}" Jan 30 05:02:59.704987 containerd[1596]: time="2025-01-30T05:02:59.703853360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:59.704987 containerd[1596]: time="2025-01-30T05:02:59.704868610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:59.704987 containerd[1596]: time="2025-01-30T05:02:59.704887460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:59.706361 containerd[1596]: time="2025-01-30T05:02:59.705196945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:59.796167 containerd[1596]: time="2025-01-30T05:02:59.796100184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dcldm,Uid:194c17f7-42ba-43bc-8ce6-00d8077b4b3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\"" Jan 30 05:02:59.797703 kubelet[2755]: E0130 05:02:59.797534 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:59.807734 containerd[1596]: time="2025-01-30T05:02:59.807358292Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 05:02:59.824676 containerd[1596]: time="2025-01-30T05:02:59.824614319Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c0dec24797dd9f4bfde66e13bda79ade0ee2212e581ac25290b502897cac07a3\"" Jan 30 05:02:59.826772 containerd[1596]: time="2025-01-30T05:02:59.825957949Z" level=info msg="StartContainer for \"c0dec24797dd9f4bfde66e13bda79ade0ee2212e581ac25290b502897cac07a3\"" Jan 30 05:02:59.955120 containerd[1596]: time="2025-01-30T05:02:59.955059840Z" level=info msg="StartContainer for \"c0dec24797dd9f4bfde66e13bda79ade0ee2212e581ac25290b502897cac07a3\" returns successfully" Jan 30 05:03:00.000517 containerd[1596]: time="2025-01-30T05:03:00.000454526Z" level=info msg="shim disconnected" id=c0dec24797dd9f4bfde66e13bda79ade0ee2212e581ac25290b502897cac07a3 namespace=k8s.io Jan 30 05:03:00.000517 containerd[1596]: time="2025-01-30T05:03:00.000510268Z" level=warning msg="cleaning up after shim disconnected" id=c0dec24797dd9f4bfde66e13bda79ade0ee2212e581ac25290b502897cac07a3 namespace=k8s.io Jan 30 05:03:00.001466 containerd[1596]: time="2025-01-30T05:03:00.000519044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:00.021255 containerd[1596]: 
time="2025-01-30T05:03:00.020930491Z" level=warning msg="cleanup warnings time=\"2025-01-30T05:03:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 05:03:00.191290 kubelet[2755]: E0130 05:03:00.190953 2755 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 05:03:00.428138 kubelet[2755]: E0130 05:03:00.428075 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:00.436490 containerd[1596]: time="2025-01-30T05:03:00.436396514Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 05:03:00.456015 containerd[1596]: time="2025-01-30T05:03:00.453711743Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5334d3fcee3a13ef844eacad62105c3b303be901b06ba02e3304e8344cd3734c\"" Jan 30 05:03:00.459028 containerd[1596]: time="2025-01-30T05:03:00.458052355Z" level=info msg="StartContainer for \"5334d3fcee3a13ef844eacad62105c3b303be901b06ba02e3304e8344cd3734c\"" Jan 30 05:03:00.537383 containerd[1596]: time="2025-01-30T05:03:00.537213318Z" level=info msg="StartContainer for \"5334d3fcee3a13ef844eacad62105c3b303be901b06ba02e3304e8344cd3734c\" returns successfully" Jan 30 05:03:00.569885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5334d3fcee3a13ef844eacad62105c3b303be901b06ba02e3304e8344cd3734c-rootfs.mount: Deactivated successfully. 
Jan 30 05:03:00.572022 containerd[1596]: time="2025-01-30T05:03:00.571955949Z" level=info msg="shim disconnected" id=5334d3fcee3a13ef844eacad62105c3b303be901b06ba02e3304e8344cd3734c namespace=k8s.io Jan 30 05:03:00.572022 containerd[1596]: time="2025-01-30T05:03:00.572012103Z" level=warning msg="cleaning up after shim disconnected" id=5334d3fcee3a13ef844eacad62105c3b303be901b06ba02e3304e8344cd3734c namespace=k8s.io Jan 30 05:03:00.572022 containerd[1596]: time="2025-01-30T05:03:00.572021510Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:01.433221 kubelet[2755]: E0130 05:03:01.431913 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:01.438760 containerd[1596]: time="2025-01-30T05:03:01.437676000Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 05:03:01.466809 containerd[1596]: time="2025-01-30T05:03:01.466427764Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69022ac4bca38cf9ff262a5ecb4a3722c86f2e645c1850363f6784c0326d88f0\"" Jan 30 05:03:01.468851 containerd[1596]: time="2025-01-30T05:03:01.468309445Z" level=info msg="StartContainer for \"69022ac4bca38cf9ff262a5ecb4a3722c86f2e645c1850363f6784c0326d88f0\"" Jan 30 05:03:01.579601 containerd[1596]: time="2025-01-30T05:03:01.579278320Z" level=info msg="StartContainer for \"69022ac4bca38cf9ff262a5ecb4a3722c86f2e645c1850363f6784c0326d88f0\" returns successfully" Jan 30 05:03:01.625530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69022ac4bca38cf9ff262a5ecb4a3722c86f2e645c1850363f6784c0326d88f0-rootfs.mount: Deactivated successfully. 
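The "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" triplet after mount-cgroup and apply-sysctl-overwrites is expected, not a failure: these are short-lived init containers, so each task runs to completion and containerd tears down its per-container shim (the earlier "failed to remove runc container ... exit status 255" cleanup warning is a benign race in that teardown). The run-to-completion pattern looks roughly like this with the containerd Go client (socket path, namespace, and container ID assumed for illustration):

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Load an existing container (ID assumed for illustration).
	container, err := client.LoadContainer(ctx, "c0dec24797dd")
	if err != nil {
		panic(err)
	}
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		panic(err)
	}
	exitCh, err := task.Wait(ctx) // subscribe before Start to avoid a race
	if err != nil {
		panic(err)
	}
	if err := task.Start(ctx); err != nil {
		panic(err)
	}
	status := <-exitCh // task exits; the shim can now be torn down
	code, _, _ := status.Result()
	fmt.Println("exit code:", code)
	task.Delete(ctx) // removes the task and lets the shim shut down
}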
Jan 30 05:03:01.629894 containerd[1596]: time="2025-01-30T05:03:01.629780740Z" level=info msg="shim disconnected" id=69022ac4bca38cf9ff262a5ecb4a3722c86f2e645c1850363f6784c0326d88f0 namespace=k8s.io Jan 30 05:03:01.630317 containerd[1596]: time="2025-01-30T05:03:01.630129946Z" level=warning msg="cleaning up after shim disconnected" id=69022ac4bca38cf9ff262a5ecb4a3722c86f2e645c1850363f6784c0326d88f0 namespace=k8s.io Jan 30 05:03:01.630317 containerd[1596]: time="2025-01-30T05:03:01.630164060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:02.437886 kubelet[2755]: E0130 05:03:02.437704 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:02.451598 containerd[1596]: time="2025-01-30T05:03:02.449902301Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 05:03:02.480047 containerd[1596]: time="2025-01-30T05:03:02.478615471Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0600eb5da699665cb415beb7c841589a06e6cc77d854838cb3b5e9ad02c291fa\"" Jan 30 05:03:02.481839 containerd[1596]: time="2025-01-30T05:03:02.481805178Z" level=info msg="StartContainer for \"0600eb5da699665cb415beb7c841589a06e6cc77d854838cb3b5e9ad02c291fa\"" Jan 30 05:03:02.573685 containerd[1596]: time="2025-01-30T05:03:02.573346014Z" level=info msg="StartContainer for \"0600eb5da699665cb415beb7c841589a06e6cc77d854838cb3b5e9ad02c291fa\" returns successfully" Jan 30 05:03:02.602474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0600eb5da699665cb415beb7c841589a06e6cc77d854838cb3b5e9ad02c291fa-rootfs.mount: Deactivated successfully. 
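mount-bpf-fs, the third init container in the sequence, makes sure a BPF filesystem is mounted at /sys/fs/bpf (the bpf-maps hostPath above) so Cilium's BPF maps survive agent restarts. An idempotent version of that mount in Go might look like the following sketch (not Cilium's actual implementation):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf"

	// Skip the mount if a bpffs is already there (idempotence check).
	var fs unix.Statfs_t
	if err := unix.Statfs(target, &fs); err == nil && fs.Type == unix.BPF_FS_MAGIC {
		fmt.Println("bpffs already mounted at", target)
		return
	}
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		panic(err)
	}
	fmt.Println("mounted bpffs at", target)
}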
Jan 30 05:03:02.604087 containerd[1596]: time="2025-01-30T05:03:02.604013469Z" level=info msg="shim disconnected" id=0600eb5da699665cb415beb7c841589a06e6cc77d854838cb3b5e9ad02c291fa namespace=k8s.io Jan 30 05:03:02.604087 containerd[1596]: time="2025-01-30T05:03:02.604084418Z" level=warning msg="cleaning up after shim disconnected" id=0600eb5da699665cb415beb7c841589a06e6cc77d854838cb3b5e9ad02c291fa namespace=k8s.io Jan 30 05:03:02.604286 containerd[1596]: time="2025-01-30T05:03:02.604098491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:03.443564 kubelet[2755]: E0130 05:03:03.443524 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:03.447614 containerd[1596]: time="2025-01-30T05:03:03.447109573Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 05:03:03.469777 containerd[1596]: time="2025-01-30T05:03:03.469246272Z" level=info msg="CreateContainer within sandbox \"938c3ecc633cf8644b370babdb2e9f85fa0f03341b63959af11ad78edde52e5b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1319a17fabf109bcb247661f8116b159aaf37b071818597ebc0264b2cce1799d\"" Jan 30 05:03:03.474289 containerd[1596]: time="2025-01-30T05:03:03.472879502Z" level=info msg="StartContainer for \"1319a17fabf109bcb247661f8116b159aaf37b071818597ebc0264b2cce1799d\"" Jan 30 05:03:03.577292 containerd[1596]: time="2025-01-30T05:03:03.576968490Z" level=info msg="StartContainer for \"1319a17fabf109bcb247661f8116b159aaf37b071818597ebc0264b2cce1799d\" returns successfully" Jan 30 05:03:04.245910 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 05:03:04.454990 kubelet[2755]: E0130 05:03:04.454315 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:04.486495 kubelet[2755]: I0130 05:03:04.485600 2755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dcldm" podStartSLOduration=5.485570546 podStartE2EDuration="5.485570546s" podCreationTimestamp="2025-01-30 05:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:03:04.481459212 +0000 UTC m=+99.780890819" watchObservedRunningTime="2025-01-30 05:03:04.485570546 +0000 UTC m=+99.785002202" Jan 30 05:03:05.664597 kubelet[2755]: E0130 05:03:05.664554 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:06.223525 systemd[1]: run-containerd-runc-k8s.io-1319a17fabf109bcb247661f8116b159aaf37b071818597ebc0264b2cce1799d-runc.mn0zEI.mount: Deactivated successfully. 
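Two things are worth decoding in the lines that follow. The kernel's "alg: No test for seqiv(rfc4106(gcm(aes)))" is a harmless crypto self-test notice triggered as the agent loads the IPsec algorithms implied by the cilium-ipsec-secrets volume. And the pod_startup_latency_tracker line reports podStartSLOduration equal to podStartE2EDuration (5.485570546s) because both image-pull timestamps are the zero time: the images were already on the node, so no pull time gets subtracted from creation-to-running. The arithmetic, using the timestamps from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2025-01-30T05:02:59Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-30T05:03:04.485570546Z")
	pull := time.Duration(0) // zero-time pull stamps: images were cached

	slo := running.Sub(created) - pull
	fmt.Println(slo) // 5.485570546s
}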
Jan 30 05:03:06.382054 update_engine[1578]: I20250130 05:03:06.381968 1578 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 05:03:06.382754 update_engine[1578]: I20250130 05:03:06.382553 1578 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 05:03:06.388532 update_engine[1578]: I20250130 05:03:06.388432 1578 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 05:03:06.390720 update_engine[1578]: I20250130 05:03:06.389847 1578 omaha_request_params.cc:62] Current group set to lts Jan 30 05:03:06.390720 update_engine[1578]: I20250130 05:03:06.390005 1578 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 05:03:06.390720 update_engine[1578]: I20250130 05:03:06.390017 1578 update_attempter.cc:643] Scheduling an action processor start. Jan 30 05:03:06.390720 update_engine[1578]: I20250130 05:03:06.390038 1578 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 05:03:06.390720 update_engine[1578]: I20250130 05:03:06.390097 1578 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 05:03:06.390720 update_engine[1578]: I20250130 05:03:06.390187 1578 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 05:03:06.390720 update_engine[1578]: I20250130 05:03:06.390195 1578 omaha_request_action.cc:272] Request: [Omaha request XML body not preserved in this extract] Jan 30 05:03:06.390720 update_engine[1578]: I20250130 05:03:06.390203 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:03:06.401565 update_engine[1578]: I20250130 05:03:06.401077 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:03:06.401565 update_engine[1578]: I20250130 05:03:06.401443 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 05:03:06.403726 update_engine[1578]: E20250130 05:03:06.403544 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:03:06.403726 update_engine[1578]: I20250130 05:03:06.403651 1578 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 05:03:06.417262 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 05:03:08.065532 systemd-networkd[1227]: lxc_health: Link UP Jan 30 05:03:08.072822 systemd-networkd[1227]: lxc_health: Gained carrier Jan 30 05:03:09.666708 kubelet[2755]: E0130 05:03:09.665661 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:09.797977 systemd-networkd[1227]: lxc_health: Gained IPv6LL Jan 30 05:03:10.469455 kubelet[2755]: E0130 05:03:10.469398 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:11.478859 kubelet[2755]: E0130 05:03:11.475286 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:11.975287 kubelet[2755]: E0130 05:03:11.973709 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:15.199559 systemd[1]: run-containerd-runc-k8s.io-1319a17fabf109bcb247661f8116b159aaf37b071818597ebc0264b2cce1799d-runc.EVvbws.mount: Deactivated successfully. Jan 30 05:03:15.267549 sshd[4555]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:15.278000 systemd-logind[1574]: Session 27 logged out. Waiting for processes to exit. Jan 30 05:03:15.279460 systemd[1]: sshd@26-146.190.174.183:22-147.75.109.163:56110.service: Deactivated successfully. Jan 30 05:03:15.284626 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 05:03:15.287195 systemd-logind[1574]: Removed session 27. Jan 30 05:03:15.974247 kubelet[2755]: E0130 05:03:15.974151 2755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:16.321839 update_engine[1578]: I20250130 05:03:16.312659 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:03:16.321839 update_engine[1578]: I20250130 05:03:16.313155 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:03:16.321839 update_engine[1578]: I20250130 05:03:16.313505 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 05:03:16.321839 update_engine[1578]: E20250130 05:03:16.314383 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:03:16.321839 update_engine[1578]: I20250130 05:03:16.314467 1578 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
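The update_engine errors here are deliberate, not a fault: "Posting an Omaha request to disabled" means the update server URL is literally the string "disabled" (on Flatcar, setting SERVER=disabled in /etc/flatcar/update.conf is the conventional way to switch automatic updates off), so libcurl's DNS resolution fails by design and the fetcher simply keeps retrying (retry 1 at 05:03:06, retry 2 at 05:03:16). The lxc_health link coming up in between is Cilium creating its health-check interface, a sign the agent finished initializing. The shape of that fetch-and-retry loop, as a hedged Go sketch (the URL and the roughly 10-second spacing are taken from the log; the retry policy is assumed, not lifted from update_engine's source):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	const maxRetries = 3

	for attempt := 1; attempt <= maxRetries; attempt++ {
		// "disabled" is not a resolvable host, so this always fails,
		// mirroring what update_engine logs above.
		resp, err := client.Post("http://disabled/update", "text/xml", nil)
		if err == nil {
			resp.Body.Close()
			fmt.Println("got response:", resp.Status)
			return
		}
		fmt.Printf("No HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(10 * time.Second) // log shows ~10s between attempts
	}
}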