Jan 30 05:04:48.994928 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 05:04:48.997041 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:04:48.997065 kernel: BIOS-provided physical RAM map:
Jan 30 05:04:48.997075 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 05:04:48.997082 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 05:04:48.997089 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 05:04:48.997097 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 30 05:04:48.997104 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 30 05:04:48.997111 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 05:04:48.997139 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 05:04:48.997147 kernel: NX (Execute Disable) protection: active
Jan 30 05:04:48.997154 kernel: APIC: Static calls initialized
Jan 30 05:04:48.997168 kernel: SMBIOS 2.8 present.
Jan 30 05:04:48.997176 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 05:04:48.997184 kernel: Hypervisor detected: KVM
Jan 30 05:04:48.997196 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 05:04:48.997206 kernel: kvm-clock: using sched offset of 3428267392 cycles
Jan 30 05:04:48.997215 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 05:04:48.997224 kernel: tsc: Detected 2494.172 MHz processor
Jan 30 05:04:48.997232 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 05:04:48.997241 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 05:04:48.997249 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 30 05:04:48.997257 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 05:04:48.997265 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 05:04:48.997276 kernel: ACPI: Early table checksum verification disabled
Jan 30 05:04:48.997284 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 30 05:04:48.997292 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:04:48.997301 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:04:48.997309 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:04:48.997317 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 05:04:48.997325 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:04:48.997333 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:04:48.997341 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:04:48.997352 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:04:48.997360 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 05:04:48.997368 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 05:04:48.997376 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 05:04:48.997384 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 05:04:48.997392 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 05:04:48.997400 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 05:04:48.997414 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 05:04:48.997423 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 05:04:48.997431 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 05:04:48.997440 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 05:04:48.997448 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 05:04:48.997459 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 30 05:04:48.997467 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 30 05:04:48.997479 kernel: Zone ranges:
Jan 30 05:04:48.997487 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 05:04:48.997495 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 30 05:04:48.997504 kernel: Normal empty
Jan 30 05:04:48.997512 kernel: Movable zone start for each node
Jan 30 05:04:48.997521 kernel: Early memory node ranges
Jan 30 05:04:48.997529 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 05:04:48.997539 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 30 05:04:48.997553 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 30 05:04:48.997568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 05:04:48.997579 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 05:04:48.997593 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 30 05:04:48.997604 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 05:04:48.997615 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 05:04:48.997627 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 05:04:48.997638 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 05:04:48.997652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 05:04:48.997661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 05:04:48.997673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 05:04:48.997681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 05:04:48.997690 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 05:04:48.997698 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 05:04:48.997707 kernel: TSC deadline timer available
Jan 30 05:04:48.997715 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 05:04:48.997724 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 05:04:48.997732 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 05:04:48.997743 kernel: Booting paravirtualized kernel on KVM
Jan 30 05:04:48.997751 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 05:04:48.997763 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 05:04:48.997772 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 05:04:48.997780 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 05:04:48.997789 kernel: pcpu-alloc: [0] 0 1
Jan 30 05:04:48.997797 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 05:04:48.997807 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:04:48.997816 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 05:04:48.997833 kernel: random: crng init done
Jan 30 05:04:48.997846 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 05:04:48.997858 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 05:04:48.997869 kernel: Fallback order for Node 0: 0
Jan 30 05:04:48.997881 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 30 05:04:48.997893 kernel: Policy zone: DMA32
Jan 30 05:04:48.997906 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 05:04:48.997919 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Jan 30 05:04:48.997931 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 05:04:48.997979 kernel: Kernel/User page tables isolation: enabled
Jan 30 05:04:48.997992 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 05:04:48.998000 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 05:04:48.998009 kernel: Dynamic Preempt: voluntary
Jan 30 05:04:48.998017 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 05:04:48.998027 kernel: rcu: RCU event tracing is enabled.
Jan 30 05:04:48.998035 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 05:04:48.998044 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 05:04:48.998053 kernel: Rude variant of Tasks RCU enabled.
Jan 30 05:04:48.998065 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 05:04:48.998074 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 05:04:48.998083 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 05:04:48.998092 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 05:04:48.998100 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 05:04:48.998113 kernel: Console: colour VGA+ 80x25
Jan 30 05:04:48.998121 kernel: printk: console [tty0] enabled
Jan 30 05:04:48.998130 kernel: printk: console [ttyS0] enabled
Jan 30 05:04:48.998138 kernel: ACPI: Core revision 20230628
Jan 30 05:04:48.998147 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 05:04:48.998158 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 05:04:48.998167 kernel: x2apic enabled
Jan 30 05:04:48.998176 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 05:04:48.998184 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 05:04:48.998193 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f3b868b6c, max_idle_ns: 440795251212 ns
Jan 30 05:04:48.998202 kernel: Calibrating delay loop (skipped) preset value.. 4988.34 BogoMIPS (lpj=2494172)
Jan 30 05:04:48.998210 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 05:04:48.998219 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 05:04:48.998258 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 05:04:48.998271 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 05:04:48.998284 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 05:04:48.998300 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 05:04:48.998314 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 05:04:48.998327 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 05:04:48.998336 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 05:04:48.998345 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 05:04:48.998354 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 05:04:48.998369 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 05:04:48.998378 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 05:04:48.998387 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 05:04:48.998396 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 05:04:48.998405 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 05:04:48.998414 kernel: Freeing SMP alternatives memory: 32K
Jan 30 05:04:48.998423 kernel: pid_max: default: 32768 minimum: 301
Jan 30 05:04:48.998432 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 05:04:48.998444 kernel: landlock: Up and running.
Jan 30 05:04:48.998453 kernel: SELinux: Initializing.
Jan 30 05:04:48.998462 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:04:48.998471 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:04:48.998480 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 05:04:48.998489 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:04:48.998498 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:04:48.998507 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:04:48.998519 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 30 05:04:48.998528 kernel: signal: max sigframe size: 1776
Jan 30 05:04:48.998537 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 05:04:48.998547 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 05:04:48.998556 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 05:04:48.998565 kernel: smp: Bringing up secondary CPUs ...
Jan 30 05:04:48.998574 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 05:04:48.998583 kernel: .... node #0, CPUs: #1
Jan 30 05:04:48.998592 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 05:04:48.998603 kernel: smpboot: Max logical packages: 1
Jan 30 05:04:48.998615 kernel: smpboot: Total of 2 processors activated (9976.68 BogoMIPS)
Jan 30 05:04:48.998624 kernel: devtmpfs: initialized
Jan 30 05:04:48.998633 kernel: x86/mm: Memory block size: 128MB
Jan 30 05:04:48.998642 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 05:04:48.998651 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 05:04:48.998660 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 05:04:48.998669 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 05:04:48.998678 kernel: audit: initializing netlink subsys (disabled)
Jan 30 05:04:48.998687 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 05:04:48.998699 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 05:04:48.998708 kernel: audit: type=2000 audit(1738213487.894:1): state=initialized audit_enabled=0 res=1
Jan 30 05:04:48.998717 kernel: cpuidle: using governor menu
Jan 30 05:04:48.998725 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 05:04:48.998734 kernel: dca service started, version 1.12.1
Jan 30 05:04:48.998743 kernel: PCI: Using configuration type 1 for base access
Jan 30 05:04:48.998752 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 05:04:48.998761 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 05:04:48.998770 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 05:04:48.998782 kernel: ACPI: Added _OSI(Module Device)
Jan 30 05:04:48.998791 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 05:04:48.998799 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 05:04:48.998808 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 05:04:48.998817 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 05:04:48.998826 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 05:04:48.998835 kernel: ACPI: Interpreter enabled
Jan 30 05:04:48.998860 kernel: ACPI: PM: (supports S0 S5)
Jan 30 05:04:48.998869 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 05:04:48.998880 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 05:04:48.998889 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 05:04:48.998898 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 05:04:48.998907 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 05:04:49.000124 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 05:04:49.000295 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 05:04:49.000454 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 05:04:49.000481 kernel: acpiphp: Slot [3] registered
Jan 30 05:04:49.000493 kernel: acpiphp: Slot [4] registered
Jan 30 05:04:49.000506 kernel: acpiphp: Slot [5] registered
Jan 30 05:04:49.000519 kernel: acpiphp: Slot [6] registered
Jan 30 05:04:49.000532 kernel: acpiphp: Slot [7] registered
Jan 30 05:04:49.000545 kernel: acpiphp: Slot [8] registered
Jan 30 05:04:49.000559 kernel: acpiphp: Slot [9] registered
Jan 30 05:04:49.000572 kernel: acpiphp: Slot [10] registered
Jan 30 05:04:49.000585 kernel: acpiphp: Slot [11] registered
Jan 30 05:04:49.000598 kernel: acpiphp: Slot [12] registered
Jan 30 05:04:49.000611 kernel: acpiphp: Slot [13] registered
Jan 30 05:04:49.000626 kernel: acpiphp: Slot [14] registered
Jan 30 05:04:49.000638 kernel: acpiphp: Slot [15] registered
Jan 30 05:04:49.000647 kernel: acpiphp: Slot [16] registered
Jan 30 05:04:49.000674 kernel: acpiphp: Slot [17] registered
Jan 30 05:04:49.000683 kernel: acpiphp: Slot [18] registered
Jan 30 05:04:49.000692 kernel: acpiphp: Slot [19] registered
Jan 30 05:04:49.000701 kernel: acpiphp: Slot [20] registered
Jan 30 05:04:49.000710 kernel: acpiphp: Slot [21] registered
Jan 30 05:04:49.000723 kernel: acpiphp: Slot [22] registered
Jan 30 05:04:49.000732 kernel: acpiphp: Slot [23] registered
Jan 30 05:04:49.000741 kernel: acpiphp: Slot [24] registered
Jan 30 05:04:49.000749 kernel: acpiphp: Slot [25] registered
Jan 30 05:04:49.000758 kernel: acpiphp: Slot [26] registered
Jan 30 05:04:49.000767 kernel: acpiphp: Slot [27] registered
Jan 30 05:04:49.000776 kernel: acpiphp: Slot [28] registered
Jan 30 05:04:49.000785 kernel: acpiphp: Slot [29] registered
Jan 30 05:04:49.000794 kernel: acpiphp: Slot [30] registered
Jan 30 05:04:49.000806 kernel: acpiphp: Slot [31] registered
Jan 30 05:04:49.000818 kernel: PCI host bridge to bus 0000:00
Jan 30 05:04:49.001029 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 05:04:49.001145 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 05:04:49.001278 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 05:04:49.001379 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 05:04:49.001463 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 05:04:49.001547 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 05:04:49.001698 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 05:04:49.001847 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 05:04:49.001972 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 05:04:49.002073 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 05:04:49.002168 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 05:04:49.002291 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 05:04:49.002405 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 05:04:49.002507 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 05:04:49.002641 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 05:04:49.002784 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 05:04:49.002960 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 05:04:49.003106 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 05:04:49.003255 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 05:04:49.003414 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 05:04:49.003530 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 05:04:49.003626 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 05:04:49.003721 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 05:04:49.003815 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 05:04:49.003908 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 05:04:49.004034 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 05:04:49.004169 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 05:04:49.004272 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 05:04:49.004365 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 05:04:49.004487 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 05:04:49.004593 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 05:04:49.004697 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 05:04:49.004791 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 05:04:49.004894 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 05:04:49.005007 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 05:04:49.005110 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 05:04:49.005243 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 05:04:49.005406 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 05:04:49.005514 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 05:04:49.005612 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 05:04:49.005731 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 05:04:49.005841 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 05:04:49.005956 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 05:04:49.006054 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 05:04:49.006158 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 05:04:49.006274 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 05:04:49.006375 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 05:04:49.006470 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 05:04:49.006482 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 05:04:49.006492 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 05:04:49.006507 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 05:04:49.006521 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 05:04:49.006539 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 05:04:49.006551 kernel: iommu: Default domain type: Translated
Jan 30 05:04:49.006565 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 05:04:49.006581 kernel: PCI: Using ACPI for IRQ routing
Jan 30 05:04:49.006594 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 05:04:49.006607 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 05:04:49.006620 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 30 05:04:49.006782 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 05:04:49.006882 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 05:04:49.007002 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 05:04:49.007014 kernel: vgaarb: loaded
Jan 30 05:04:49.007027 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 05:04:49.007044 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 05:04:49.007056 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 05:04:49.007067 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 05:04:49.007081 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 05:04:49.007095 kernel: pnp: PnP ACPI init
Jan 30 05:04:49.007108 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 05:04:49.007129 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 05:04:49.007143 kernel: NET: Registered PF_INET protocol family
Jan 30 05:04:49.007157 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 05:04:49.007171 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 05:04:49.007183 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 05:04:49.007197 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 05:04:49.007211 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 05:04:49.007223 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 05:04:49.007232 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:04:49.007245 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:04:49.007254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 05:04:49.007264 kernel: NET: Registered PF_XDP protocol family
Jan 30 05:04:49.007372 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 05:04:49.007459 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 05:04:49.007544 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 05:04:49.007678 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 05:04:49.007813 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 05:04:49.007925 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 05:04:49.008051 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 05:04:49.008065 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 05:04:49.008164 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 42526 usecs
Jan 30 05:04:49.008177 kernel: PCI: CLS 0 bytes, default 64
Jan 30 05:04:49.008186 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 05:04:49.008196 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f3b868b6c, max_idle_ns: 440795251212 ns
Jan 30 05:04:49.008205 kernel: Initialise system trusted keyrings
Jan 30 05:04:49.008219 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 05:04:49.008229 kernel: Key type asymmetric registered
Jan 30 05:04:49.008238 kernel: Asymmetric key parser 'x509' registered
Jan 30 05:04:49.008247 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 05:04:49.008262 kernel: io scheduler mq-deadline registered
Jan 30 05:04:49.008275 kernel: io scheduler kyber registered
Jan 30 05:04:49.008300 kernel: io scheduler bfq registered
Jan 30 05:04:49.008312 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 05:04:49.008324 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 05:04:49.008342 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 05:04:49.008357 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 05:04:49.008367 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 05:04:49.008376 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 05:04:49.008385 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 05:04:49.008394 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 05:04:49.008404 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 05:04:49.008531 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 05:04:49.008545 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 05:04:49.008637 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 05:04:49.008756 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T05:04:48 UTC (1738213488)
Jan 30 05:04:49.008869 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 05:04:49.008882 kernel: intel_pstate: CPU model not supported
Jan 30 05:04:49.008891 kernel: NET: Registered PF_INET6 protocol family
Jan 30 05:04:49.008900 kernel: Segment Routing with IPv6
Jan 30 05:04:49.008909 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 05:04:49.008918 kernel: NET: Registered PF_PACKET protocol family
Jan 30 05:04:49.008932 kernel: Key type dns_resolver registered
Jan 30 05:04:49.009175 kernel: IPI shorthand broadcast: enabled
Jan 30 05:04:49.009190 kernel: sched_clock: Marking stable (949004911, 116132353)->(1184104041, -118966777)
Jan 30 05:04:49.009202 kernel: registered taskstats version 1
Jan 30 05:04:49.009215 kernel: Loading compiled-in X.509 certificates
Jan 30 05:04:49.009227 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 05:04:49.009241 kernel: Key type .fscrypt registered
Jan 30 05:04:49.009253 kernel: Key type fscrypt-provisioning registered
Jan 30 05:04:49.009263 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 05:04:49.009277 kernel: ima: Allocated hash algorithm: sha1
Jan 30 05:04:49.009286 kernel: ima: No architecture policies found
Jan 30 05:04:49.009295 kernel: clk: Disabling unused clocks
Jan 30 05:04:49.009304 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 05:04:49.009314 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 05:04:49.009344 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 05:04:49.009356 kernel: Run /init as init process
Jan 30 05:04:49.009366 kernel: with arguments:
Jan 30 05:04:49.009375 kernel: /init
Jan 30 05:04:49.009387 kernel: with environment:
Jan 30 05:04:49.009396 kernel: HOME=/
Jan 30 05:04:49.009406 kernel: TERM=linux
Jan 30 05:04:49.009415 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 05:04:49.009428 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 05:04:49.009441 systemd[1]: Detected virtualization kvm.
Jan 30 05:04:49.009451 systemd[1]: Detected architecture x86-64.
Jan 30 05:04:49.009463 systemd[1]: Running in initrd.
Jan 30 05:04:49.009473 systemd[1]: No hostname configured, using default hostname.
Jan 30 05:04:49.009482 systemd[1]: Hostname set to .
Jan 30 05:04:49.009493 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 05:04:49.009502 systemd[1]: Queued start job for default target initrd.target.
Jan 30 05:04:49.009512 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:04:49.009522 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:04:49.009533 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 05:04:49.009546 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 05:04:49.009556 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 05:04:49.009566 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 05:04:49.009578 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 05:04:49.009588 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 05:04:49.009598 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:04:49.009608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:04:49.009627 systemd[1]: Reached target paths.target - Path Units.
Jan 30 05:04:49.009642 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 05:04:49.009655 systemd[1]: Reached target swap.target - Swaps.
Jan 30 05:04:49.009673 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 05:04:49.009687 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 05:04:49.009700 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 05:04:49.009717 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 05:04:49.009730 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 05:04:49.009744 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:04:49.009759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:04:49.009775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:04:49.009785 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 05:04:49.009795 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 05:04:49.009805 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 05:04:49.009818 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 05:04:49.009828 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 05:04:49.009838 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 05:04:49.009848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 05:04:49.009859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:04:49.009868 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 05:04:49.009878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:04:49.009888 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 05:04:49.009951 systemd-journald[183]: Collecting audit messages is disabled.
Jan 30 05:04:49.009979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 05:04:49.009991 systemd-journald[183]: Journal started
Jan 30 05:04:49.010016 systemd-journald[183]: Runtime Journal (/run/log/journal/0f8e87940c4a4c38af0c5bfd7626c9e8) is 4.9M, max 39.3M, 34.4M free.
Jan 30 05:04:49.011013 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 05:04:49.012793 systemd-modules-load[184]: Inserted module 'overlay'
Jan 30 05:04:49.041969 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 05:04:49.045985 kernel: Bridge firewalling registered
Jan 30 05:04:49.044314 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 30 05:04:49.045248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:04:49.046065 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:04:49.054213 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:04:49.058125 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 05:04:49.061155 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 05:04:49.063062 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 05:04:49.074213 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 05:04:49.093665 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:04:49.101340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 05:04:49.103469 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:04:49.104386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:04:49.111283 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 05:04:49.114221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 05:04:49.135849 dracut-cmdline[218]: dracut-dracut-053
Jan 30 05:04:49.142739 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:04:49.164065 systemd-resolved[219]: Positive Trust Anchors:
Jan 30 05:04:49.164079 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 05:04:49.164114 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 05:04:49.167270 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jan 30 05:04:49.168622 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 05:04:49.172282 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 05:04:49.257993 kernel: SCSI subsystem initialized
Jan 30 05:04:49.270003 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 05:04:49.282995 kernel: iscsi: registered transport (tcp)
Jan 30 05:04:49.306168 kernel: iscsi: registered transport (qla4xxx)
Jan 30 05:04:49.306388 kernel: QLogic iSCSI HBA Driver
Jan 30 05:04:49.362023 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 05:04:49.369379 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 05:04:49.401205 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 05:04:49.401315 kernel: device-mapper: uevent: version 1.0.3
Jan 30 05:04:49.402320 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 05:04:49.454041 kernel: raid6: avx2x4 gen() 15143 MB/s
Jan 30 05:04:49.470004 kernel: raid6: avx2x2 gen() 21536 MB/s
Jan 30 05:04:49.487012 kernel: raid6: avx2x1 gen() 18849 MB/s
Jan 30 05:04:49.487131 kernel: raid6: using algorithm avx2x2 gen() 21536 MB/s
Jan 30 05:04:49.505126 kernel: raid6: .... xor() 17512 MB/s, rmw enabled
Jan 30 05:04:49.505246 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 05:04:49.530994 kernel: xor: automatically using best checksumming function avx
Jan 30 05:04:49.705989 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 05:04:49.719719 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 05:04:49.727335 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:04:49.753574 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jan 30 05:04:49.760517 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:04:49.768130 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 05:04:49.789815 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 30 05:04:49.833073 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 05:04:49.840321 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 05:04:49.912258 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:04:49.922206 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 05:04:49.952154 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 05:04:49.957313 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 05:04:49.957892 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:04:49.958288 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 05:04:49.967532 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 05:04:50.001008 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 05:04:50.011975 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 30 05:04:50.047256 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 05:04:50.052165 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 05:04:50.052199 kernel: GPT:9289727 != 125829119
Jan 30 05:04:50.052220 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 05:04:50.052240 kernel: GPT:9289727 != 125829119
Jan 30 05:04:50.052259 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 05:04:50.052279 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:04:50.052300 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 30 05:04:50.074806 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 05:04:50.074847 kernel: scsi host0: Virtio SCSI HBA
Jan 30 05:04:50.075131 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Jan 30 05:04:50.078995 kernel: ACPI: bus type USB registered
Jan 30 05:04:50.081297 kernel: usbcore: registered new interface driver usbfs
Jan 30 05:04:50.081387 kernel: usbcore: registered new interface driver hub
Jan 30 05:04:50.083965 kernel: libata version 3.00 loaded.
Jan 30 05:04:50.084061 kernel: usbcore: registered new device driver usb
Jan 30 05:04:50.109150 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 05:04:50.180307 kernel: scsi host1: ata_piix
Jan 30 05:04:50.180522 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (450)
Jan 30 05:04:50.180538 kernel: scsi host2: ata_piix
Jan 30 05:04:50.180665 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 30 05:04:50.180679 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 30 05:04:50.180702 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461)
Jan 30 05:04:50.180715 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 05:04:50.131410 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 05:04:50.131554 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:04:50.133214 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:04:50.134637 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:04:50.134904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:04:50.135402 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:04:50.142759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:04:50.247119 kernel: AES CTR mode by8 optimization enabled
Jan 30 05:04:50.247182 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 30 05:04:50.247588 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 30 05:04:50.247821 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 30 05:04:50.248060 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 30 05:04:50.248278 kernel: hub 1-0:1.0: USB hub found
Jan 30 05:04:50.248515 kernel: hub 1-0:1.0: 2 ports detected
Jan 30 05:04:50.173609 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 05:04:50.189800 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 05:04:50.249292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:04:50.257542 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 05:04:50.265297 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 05:04:50.266062 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 05:04:50.272275 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 05:04:50.278442 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:04:50.285972 disk-uuid[531]: Primary Header is updated.
Jan 30 05:04:50.285972 disk-uuid[531]: Secondary Entries is updated.
Jan 30 05:04:50.285972 disk-uuid[531]: Secondary Header is updated.
Jan 30 05:04:50.297003 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:04:50.310505 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:04:50.318658 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:04:50.329997 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:04:51.319993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:04:51.321119 disk-uuid[532]: The operation has completed successfully.
Jan 30 05:04:51.368197 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 05:04:51.368367 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 05:04:51.375238 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 05:04:51.391443 sh[562]: Success
Jan 30 05:04:51.407988 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 05:04:51.478895 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 05:04:51.500929 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 05:04:51.501751 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 05:04:51.530519 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 05:04:51.530609 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:04:51.530623 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 05:04:51.530636 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 05:04:51.530649 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 05:04:51.539188 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 05:04:51.540369 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 05:04:51.547216 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 05:04:51.550512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 05:04:51.565094 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:04:51.565207 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:04:51.565227 kernel: BTRFS info (device vda6): using free space tree
Jan 30 05:04:51.569969 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 05:04:51.585649 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 05:04:51.586444 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:04:51.598660 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 05:04:51.607254 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 05:04:51.691289 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 05:04:51.716324 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 05:04:51.743314 systemd-networkd[747]: lo: Link UP
Jan 30 05:04:51.743325 systemd-networkd[747]: lo: Gained carrier
Jan 30 05:04:51.746605 systemd-networkd[747]: Enumeration completed
Jan 30 05:04:51.747369 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 05:04:51.747931 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 05:04:51.747935 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 30 05:04:51.749097 systemd[1]: Reached target network.target - Network.
Jan 30 05:04:51.750392 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:04:51.750397 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:04:51.751733 systemd-networkd[747]: eth0: Link UP
Jan 30 05:04:51.751740 systemd-networkd[747]: eth0: Gained carrier
Jan 30 05:04:51.751757 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 05:04:51.758624 systemd-networkd[747]: eth1: Link UP
Jan 30 05:04:51.758639 systemd-networkd[747]: eth1: Gained carrier
Jan 30 05:04:51.758660 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:04:51.766830 ignition[658]: Ignition 2.19.0
Jan 30 05:04:51.766844 ignition[658]: Stage: fetch-offline
Jan 30 05:04:51.766908 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:04:51.766920 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:04:51.769090 ignition[658]: parsed url from cmdline: ""
Jan 30 05:04:51.769101 ignition[658]: no config URL provided
Jan 30 05:04:51.769111 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:04:51.769133 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:04:51.769141 ignition[658]: failed to fetch config: resource requires networking
Jan 30 05:04:51.770898 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 05:04:51.769400 ignition[658]: Ignition finished successfully
Jan 30 05:04:51.772041 systemd-networkd[747]: eth0: DHCPv4 address 24.144.82.28/20, gateway 24.144.80.1 acquired from 169.254.169.253
Jan 30 05:04:51.780155 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.7/20 acquired from 169.254.169.253
Jan 30 05:04:51.780333 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 05:04:51.817631 ignition[754]: Ignition 2.19.0
Jan 30 05:04:51.817650 ignition[754]: Stage: fetch
Jan 30 05:04:51.817976 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:04:51.818000 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:04:51.818158 ignition[754]: parsed url from cmdline: ""
Jan 30 05:04:51.818165 ignition[754]: no config URL provided
Jan 30 05:04:51.818174 ignition[754]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:04:51.818236 ignition[754]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:04:51.818268 ignition[754]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 30 05:04:51.835739 ignition[754]: GET result: OK
Jan 30 05:04:51.836569 ignition[754]: parsing config with SHA512: 8353bce5728c2e42a317fd1faf34b3eaad1d68fb43812d419e983731623b994227952c0c393e3622d576a1a7b9d02d585451b526b57c7401108cbc51c2743a2a
Jan 30 05:04:51.844911 unknown[754]: fetched base config from "system"
Jan 30 05:04:51.844931 unknown[754]: fetched base config from "system"
Jan 30 05:04:51.846445 ignition[754]: fetch: fetch complete
Jan 30 05:04:51.844957 unknown[754]: fetched user config from "digitalocean"
Jan 30 05:04:51.846914 ignition[754]: fetch: fetch passed
Jan 30 05:04:51.849099 ignition[754]: Ignition finished successfully
Jan 30 05:04:51.852132 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 05:04:51.857253 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 05:04:51.886532 ignition[762]: Ignition 2.19.0
Jan 30 05:04:51.886550 ignition[762]: Stage: kargs
Jan 30 05:04:51.886870 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:04:51.886888 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:04:51.890088 ignition[762]: kargs: kargs passed
Jan 30 05:04:51.890159 ignition[762]: Ignition finished successfully
Jan 30 05:04:51.892258 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 05:04:51.898300 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 05:04:51.932843 ignition[768]: Ignition 2.19.0
Jan 30 05:04:51.932871 ignition[768]: Stage: disks
Jan 30 05:04:51.933106 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:04:51.936420 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 05:04:51.933120 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:04:51.934278 ignition[768]: disks: disks passed
Jan 30 05:04:51.934372 ignition[768]: Ignition finished successfully
Jan 30 05:04:51.941571 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 05:04:51.942590 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 05:04:51.943438 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 05:04:51.944336 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 05:04:51.945184 systemd[1]: Reached target basic.target - Basic System.
Jan 30 05:04:51.952337 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 05:04:51.972096 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 05:04:51.975128 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 05:04:51.982727 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 05:04:52.102995 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 05:04:52.104314 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 05:04:52.105507 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 05:04:52.121212 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 05:04:52.124111 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 05:04:52.128285 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 30 05:04:52.137320 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784)
Jan 30 05:04:52.137231 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 05:04:52.149765 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:04:52.149806 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:04:52.149820 kernel: BTRFS info (device vda6): using free space tree
Jan 30 05:04:52.150556 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 05:04:52.150622 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 05:04:52.154394 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 05:04:52.162982 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 05:04:52.164667 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 05:04:52.166578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 05:04:52.237139 coreos-metadata[787]: Jan 30 05:04:52.237 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:04:52.247236 coreos-metadata[786]: Jan 30 05:04:52.247 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:04:52.258048 coreos-metadata[787]: Jan 30 05:04:52.257 INFO Fetch successful Jan 30 05:04:52.259487 coreos-metadata[786]: Jan 30 05:04:52.259 INFO Fetch successful Jan 30 05:04:52.263230 coreos-metadata[787]: Jan 30 05:04:52.262 INFO wrote hostname ci-4081.3.0-c-6bfcfa9ae9 to /sysroot/etc/hostname Jan 30 05:04:52.264337 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 05:04:52.267216 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 05:04:52.267343 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 05:04:52.271812 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 05:04:52.277690 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory Jan 30 05:04:52.283172 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 05:04:52.288204 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 05:04:52.396819 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 05:04:52.403182 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 05:04:52.406364 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 05:04:52.419990 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:04:52.445441 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 05:04:52.457494 ignition[904]: INFO : Ignition 2.19.0 Jan 30 05:04:52.457494 ignition[904]: INFO : Stage: mount Jan 30 05:04:52.458683 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:04:52.458683 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 05:04:52.460282 ignition[904]: INFO : mount: mount passed Jan 30 05:04:52.460282 ignition[904]: INFO : Ignition finished successfully Jan 30 05:04:52.460755 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 05:04:52.468155 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 05:04:52.528801 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 05:04:52.535325 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 05:04:52.551384 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917) Jan 30 05:04:52.551460 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:04:52.553561 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 05:04:52.553649 kernel: BTRFS info (device vda6): using free space tree Jan 30 05:04:52.558983 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 05:04:52.562142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
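The flatcar-metadata-hostname step above fetches the droplet's metadata document and persists the hostname into the sysroot. A hedged sketch, assuming DigitalOcean's v1.json carries a top-level "hostname" field; the endpoint and target path are taken verbatim from the log:

```go
// Fetch the droplet metadata JSON and write the hostname where the
// log says coreos-metadata wrote it ("wrote hostname ... to
// /sysroot/etc/hostname"). The "hostname" field is an assumption
// about the v1.json schema.
package main

import (
	"encoding/json"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get("http://169.254.169.254/metadata/v1.json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var md struct {
		Hostname string `json:"hostname"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&md); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/sysroot/etc/hostname", []byte(md.Hostname+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```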
Jan 30 05:04:52.601742 ignition[934]: INFO : Ignition 2.19.0 Jan 30 05:04:52.601742 ignition[934]: INFO : Stage: files Jan 30 05:04:52.603278 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:04:52.603278 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 05:04:52.604723 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Jan 30 05:04:52.605369 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 05:04:52.605369 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 05:04:52.610014 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 05:04:52.611756 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 05:04:52.613191 unknown[934]: wrote ssh authorized keys file for user: core Jan 30 05:04:52.614396 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 05:04:52.616496 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 05:04:52.617752 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 05:04:52.660387 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 05:04:52.817159 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 05:04:52.818008 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 05:04:52.818008 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 05:04:53.099153 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 05:04:53.219111 systemd-networkd[747]: eth1: Gained IPv6LL Jan 30 05:04:53.233307 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 05:04:53.234116 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 05:04:53.234923 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 05:04:53.234923 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 05:04:53.234923 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 05:04:53.234923 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 05:04:53.234923 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 05:04:53.234923 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 05:04:53.238580 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 05:04:53.238580 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 05:04:53.238580 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 05:04:53.238580 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:04:53.238580 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:04:53.238580 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:04:53.238580 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 05:04:53.283159 systemd-networkd[747]: eth0: Gained IPv6LL Jan 30 05:04:53.677852 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 05:04:54.035934 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:04:54.036966 ignition[934]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 05:04:54.038748 ignition[934]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 05:04:54.039493 ignition[934]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 05:04:54.039493 ignition[934]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 05:04:54.039493 ignition[934]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 05:04:54.039493 ignition[934]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 05:04:54.039493 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 05:04:54.039493 ignition[934]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 05:04:54.039493 ignition[934]: INFO : files: files passed Jan 30 05:04:54.039493 ignition[934]: INFO : Ignition finished successfully Jan 30 05:04:54.040748 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 05:04:54.047230 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 05:04:54.056212 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 05:04:54.062720 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 05:04:54.062897 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 05:04:54.071886 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:04:54.071886 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:04:54.074418 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:04:54.075390 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 05:04:54.076762 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 05:04:54.082481 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 05:04:54.117746 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 05:04:54.117886 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 05:04:54.119721 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 05:04:54.120264 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 05:04:54.121304 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 05:04:54.137277 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 05:04:54.154928 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 05:04:54.161249 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 05:04:54.185270 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:04:54.186273 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:04:54.187402 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 05:04:54.188353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 05:04:54.188511 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 05:04:54.189493 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 05:04:54.189974 systemd[1]: Stopped target basic.target - Basic System. Jan 30 05:04:54.191099 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 05:04:54.191886 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 05:04:54.192885 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 05:04:54.193804 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 05:04:54.194976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 05:04:54.196129 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 05:04:54.197087 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 05:04:54.198352 systemd[1]: Stopped target swap.target - Swaps. Jan 30 05:04:54.199439 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 05:04:54.199648 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 05:04:54.201011 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:04:54.202464 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:04:54.203158 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 05:04:54.205071 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 30 05:04:54.205753 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 05:04:54.205897 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 05:04:54.207114 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 05:04:54.207403 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 05:04:54.208377 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 05:04:54.208563 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 05:04:54.209765 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 05:04:54.209985 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 05:04:54.223328 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 05:04:54.223865 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 05:04:54.224093 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:04:54.227322 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 05:04:54.230270 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 05:04:54.230598 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:04:54.232401 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 05:04:54.232613 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 05:04:54.242577 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 05:04:54.242727 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 05:04:54.267712 ignition[987]: INFO : Ignition 2.19.0 Jan 30 05:04:54.267712 ignition[987]: INFO : Stage: umount Jan 30 05:04:54.267712 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:04:54.267712 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 05:04:54.267060 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 05:04:54.276916 ignition[987]: INFO : umount: umount passed Jan 30 05:04:54.276916 ignition[987]: INFO : Ignition finished successfully Jan 30 05:04:54.273814 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 05:04:54.274076 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 05:04:54.275180 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 05:04:54.275347 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 05:04:54.277724 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 05:04:54.277880 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 05:04:54.278838 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 05:04:54.278914 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 05:04:54.279664 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 05:04:54.279717 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 05:04:54.280402 systemd[1]: Stopped target network.target - Network. Jan 30 05:04:54.281202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 05:04:54.281275 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 05:04:54.282039 systemd[1]: Stopped target paths.target - Path Units. Jan 30 05:04:54.282888 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 30 05:04:54.283022 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 05:04:54.283700 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 05:04:54.284608 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 05:04:54.285406 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 05:04:54.285461 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 05:04:54.286312 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 05:04:54.286357 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 05:04:54.287197 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 05:04:54.287274 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 05:04:54.288044 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 05:04:54.288102 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 05:04:54.288866 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 05:04:54.288930 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 05:04:54.289980 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 05:04:54.291413 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 05:04:54.294071 systemd-networkd[747]: eth1: DHCPv6 lease lost Jan 30 05:04:54.298092 systemd-networkd[747]: eth0: DHCPv6 lease lost Jan 30 05:04:54.300580 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 05:04:54.300746 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 05:04:54.302416 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 05:04:54.302469 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:04:54.310285 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 05:04:54.310761 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 05:04:54.310850 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 05:04:54.314998 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:04:54.316627 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 05:04:54.317799 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 05:04:54.328724 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 05:04:54.329647 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:04:54.330311 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 05:04:54.330375 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 05:04:54.332230 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 05:04:54.332318 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:04:54.334044 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 05:04:54.334588 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:04:54.335555 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 05:04:54.335689 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 05:04:54.337854 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 30 05:04:54.337925 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 05:04:54.339127 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 05:04:54.339175 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 05:04:54.339668 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 05:04:54.339722 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 05:04:54.340807 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 05:04:54.340856 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 05:04:54.341555 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 05:04:54.341616 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 05:04:54.349224 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 05:04:54.350559 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 05:04:54.350636 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 05:04:54.352314 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:04:54.352459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:04:54.357677 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 05:04:54.358816 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 05:04:54.360929 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 05:04:54.366422 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 05:04:54.384400 systemd[1]: Switching root. Jan 30 05:04:54.442321 systemd-journald[183]: Journal stopped Jan 30 05:04:55.638798 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 30 05:04:55.638874 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 05:04:55.638890 kernel: SELinux: policy capability open_perms=1 Jan 30 05:04:55.638902 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 05:04:55.638913 kernel: SELinux: policy capability always_check_network=0 Jan 30 05:04:55.638932 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 05:04:55.651246 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 05:04:55.651270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 05:04:55.651283 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 05:04:55.651295 kernel: audit: type=1403 audit(1738213494.627:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 05:04:55.651311 systemd[1]: Successfully loaded SELinux policy in 48.291ms. Jan 30 05:04:55.651341 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.961ms. Jan 30 05:04:55.651357 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 05:04:55.651453 systemd[1]: Detected virtualization kvm. Jan 30 05:04:55.651473 systemd[1]: Detected architecture x86-64. Jan 30 05:04:55.651485 systemd[1]: Detected first boot. Jan 30 05:04:55.651498 systemd[1]: Hostname set to <ci-4081.3.0-c-6bfcfa9ae9>. Jan 30 05:04:55.651511 systemd[1]: Initializing machine ID from VM UUID. 
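"Initializing machine ID from VM UUID" above refers to systemd seeding /etc/machine-id from the hypervisor-provided DMI product UUID on first boot in a VM. A sketch that only reads the same source; the real derivation into machine-id format is more involved and left to systemd:

```go
// Read the DMI product UUID that systemd consults on KVM guests.
// Requires root on most systems; the path is standard sysfs.
package main

import (
	"fmt"
	"os"
)

func main() {
	uuid, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
	if err != nil {
		panic(err)
	}
	fmt.Printf("VM UUID: %s", uuid)
}
```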
Jan 30 05:04:55.651524 zram_generator::config[1034]: No configuration found. Jan 30 05:04:55.651538 systemd[1]: Populated /etc with preset unit settings. Jan 30 05:04:55.651764 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 05:04:55.651785 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 05:04:55.651799 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 05:04:55.651814 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 05:04:55.651827 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 05:04:55.651846 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 05:04:55.651859 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 05:04:55.651871 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 05:04:55.651884 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 05:04:55.651898 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 05:04:55.651914 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 05:04:55.651927 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 05:04:55.651969 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 05:04:55.651983 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 05:04:55.651995 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 05:04:55.652008 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 05:04:55.652021 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 05:04:55.652033 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 05:04:55.652047 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:04:55.652063 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 05:04:55.652076 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 05:04:55.652104 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 05:04:55.652118 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 05:04:55.652130 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:04:55.652143 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 05:04:55.652158 systemd[1]: Reached target slices.target - Slice Units. Jan 30 05:04:55.652172 systemd[1]: Reached target swap.target - Swaps. Jan 30 05:04:55.652184 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 05:04:55.652196 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 05:04:55.652209 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:04:55.652222 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 05:04:55.652235 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 05:04:55.652247 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
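The slice names above (e.g. system-serial\x2dgetty.slice) show systemd's unit-name escaping, where a '-' inside a name component becomes \x2d so that '-' can serve as the hierarchy separator. A toy version of just that rule for ASCII names; the real systemd-escape(1) also handles '/', leading '.', and non-ASCII bytes:

```go
// Illustrate why "serial-getty" appears as serial\x2dgetty in the
// slice names logged above. escapeComponent is a hypothetical helper,
// not systemd's actual implementation.
package main

import (
	"fmt"
	"strings"
)

func escapeComponent(s string) string {
	return strings.ReplaceAll(s, "-", `\x2d`)
}

func main() {
	fmt.Println("system-" + escapeComponent("serial-getty") + ".slice")
	// prints: system-serial\x2dgetty.slice
}
```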
Jan 30 05:04:55.652259 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 05:04:55.652275 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 05:04:55.652287 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 05:04:55.652301 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:04:55.652313 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 05:04:55.652325 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 05:04:55.652338 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 05:04:55.652351 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 05:04:55.652363 systemd[1]: Reached target machines.target - Containers. Jan 30 05:04:55.652381 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 05:04:55.652396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:04:55.652408 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 05:04:55.652421 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 05:04:55.652433 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:04:55.652446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:04:55.652458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:04:55.652470 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 05:04:55.652482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:04:55.652498 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 05:04:55.652515 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 05:04:55.652528 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 05:04:55.652541 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 05:04:55.652553 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 05:04:55.652565 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 05:04:55.652578 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 05:04:55.652590 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 05:04:55.652602 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 05:04:55.652618 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 05:04:55.652630 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 05:04:55.652643 systemd[1]: Stopped verity-setup.service. Jan 30 05:04:55.652660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:04:55.652672 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 05:04:55.652685 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 30 05:04:55.652697 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 05:04:55.652709 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 05:04:55.652725 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 05:04:55.652738 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 05:04:55.652751 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:04:55.652764 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 05:04:55.652776 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 05:04:55.652792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:04:55.652807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:04:55.652820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:04:55.652832 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:04:55.652844 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 05:04:55.652859 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 05:04:55.652872 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 05:04:55.652885 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 05:04:55.652897 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 05:04:55.652910 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 05:04:55.652923 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 05:04:55.652948 kernel: loop: module loaded Jan 30 05:04:55.652962 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 05:04:55.653012 systemd-journald[1106]: Collecting audit messages is disabled. Jan 30 05:04:55.653042 kernel: fuse: init (API version 7.39) Jan 30 05:04:55.653055 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 05:04:55.653071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:04:55.653083 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 05:04:55.653097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:04:55.653110 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 05:04:55.653124 systemd-journald[1106]: Journal started Jan 30 05:04:55.653151 systemd-journald[1106]: Runtime Journal (/run/log/journal/0f8e87940c4a4c38af0c5bfd7626c9e8) is 4.9M, max 39.3M, 34.4M free. Jan 30 05:04:55.279120 systemd[1]: Queued start job for default target multi-user.target. Jan 30 05:04:55.302426 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 05:04:55.302914 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 05:04:55.666561 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 05:04:55.666628 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 30 05:04:55.662796 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 05:04:55.663151 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 05:04:55.664606 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:04:55.665241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:04:55.666540 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 05:04:55.668570 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 05:04:55.670559 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 05:04:55.721264 kernel: ACPI: bus type drm_connector registered Jan 30 05:04:55.723962 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 05:04:55.741112 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 05:04:55.755194 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 05:04:55.757147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:04:55.773287 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:04:55.780137 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 05:04:55.780000 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 05:04:55.781128 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 05:04:55.781359 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:04:55.783746 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 05:04:55.803630 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 05:04:55.814666 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 05:04:55.823908 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 05:04:55.841986 kernel: loop1: detected capacity change from 0 to 8 Jan 30 05:04:55.835266 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 05:04:55.885006 systemd-journald[1106]: Time spent on flushing to /var/log/journal/0f8e87940c4a4c38af0c5bfd7626c9e8 is 128.739ms for 997 entries. Jan 30 05:04:55.885006 systemd-journald[1106]: System Journal (/var/log/journal/0f8e87940c4a4c38af0c5bfd7626c9e8) is 8.0M, max 195.6M, 187.6M free. Jan 30 05:04:56.031926 systemd-journald[1106]: Received client request to flush runtime journal. Jan 30 05:04:56.035145 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 05:04:56.035185 kernel: loop3: detected capacity change from 0 to 210664 Jan 30 05:04:55.932557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:04:55.998532 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 05:04:56.001093 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 05:04:56.010781 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 05:04:56.020379 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 05:04:56.050061 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 30 05:04:56.062177 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 05:04:56.085871 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:04:56.096058 kernel: loop5: detected capacity change from 0 to 8 Jan 30 05:04:56.101433 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 05:04:56.108125 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 05:04:56.143292 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 05:04:56.148005 kernel: loop7: detected capacity change from 0 to 210664 Jan 30 05:04:56.172017 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Jan 30 05:04:56.172045 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Jan 30 05:04:56.178210 (sd-merge)[1171]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 05:04:56.179117 (sd-merge)[1171]: Merged extensions into '/usr'. Jan 30 05:04:56.200080 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 05:04:56.204901 systemd[1]: Reloading requested from client PID 1127 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 05:04:56.204937 systemd[1]: Reloading... Jan 30 05:04:56.439980 zram_generator::config[1207]: No configuration found. Jan 30 05:04:56.587074 ldconfig[1120]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 05:04:56.712430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:04:56.802736 systemd[1]: Reloading finished in 596 ms. Jan 30 05:04:56.856858 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 05:04:56.861598 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 05:04:56.874433 systemd[1]: Starting ensure-sysext.service... Jan 30 05:04:56.888057 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 05:04:56.909873 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Jan 30 05:04:56.910206 systemd[1]: Reloading... Jan 30 05:04:56.968668 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 05:04:56.971734 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 05:04:56.980191 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 05:04:56.980773 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jan 30 05:04:56.980914 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jan 30 05:04:56.992719 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:04:56.993233 systemd-tmpfiles[1245]: Skipping /boot Jan 30 05:04:57.034849 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:04:57.036031 systemd-tmpfiles[1245]: Skipping /boot Jan 30 05:04:57.070980 zram_generator::config[1272]: No configuration found. 
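The (sd-merge) lines above show systemd-sysext overlaying four extension images onto /usr. A small sketch of the discovery half only: list the images and symlinks in /etc/extensions, which is where the files stage linked kubernetes.raw earlier in this log. Note that sysext also scans other hierarchy locations (e.g. /run and /var/lib extension directories), which this sketch ignores:

```go
// Enumerate candidate sysext images in /etc/extensions.
package main

import (
	"fmt"
	"os"
)

func main() {
	entries, err := os.ReadDir("/etc/extensions")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println("candidate extension:", e.Name())
	}
}
```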
Jan 30 05:04:57.311096 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:04:57.399219 systemd[1]: Reloading finished in 488 ms. Jan 30 05:04:57.421508 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 05:04:57.427745 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:04:57.446541 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:04:57.456497 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 05:04:57.460599 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 05:04:57.468883 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 05:04:57.479495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:04:57.490571 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 05:04:57.495870 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:04:57.497329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:04:57.508515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:04:57.517409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:04:57.521278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:04:57.523199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:04:57.523439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:04:57.528404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:04:57.528702 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:04:57.531098 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:04:57.540631 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 05:04:57.541864 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:04:57.547859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:04:57.549050 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:04:57.556437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:04:57.558462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:04:57.558753 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 05:04:57.564274 systemd[1]: Finished ensure-sysext.service. Jan 30 05:04:57.581191 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 05:04:57.582934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:04:57.584416 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:04:57.592424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:04:57.604730 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 05:04:57.616412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:04:57.616674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:04:57.618538 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:04:57.618749 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:04:57.621918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:04:57.646570 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 05:04:57.646895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:04:57.664081 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 05:04:57.676230 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 05:04:57.678037 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 05:04:57.680949 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 05:04:57.684591 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Jan 30 05:04:57.692623 augenrules[1353]: No rules Jan 30 05:04:57.696587 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 05:04:57.728819 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 05:04:57.730934 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:04:57.734084 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 05:04:57.743363 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 05:04:57.963162 systemd-resolved[1321]: Positive Trust Anchors: Jan 30 05:04:57.963741 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 05:04:57.963904 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 05:04:57.971874 systemd-resolved[1321]: Using system hostname 'ci-4081.3.0-c-6bfcfa9ae9'. Jan 30 05:04:57.975575 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 30 05:04:57.976619 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:04:58.011336 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 05:04:58.011377 systemd-networkd[1367]: lo: Link UP Jan 30 05:04:58.011384 systemd-networkd[1367]: lo: Gained carrier Jan 30 05:04:58.011917 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:04:58.012168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:04:58.012739 systemd-networkd[1367]: Enumeration completed Jan 30 05:04:58.020274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:04:58.029316 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:04:58.035406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:04:58.036099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:04:58.036161 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 05:04:58.036185 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:04:58.036424 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 05:04:58.037385 systemd[1]: Reached target network.target - Network. Jan 30 05:04:58.043221 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 05:04:58.044429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:04:58.045122 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:04:58.053467 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 05:04:58.054871 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 05:04:58.066616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:04:58.066888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:04:58.074513 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 05:04:58.074646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:04:58.081633 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:04:58.082110 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 05:04:58.083069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:04:58.088045 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 05:04:58.093563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 30 05:04:58.122985 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1369) Jan 30 05:04:58.157990 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 05:04:58.175996 kernel: ACPI: button: Power Button [PWRF] Jan 30 05:04:58.214808 systemd-networkd[1367]: eth1: Configuring with /run/systemd/network/10-62:7d:2a:28:45:3d.network. Jan 30 05:04:58.218197 systemd-networkd[1367]: eth1: Link UP Jan 30 05:04:58.218357 systemd-networkd[1367]: eth1: Gained carrier Jan 30 05:04:58.225965 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Jan 30 05:04:58.233981 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 05:04:58.263054 systemd-networkd[1367]: eth0: Configuring with /run/systemd/network/10-4a:a8:48:ca:ca:d4.network. Jan 30 05:04:58.265816 systemd-networkd[1367]: eth0: Link UP Jan 30 05:04:58.266044 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 05:04:58.267381 systemd-networkd[1367]: eth0: Gained carrier Jan 30 05:04:58.293741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 05:04:58.314225 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 05:04:58.335068 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 05:04:58.335251 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 05:04:58.348256 kernel: Console: switching to colour dummy device 80x25 Jan 30 05:04:58.348365 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 05:04:58.348391 kernel: [drm] features: -context_init Jan 30 05:04:58.354978 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 05:04:58.363983 kernel: [drm] number of scanouts: 1 Jan 30 05:04:58.364073 kernel: [drm] number of cap sets: 0 Jan 30 05:04:58.367050 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 05:04:58.371982 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 05:04:58.395232 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 05:04:58.395375 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 05:04:58.418045 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 05:04:58.433406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:04:58.440209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:04:58.440495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:04:58.481092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:04:58.497251 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:04:58.498157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:04:58.506292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:04:58.590038 kernel: EDAC MC: Ver: 3.0.0 Jan 30 05:04:58.626856 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 05:04:58.638231 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 05:04:58.646122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 05:04:58.667639 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 05:04:58.707622 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 05:04:58.710683 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:04:58.710857 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 05:04:58.711166 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 05:04:58.711382 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 05:04:58.711784 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 05:04:58.712663 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 05:04:58.714514 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 05:04:58.714954 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 05:04:58.715319 systemd[1]: Reached target paths.target - Path Units. Jan 30 05:04:58.715603 systemd[1]: Reached target timers.target - Timer Units. Jan 30 05:04:58.719092 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 05:04:58.722215 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 05:04:58.732582 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 05:04:58.740311 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 05:04:58.744250 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 05:04:58.746184 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 05:04:58.747818 systemd[1]: Reached target basic.target - Basic System. Jan 30 05:04:58.748700 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:04:58.748757 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:04:58.753170 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 05:04:58.763315 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 05:04:58.764314 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 05:04:58.775394 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 05:04:58.785263 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 05:04:58.793276 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 05:04:58.794165 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 05:04:58.802414 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 05:04:58.811218 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 05:04:58.817216 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 05:04:58.830344 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 05:04:58.846342 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 30 05:04:58.850863 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 05:04:58.852802 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 05:04:58.861292 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 05:04:58.873240 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 05:04:58.877297 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 05:04:58.897118 jq[1432]: false
Jan 30 05:04:58.897502 coreos-metadata[1430]: Jan 30 05:04:58.891 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 05:04:58.901751 extend-filesystems[1433]: Found loop4
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found loop5
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found loop6
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found loop7
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found vda
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found vda1
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found vda2
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found vda3
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found usr
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found vda4
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found vda6
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found vda7
Jan 30 05:04:58.922526 extend-filesystems[1433]: Found vda9
Jan 30 05:04:58.922526 extend-filesystems[1433]: Checking size of /dev/vda9
Jan 30 05:04:59.067625 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Jan 30 05:04:59.081916 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1368)
Jan 30 05:04:58.909720 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 05:04:59.047110 dbus-daemon[1431]: [system] SELinux support is enabled
Jan 30 05:04:59.082626 update_engine[1442]: I20250130 05:04:58.926355 1442 main.cc:92] Flatcar Update Engine starting
Jan 30 05:04:59.082906 coreos-metadata[1430]: Jan 30 05:04:58.928 INFO Fetch successful
Jan 30 05:04:59.096373 extend-filesystems[1433]: Resized partition /dev/vda9
Jan 30 05:04:58.911180 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 05:04:59.099262 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024)
Jan 30 05:04:59.121717 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 30 05:04:58.992542 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 05:04:59.124995 dbus-daemon[1431]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 30 05:04:58.995274 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 05:04:59.024741 systemd-logind[1440]: New seat seat0.
Jan 30 05:04:59.148844 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 05:04:59.148844 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 30 05:04:59.148844 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 30 05:04:59.172242 update_engine[1442]: I20250130 05:04:59.147408 1442 update_check_scheduler.cc:74] Next update check in 6m29s
Jan 30 05:04:59.047485 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 05:04:59.172731 extend-filesystems[1433]: Resized filesystem in /dev/vda9
Jan 30 05:04:59.172731 extend-filesystems[1433]: Found vdb
Jan 30 05:04:59.055669 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 05:04:59.182685 jq[1443]: true
Jan 30 05:04:59.055693 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 05:04:59.187482 tar[1450]: linux-amd64/helm
Jan 30 05:04:59.072358 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 05:04:59.188100 jq[1463]: true
Jan 30 05:04:59.075896 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 05:04:59.076583 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 05:04:59.079566 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 05:04:59.079611 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 05:04:59.080391 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 05:04:59.080530 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 30 05:04:59.080573 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 05:04:59.101131 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 05:04:59.148216 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 05:04:59.151867 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 05:04:59.152203 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 05:04:59.182486 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 05:04:59.230631 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 05:04:59.235530 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 05:04:59.305096 systemd-networkd[1367]: eth0: Gained IPv6LL
Jan 30 05:04:59.312012 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 05:04:59.319895 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 05:04:59.342636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:04:59.349245 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 05:04:59.358197 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 05:04:59.364022 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 05:04:59.385502 systemd[1]: Starting sshkeys.service...
Jan 30 05:04:59.477920 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 05:04:59.490590 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 05:04:59.516579 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 05:04:59.557081 systemd-networkd[1367]: eth1: Gained IPv6LL
Jan 30 05:04:59.695258 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 05:04:59.709017 coreos-metadata[1507]: Jan 30 05:04:59.707 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 30 05:04:59.731977 coreos-metadata[1507]: Jan 30 05:04:59.730 INFO Fetch successful
Jan 30 05:04:59.760659 unknown[1507]: wrote ssh authorized keys file for user: core
Jan 30 05:04:59.785891 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 05:04:59.878809 update-ssh-keys[1521]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 05:04:59.880564 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 05:04:59.888315 systemd[1]: Finished sshkeys.service.
Jan 30 05:04:59.898766 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 05:04:59.931018 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 05:04:59.994123 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 05:04:59.994440 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 05:05:00.008074 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 05:05:00.089826 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 05:05:00.101554 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 05:05:00.117619 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 05:05:00.125691 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 05:05:00.179699 containerd[1460]: time="2025-01-30T05:05:00.177235385Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 05:05:00.297766 containerd[1460]: time="2025-01-30T05:05:00.297594459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 05:05:00.306093 containerd[1460]: time="2025-01-30T05:05:00.304515263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 05:05:00.306093 containerd[1460]: time="2025-01-30T05:05:00.306077957Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 05:05:00.306377 containerd[1460]: time="2025-01-30T05:05:00.306124540Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 05:05:00.306495 containerd[1460]: time="2025-01-30T05:05:00.306443626Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.306502700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.306613496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.306640925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.306930832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.306971333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.306986591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.306997410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.307099748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.307357601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.307563741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 05:05:00.307823 containerd[1460]: time="2025-01-30T05:05:00.307588796Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 05:05:00.308365 containerd[1460]: time="2025-01-30T05:05:00.307762890Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 05:05:00.308365 containerd[1460]: time="2025-01-30T05:05:00.307840252Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.318658043Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.318745744Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.318763469Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.318783242Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.318807006Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.319063578Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.319452135Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.319688709Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.319717017Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.319744469Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.319794833Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.319814510Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.319837751Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 05:05:00.319992 containerd[1460]: time="2025-01-30T05:05:00.319858879Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.319880446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.319899894Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.319916434Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.319931206Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.319977037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.319992264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.320036147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.320057515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.320076700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.320095086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.320113323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.320132301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.320150298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.320803 containerd[1460]: time="2025-01-30T05:05:00.320170328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320182979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320196540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320237560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320264586Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320291249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320303002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320316092Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320365973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320386892Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320399475Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320411864Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320422796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320435603Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 05:05:00.321557 containerd[1460]: time="2025-01-30T05:05:00.320452786Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 05:05:00.323173 containerd[1460]: time="2025-01-30T05:05:00.320463032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 05:05:00.323249 containerd[1460]: time="2025-01-30T05:05:00.320872136Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 05:05:00.326198 containerd[1460]: time="2025-01-30T05:05:00.325348930Z" level=info msg="Connect containerd service"
Jan 30 05:05:00.326198 containerd[1460]: time="2025-01-30T05:05:00.325469538Z" level=info msg="using legacy CRI server"
Jan 30 05:05:00.326198 containerd[1460]: time="2025-01-30T05:05:00.325486019Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 05:05:00.326198 containerd[1460]: time="2025-01-30T05:05:00.325675058Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 05:05:00.328386 containerd[1460]: time="2025-01-30T05:05:00.327538986Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 05:05:00.330439 containerd[1460]: time="2025-01-30T05:05:00.330355978Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 05:05:00.330735 containerd[1460]: time="2025-01-30T05:05:00.330706893Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 05:05:00.331027 containerd[1460]: time="2025-01-30T05:05:00.330976324Z" level=info msg="Start subscribing containerd event"
Jan 30 05:05:00.336660 containerd[1460]: time="2025-01-30T05:05:00.336243509Z" level=info msg="Start recovering state"
Jan 30 05:05:00.336660 containerd[1460]: time="2025-01-30T05:05:00.336431194Z" level=info msg="Start event monitor"
Jan 30 05:05:00.336660 containerd[1460]: time="2025-01-30T05:05:00.336462360Z" level=info msg="Start snapshots syncer"
Jan 30 05:05:00.336660 containerd[1460]: time="2025-01-30T05:05:00.336491078Z" level=info msg="Start cni network conf syncer for default"
Jan 30 05:05:00.336660 containerd[1460]: time="2025-01-30T05:05:00.336513136Z" level=info msg="Start streaming server"
Jan 30 05:05:00.337149 containerd[1460]: time="2025-01-30T05:05:00.337117847Z" level=info msg="containerd successfully booted in 0.164643s"
Jan 30 05:05:00.337248 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 05:05:00.538971 tar[1450]: linux-amd64/LICENSE
Jan 30 05:05:00.539537 tar[1450]: linux-amd64/README.md
Jan 30 05:05:00.558538 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 05:05:01.159006 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 05:05:01.172782 systemd[1]: Started sshd@0-24.144.82.28:22-147.75.109.163:40418.service - OpenSSH per-connection server daemon (147.75.109.163:40418).
Jan 30 05:05:01.394431 sshd[1550]: Accepted publickey for core from 147.75.109.163 port 40418 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:05:01.410930 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:05:01.461734 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 05:05:01.501655 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 05:05:01.525674 systemd-logind[1440]: New session 1 of user core.
Jan 30 05:05:01.626549 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 05:05:01.653074 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 05:05:01.691731 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 05:05:01.806899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:05:01.811028 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 05:05:01.836879 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 05:05:02.048472 systemd[1554]: Queued start job for default target default.target.
Jan 30 05:05:02.060769 systemd[1554]: Created slice app.slice - User Application Slice.
Jan 30 05:05:02.061529 systemd[1554]: Reached target paths.target - Paths.
Jan 30 05:05:02.061559 systemd[1554]: Reached target timers.target - Timers.
Jan 30 05:05:02.067231 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 05:05:02.115535 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 05:05:02.116368 systemd[1554]: Reached target sockets.target - Sockets.
Jan 30 05:05:02.116593 systemd[1554]: Reached target basic.target - Basic System.
Jan 30 05:05:02.117435 systemd[1554]: Reached target default.target - Main User Target.
Jan 30 05:05:02.117492 systemd[1554]: Startup finished in 386ms.
Jan 30 05:05:02.117870 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 05:05:02.177314 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 05:05:02.188003 systemd[1]: Startup finished in 1.124s (kernel) + 5.876s (initrd) + 7.605s (userspace) = 14.606s.
Jan 30 05:05:02.342846 systemd[1]: Started sshd@1-24.144.82.28:22-147.75.109.163:40430.service - OpenSSH per-connection server daemon (147.75.109.163:40430).
Jan 30 05:05:02.512278 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 40430 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:05:02.517925 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:05:02.535724 systemd-logind[1440]: New session 2 of user core.
Jan 30 05:05:02.542919 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 05:05:02.655231 sshd[1580]: pam_unix(sshd:session): session closed for user core
Jan 30 05:05:02.683526 systemd[1]: sshd@1-24.144.82.28:22-147.75.109.163:40430.service: Deactivated successfully.
Jan 30 05:05:02.693181 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 05:05:02.697056 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit.
Jan 30 05:05:02.713774 systemd[1]: Started sshd@2-24.144.82.28:22-147.75.109.163:40442.service - OpenSSH per-connection server daemon (147.75.109.163:40442).
Jan 30 05:05:02.728183 systemd-logind[1440]: Removed session 2.
Jan 30 05:05:02.774524 sshd[1587]: Accepted publickey for core from 147.75.109.163 port 40442 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:05:02.775789 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:05:02.802711 systemd-logind[1440]: New session 3 of user core.
Jan 30 05:05:02.810537 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 05:05:02.881401 sshd[1587]: pam_unix(sshd:session): session closed for user core
Jan 30 05:05:02.894721 systemd[1]: sshd@2-24.144.82.28:22-147.75.109.163:40442.service: Deactivated successfully.
Jan 30 05:05:02.901020 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 05:05:02.909800 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit.
Jan 30 05:05:02.916609 systemd[1]: Started sshd@3-24.144.82.28:22-147.75.109.163:40456.service - OpenSSH per-connection server daemon (147.75.109.163:40456).
Jan 30 05:05:02.921533 systemd-logind[1440]: Removed session 3.
Jan 30 05:05:03.015810 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 40456 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:05:03.019371 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:05:03.038834 systemd-logind[1440]: New session 4 of user core.
Jan 30 05:05:03.044373 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 05:05:03.126509 sshd[1594]: pam_unix(sshd:session): session closed for user core
Jan 30 05:05:03.141327 systemd[1]: sshd@3-24.144.82.28:22-147.75.109.163:40456.service: Deactivated successfully.
Jan 30 05:05:03.146388 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 05:05:03.151668 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit.
Jan 30 05:05:03.167099 systemd[1]: Started sshd@4-24.144.82.28:22-147.75.109.163:40468.service - OpenSSH per-connection server daemon (147.75.109.163:40468).
Jan 30 05:05:03.178579 systemd-logind[1440]: Removed session 4.
Jan 30 05:05:03.273072 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 40468 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:05:03.279521 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:05:03.301757 systemd-logind[1440]: New session 5 of user core.
Jan 30 05:05:03.307366 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 05:05:03.426496 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 05:05:03.429515 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 05:05:03.452968 sudo[1605]: pam_unix(sudo:session): session closed for user root
Jan 30 05:05:03.481093 sshd[1601]: pam_unix(sshd:session): session closed for user core
Jan 30 05:05:03.505628 systemd[1]: Started sshd@5-24.144.82.28:22-147.75.109.163:40480.service - OpenSSH per-connection server daemon (147.75.109.163:40480).
Jan 30 05:05:03.507613 systemd[1]: sshd@4-24.144.82.28:22-147.75.109.163:40468.service: Deactivated successfully.
Jan 30 05:05:03.513259 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 05:05:03.518927 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
Jan 30 05:05:03.528879 systemd-logind[1440]: Removed session 5.
Jan 30 05:05:03.625096 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 40480 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:05:03.629823 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:05:03.658215 systemd-logind[1440]: New session 6 of user core.
Jan 30 05:05:03.667554 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 05:05:03.747930 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 05:05:03.748661 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 05:05:03.751408 kubelet[1562]: E0130 05:05:03.749707 1562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 05:05:03.755148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 05:05:03.755586 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 05:05:03.756070 systemd[1]: kubelet.service: Consumed 1.496s CPU time.
Jan 30 05:05:03.761642 sudo[1614]: pam_unix(sudo:session): session closed for user root
Jan 30 05:05:03.773272 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 30 05:05:03.773788 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 05:05:03.812584 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 30 05:05:03.821553 auditctl[1618]: No rules
Jan 30 05:05:03.822304 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 05:05:03.822686 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 30 05:05:03.834156 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 05:05:03.922022 augenrules[1636]: No rules
Jan 30 05:05:03.923320 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 05:05:03.925688 sudo[1613]: pam_unix(sudo:session): session closed for user root
Jan 30 05:05:03.930694 sshd[1608]: pam_unix(sshd:session): session closed for user core
Jan 30 05:05:03.947173 systemd[1]: sshd@5-24.144.82.28:22-147.75.109.163:40480.service: Deactivated successfully.
Jan 30 05:05:03.950454 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 05:05:03.953864 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
Jan 30 05:05:03.966678 systemd[1]: Started sshd@6-24.144.82.28:22-147.75.109.163:40492.service - OpenSSH per-connection server daemon (147.75.109.163:40492).
Jan 30 05:05:03.969546 systemd-logind[1440]: Removed session 6.
Jan 30 05:05:04.031444 sshd[1644]: Accepted publickey for core from 147.75.109.163 port 40492 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:05:04.041198 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:05:04.059029 systemd-logind[1440]: New session 7 of user core.
Jan 30 05:05:04.071334 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 05:05:04.148695 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 05:05:04.155160 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 05:05:05.039102 systemd-timesyncd[1338]: Contacted time server 216.240.36.24:123 (1.flatcar.pool.ntp.org).
Jan 30 05:05:05.039216 systemd-timesyncd[1338]: Initial clock synchronization to Thu 2025-01-30 05:05:05.038119 UTC.
Jan 30 05:05:05.040473 systemd-resolved[1321]: Clock change detected. Flushing caches.
Jan 30 05:05:05.393312 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 05:05:05.415378 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 05:05:06.073970 dockerd[1664]: time="2025-01-30T05:05:06.073857095Z" level=info msg="Starting up"
Jan 30 05:05:06.251777 dockerd[1664]: time="2025-01-30T05:05:06.251127672Z" level=info msg="Loading containers: start."
Jan 30 05:05:06.440490 kernel: Initializing XFRM netlink socket
Jan 30 05:05:06.565218 systemd-networkd[1367]: docker0: Link UP
Jan 30 05:05:06.603281 dockerd[1664]: time="2025-01-30T05:05:06.603213952Z" level=info msg="Loading containers: done."
Jan 30 05:05:06.627582 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4060325339-merged.mount: Deactivated successfully.
Jan 30 05:05:06.631025 dockerd[1664]: time="2025-01-30T05:05:06.630799184Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 05:05:06.631899 dockerd[1664]: time="2025-01-30T05:05:06.631269057Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 30 05:05:06.631899 dockerd[1664]: time="2025-01-30T05:05:06.631571713Z" level=info msg="Daemon has completed initialization"
Jan 30 05:05:06.696547 dockerd[1664]: time="2025-01-30T05:05:06.696059212Z" level=info msg="API listen on /run/docker.sock"
Jan 30 05:05:06.696351 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 05:05:07.894092 containerd[1460]: time="2025-01-30T05:05:07.893991462Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 30 05:05:08.571547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212224432.mount: Deactivated successfully.
Jan 30 05:05:10.316462 containerd[1460]: time="2025-01-30T05:05:10.315215338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:10.317088 containerd[1460]: time="2025-01-30T05:05:10.316503555Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012"
Jan 30 05:05:10.317922 containerd[1460]: time="2025-01-30T05:05:10.317859839Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:10.323165 containerd[1460]: time="2025-01-30T05:05:10.323094299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:10.325003 containerd[1460]: time="2025-01-30T05:05:10.324933854Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.430873299s"
Jan 30 05:05:10.325306 containerd[1460]: time="2025-01-30T05:05:10.325277379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 30 05:05:10.379421 containerd[1460]: time="2025-01-30T05:05:10.379360545Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 30 05:05:10.936708 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Jan 30 05:05:12.064227 containerd[1460]: time="2025-01-30T05:05:12.064156860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:12.065440 containerd[1460]: time="2025-01-30T05:05:12.064875934Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745"
Jan 30 05:05:12.066204 containerd[1460]: time="2025-01-30T05:05:12.066099540Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:12.070166 containerd[1460]: time="2025-01-30T05:05:12.069645400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:12.071243 containerd[1460]: time="2025-01-30T05:05:12.071179398Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.691510724s"
Jan 30 05:05:12.071243 containerd[1460]: time="2025-01-30T05:05:12.071245102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 30 05:05:12.120786 containerd[1460]: time="2025-01-30T05:05:12.120734933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 30 05:05:13.413613 containerd[1460]: time="2025-01-30T05:05:13.412504587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:13.415329 containerd[1460]: time="2025-01-30T05:05:13.414951799Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064"
Jan 30 05:05:13.417120 containerd[1460]: time="2025-01-30T05:05:13.417048499Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:13.420784 containerd[1460]: time="2025-01-30T05:05:13.420731793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:13.422605 containerd[1460]: time="2025-01-30T05:05:13.422524065Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.301501791s"
Jan 30 05:05:13.422605 containerd[1460]: time="2025-01-30T05:05:13.422602662Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 30 05:05:13.459212 containerd[1460]: time="2025-01-30T05:05:13.459147354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 30 05:05:14.200716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 05:05:14.210668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:05:14.393667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:05:14.410248 (kubelet)[1900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 05:05:14.529517 kubelet[1900]: E0130 05:05:14.529307 1900 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 05:05:14.538316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 05:05:14.538855 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 05:05:14.929183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152888363.mount: Deactivated successfully.
Jan 30 05:05:15.611361 containerd[1460]: time="2025-01-30T05:05:15.611285208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:15.612573 containerd[1460]: time="2025-01-30T05:05:15.612159731Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337"
Jan 30 05:05:15.613239 containerd[1460]: time="2025-01-30T05:05:15.613072768Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:15.617016 containerd[1460]: time="2025-01-30T05:05:15.616917619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:15.618776 containerd[1460]: time="2025-01-30T05:05:15.618701799Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.159288718s"
Jan 30 05:05:15.618776 containerd[1460]: time="2025-01-30T05:05:15.618776161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 30 05:05:15.653712 containerd[1460]: time="2025-01-30T05:05:15.653250892Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 05:05:15.655108 systemd-resolved[1321]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Jan 30 05:05:16.117423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518256954.mount: Deactivated successfully.
Jan 30 05:05:17.181047 containerd[1460]: time="2025-01-30T05:05:17.180969224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:17.182837 containerd[1460]: time="2025-01-30T05:05:17.182460947Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 30 05:05:17.184439 containerd[1460]: time="2025-01-30T05:05:17.183684894Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:17.188453 containerd[1460]: time="2025-01-30T05:05:17.188153237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:17.191274 containerd[1460]: time="2025-01-30T05:05:17.190012748Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.536683415s"
Jan 30 05:05:17.191274 containerd[1460]: time="2025-01-30T05:05:17.190082861Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 30 05:05:17.237034 containerd[1460]: time="2025-01-30T05:05:17.236972234Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 30 05:05:17.712350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138975901.mount: Deactivated successfully.
Jan 30 05:05:17.721455 containerd[1460]: time="2025-01-30T05:05:17.720262670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:17.722479 containerd[1460]: time="2025-01-30T05:05:17.721926610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 30 05:05:17.722479 containerd[1460]: time="2025-01-30T05:05:17.722166846Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:17.726170 containerd[1460]: time="2025-01-30T05:05:17.726064154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:17.728063 containerd[1460]: time="2025-01-30T05:05:17.727487262Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 490.440497ms"
Jan 30 05:05:17.728063 containerd[1460]: time="2025-01-30T05:05:17.727555172Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 30 05:05:17.768876 containerd[1460]: time="2025-01-30T05:05:17.768762519Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 30 05:05:18.303266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1324477191.mount: Deactivated successfully.
Jan 30 05:05:20.274543 containerd[1460]: time="2025-01-30T05:05:20.274448909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:20.276270 containerd[1460]: time="2025-01-30T05:05:20.276181754Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jan 30 05:05:20.279440 containerd[1460]: time="2025-01-30T05:05:20.277510915Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:20.283784 containerd[1460]: time="2025-01-30T05:05:20.283713499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 05:05:20.287474 containerd[1460]: time="2025-01-30T05:05:20.287381662Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.518256483s"
Jan 30 05:05:20.287719 containerd[1460]: time="2025-01-30T05:05:20.287684239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 30 05:05:23.758526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:05:23.770945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:05:23.820698 systemd[1]: Reloading requested from client PID 2085 ('systemctl') (unit session-7.scope)...
Jan 30 05:05:23.820727 systemd[1]: Reloading...
Jan 30 05:05:23.998439 zram_generator::config[2124]: No configuration found.
Jan 30 05:05:24.175731 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 05:05:24.285872 systemd[1]: Reloading finished in 464 ms.
Jan 30 05:05:24.366673 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 05:05:24.366838 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 05:05:24.367747 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:05:24.384093 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 05:05:24.542789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 05:05:24.550565 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 05:05:24.621351 kubelet[2179]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 05:05:24.622286 kubelet[2179]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 05:05:24.622286 kubelet[2179]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 05:05:24.624440 kubelet[2179]: I0130 05:05:24.623554 2179 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 05:05:25.217886 kubelet[2179]: I0130 05:05:25.217809 2179 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 05:05:25.217886 kubelet[2179]: I0130 05:05:25.217870 2179 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 05:05:25.218411 kubelet[2179]: I0130 05:05:25.218359 2179 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 05:05:25.264312 kubelet[2179]: E0130 05:05:25.264248 2179 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://24.144.82.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 24.144.82.28:6443: connect: connection refused
Jan 30 05:05:25.265101 kubelet[2179]: I0130 05:05:25.264428 2179 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 05:05:25.288565 kubelet[2179]: I0130 05:05:25.288524 2179 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 05:05:25.291134 kubelet[2179]: I0130 05:05:25.290550 2179 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 05:05:25.291134 kubelet[2179]: I0130 05:05:25.290670 2179 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-c-6bfcfa9ae9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 05:05:25.291908 kubelet[2179]: I0130 05:05:25.291864 2179 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 05:05:25.292103 kubelet[2179]: I0130 05:05:25.292086 2179 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 05:05:25.292356 kubelet[2179]: I0130 05:05:25.292339 2179 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 05:05:25.293504 kubelet[2179]: I0130 05:05:25.293466 2179 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 05:05:25.293698 kubelet[2179]: I0130 05:05:25.293679 2179 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 05:05:25.293902 kubelet[2179]: I0130 05:05:25.293805 2179 kubelet.go:312] "Adding apiserver pod source"
Jan 30 05:05:25.293902 kubelet[2179]: I0130 05:05:25.293836 2179 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 05:05:25.300940 kubelet[2179]: W0130 05:05:25.300827 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.144.82.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-c-6bfcfa9ae9&limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused
Jan 30 05:05:25.300940 kubelet[2179]: E0130 05:05:25.300951 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://24.144.82.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-c-6bfcfa9ae9&limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused
Jan 30 05:05:25.304909 kubelet[2179]: W0130 05:05:25.303387 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.144.82.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused
Jan 30 05:05:25.304909 kubelet[2179]: E0130 05:05:25.303764 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://24.144.82.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused
Jan 30 05:05:25.304909 kubelet[2179]: I0130 05:05:25.304119 2179 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 05:05:25.307431 kubelet[2179]: I0130 05:05:25.306682 2179 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 05:05:25.307431 kubelet[2179]: W0130 05:05:25.306838 2179 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 05:05:25.310728 kubelet[2179]: I0130 05:05:25.310663 2179 server.go:1264] "Started kubelet"
Jan 30 05:05:25.318598 kubelet[2179]: I0130 05:05:25.318504 2179 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 05:05:25.320526 kubelet[2179]: I0130 05:05:25.320480 2179 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 05:05:25.322976 kubelet[2179]: I0130 05:05:25.322872 2179 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 05:05:25.323709 kubelet[2179]: I0130 05:05:25.323678 2179 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 05:05:25.325950 kubelet[2179]: I0130 05:05:25.325727 2179 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 05:05:25.330922 kubelet[2179]: E0130 05:05:25.329442 2179 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.144.82.28:6443/api/v1/namespaces/default/events\": dial tcp 24.144.82.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-c-6bfcfa9ae9.181f5ffde7ead5fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-c-6bfcfa9ae9,UID:ci-4081.3.0-c-6bfcfa9ae9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-c-6bfcfa9ae9,},FirstTimestamp:2025-01-30 05:05:25.310592508 +0000 UTC m=+0.753410789,LastTimestamp:2025-01-30 05:05:25.310592508 +0000 UTC m=+0.753410789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-c-6bfcfa9ae9,}"
Jan 30 05:05:25.333140 kubelet[2179]: E0130 05:05:25.332657 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-c-6bfcfa9ae9\" not found"
Jan 30 05:05:25.333140 kubelet[2179]: I0130 05:05:25.332725 2179 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 05:05:25.333140 kubelet[2179]: I0130 05:05:25.332883 2179 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 05:05:25.333140 kubelet[2179]: I0130 05:05:25.333001 2179 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 05:05:25.333666 kubelet[2179]: W0130 05:05:25.333591 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.144.82.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused
Jan 30 05:05:25.333765 kubelet[2179]: E0130 05:05:25.333675 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://24.144.82.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused
Jan 30 05:05:25.334043 kubelet[2179]: E0130 05:05:25.333996 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.82.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-c-6bfcfa9ae9?timeout=10s\": dial tcp 24.144.82.28:6443: connect: connection refused" interval="200ms"
Jan 30 05:05:25.342719 kubelet[2179]: I0130 05:05:25.342652 2179 factory.go:221] Registration of the systemd container factory successfully
Jan 30 05:05:25.344240 kubelet[2179]: I0130 05:05:25.342828 2179 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 05:05:25.347436 kubelet[2179]: I0130 05:05:25.347355 2179 factory.go:221] Registration of the containerd container factory successfully
Jan 30 05:05:25.385050 kubelet[2179]: I0130 05:05:25.385009 2179 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 05:05:25.385050 kubelet[2179]: I0130 05:05:25.385037 2179 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 05:05:25.385379 kubelet[2179]: I0130 05:05:25.385083 2179 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 05:05:25.386510 kubelet[2179]: I0130 05:05:25.386435 2179 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 05:05:25.388187 kubelet[2179]: I0130 05:05:25.387605 2179 policy_none.go:49] "None policy: Start"
Jan 30 05:05:25.390638 kubelet[2179]: I0130 05:05:25.390462 2179 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Jan 30 05:05:25.390638 kubelet[2179]: I0130 05:05:25.390522 2179 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:05:25.390638 kubelet[2179]: I0130 05:05:25.390551 2179 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:05:25.390638 kubelet[2179]: E0130 05:05:25.390612 2179 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:05:25.398880 kubelet[2179]: W0130 05:05:25.398707 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.144.82.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:25.398880 kubelet[2179]: E0130 05:05:25.398806 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://24.144.82.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:25.399900 kubelet[2179]: I0130 05:05:25.399342 2179 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:05:25.399900 kubelet[2179]: I0130 05:05:25.399401 2179 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:05:25.411273 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 05:05:25.431759 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 05:05:25.436653 kubelet[2179]: I0130 05:05:25.436390 2179 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.439834 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 05:05:25.440684 kubelet[2179]: E0130 05:05:25.440534 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.144.82.28:6443/api/v1/nodes\": dial tcp 24.144.82.28:6443: connect: connection refused" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.454933 kubelet[2179]: I0130 05:05:25.453040 2179 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:05:25.454933 kubelet[2179]: I0130 05:05:25.453420 2179 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:05:25.454933 kubelet[2179]: I0130 05:05:25.453621 2179 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:05:25.462627 kubelet[2179]: E0130 05:05:25.462588 2179 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-c-6bfcfa9ae9\" not found" Jan 30 05:05:25.491662 kubelet[2179]: I0130 05:05:25.491471 2179 topology_manager.go:215] "Topology Admit Handler" podUID="72455972bf69328ad219f67b4cf3b3d7" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.493291 kubelet[2179]: I0130 05:05:25.493245 2179 topology_manager.go:215] "Topology Admit Handler" podUID="573990840ade09706ea213805c510025" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.496474 kubelet[2179]: I0130 05:05:25.495805 2179 topology_manager.go:215] "Topology Admit Handler" podUID="c5fa574f50743a80c23f91450cf45bcb" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.507334 systemd[1]: Created slice kubepods-burstable-pod72455972bf69328ad219f67b4cf3b3d7.slice - libcontainer container kubepods-burstable-pod72455972bf69328ad219f67b4cf3b3d7.slice. Jan 30 05:05:25.523353 systemd[1]: Created slice kubepods-burstable-podc5fa574f50743a80c23f91450cf45bcb.slice - libcontainer container kubepods-burstable-podc5fa574f50743a80c23f91450cf45bcb.slice. Jan 30 05:05:25.533873 kubelet[2179]: I0130 05:05:25.533800 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72455972bf69328ad219f67b4cf3b3d7-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"72455972bf69328ad219f67b4cf3b3d7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.535843 kubelet[2179]: E0130 05:05:25.535783 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.82.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-c-6bfcfa9ae9?timeout=10s\": dial tcp 24.144.82.28:6443: connect: connection refused" interval="400ms" Jan 30 05:05:25.547824 systemd[1]: Created slice kubepods-burstable-pod573990840ade09706ea213805c510025.slice - libcontainer container kubepods-burstable-pod573990840ade09706ea213805c510025.slice. 
Jan 30 05:05:25.634626 kubelet[2179]: I0130 05:05:25.634549 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5fa574f50743a80c23f91450cf45bcb-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"c5fa574f50743a80c23f91450cf45bcb\") " pod="kube-system/kube-scheduler-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.635133 kubelet[2179]: I0130 05:05:25.634782 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72455972bf69328ad219f67b4cf3b3d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"72455972bf69328ad219f67b4cf3b3d7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.635133 kubelet[2179]: I0130 05:05:25.634841 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.635133 kubelet[2179]: I0130 05:05:25.634872 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.635133 kubelet[2179]: I0130 05:05:25.634901 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72455972bf69328ad219f67b4cf3b3d7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"72455972bf69328ad219f67b4cf3b3d7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.635133 kubelet[2179]: I0130 05:05:25.634990 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.635346 kubelet[2179]: I0130 05:05:25.635046 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.635346 kubelet[2179]: I0130 05:05:25.635072 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.643593 kubelet[2179]: I0130 05:05:25.643470 2179 kubelet_node_status.go:73] 
"Attempting to register node" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.644389 kubelet[2179]: E0130 05:05:25.644326 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.144.82.28:6443/api/v1/nodes\": dial tcp 24.144.82.28:6443: connect: connection refused" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:25.818372 kubelet[2179]: E0130 05:05:25.817655 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:25.818968 containerd[1460]: time="2025-01-30T05:05:25.818756018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9,Uid:72455972bf69328ad219f67b4cf3b3d7,Namespace:kube-system,Attempt:0,}" Jan 30 05:05:25.842645 kubelet[2179]: E0130 05:05:25.841886 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:25.851216 containerd[1460]: time="2025-01-30T05:05:25.851138290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-c-6bfcfa9ae9,Uid:c5fa574f50743a80c23f91450cf45bcb,Namespace:kube-system,Attempt:0,}" Jan 30 05:05:25.853288 kubelet[2179]: E0130 05:05:25.853199 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:25.854329 containerd[1460]: time="2025-01-30T05:05:25.853972654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9,Uid:573990840ade09706ea213805c510025,Namespace:kube-system,Attempt:0,}" Jan 30 05:05:25.936983 kubelet[2179]: E0130 05:05:25.936918 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.82.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-c-6bfcfa9ae9?timeout=10s\": dial tcp 24.144.82.28:6443: connect: connection refused" interval="800ms" Jan 30 05:05:26.045874 kubelet[2179]: I0130 05:05:26.045794 2179 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:26.046695 kubelet[2179]: E0130 05:05:26.046628 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.144.82.28:6443/api/v1/nodes\": dial tcp 24.144.82.28:6443: connect: connection refused" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:26.084642 kubelet[2179]: E0130 05:05:26.084297 2179 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.144.82.28:6443/api/v1/namespaces/default/events\": dial tcp 24.144.82.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-c-6bfcfa9ae9.181f5ffde7ead5fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-c-6bfcfa9ae9,UID:ci-4081.3.0-c-6bfcfa9ae9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-c-6bfcfa9ae9,},FirstTimestamp:2025-01-30 05:05:25.310592508 +0000 UTC m=+0.753410789,LastTimestamp:2025-01-30 05:05:25.310592508 +0000 UTC m=+0.753410789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-c-6bfcfa9ae9,}" Jan 30 05:05:26.196243 kubelet[2179]: W0130 05:05:26.196123 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.144.82.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:26.196243 kubelet[2179]: E0130 05:05:26.196175 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://24.144.82.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:26.257995 kubelet[2179]: W0130 05:05:26.257880 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.144.82.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:26.257995 kubelet[2179]: E0130 05:05:26.257954 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://24.144.82.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:26.288807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3507001607.mount: Deactivated successfully. Jan 30 05:05:26.295981 containerd[1460]: time="2025-01-30T05:05:26.295876740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:05:26.297888 containerd[1460]: time="2025-01-30T05:05:26.297785030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:05:26.298811 containerd[1460]: time="2025-01-30T05:05:26.298609674Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:05:26.301444 containerd[1460]: time="2025-01-30T05:05:26.301193448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:05:26.303145 containerd[1460]: time="2025-01-30T05:05:26.301954913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 05:05:26.303145 containerd[1460]: time="2025-01-30T05:05:26.302066134Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:05:26.308201 containerd[1460]: time="2025-01-30T05:05:26.308073522Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:05:26.309784 containerd[1460]: time="2025-01-30T05:05:26.309435661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 455.347529ms" Jan 30 
05:05:26.312486 containerd[1460]: time="2025-01-30T05:05:26.311901936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 460.371757ms" Jan 30 05:05:26.316445 containerd[1460]: time="2025-01-30T05:05:26.315681133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:05:26.318414 containerd[1460]: time="2025-01-30T05:05:26.318073247Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 499.201384ms" Jan 30 05:05:26.386061 kubelet[2179]: W0130 05:05:26.381622 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.144.82.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:26.386061 kubelet[2179]: E0130 05:05:26.381692 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://24.144.82.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:26.514324 containerd[1460]: time="2025-01-30T05:05:26.514133741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:05:26.514930 containerd[1460]: time="2025-01-30T05:05:26.514725554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:05:26.515945 containerd[1460]: time="2025-01-30T05:05:26.514971536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:26.517506 containerd[1460]: time="2025-01-30T05:05:26.517345409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:26.524844 containerd[1460]: time="2025-01-30T05:05:26.524443873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:05:26.524844 containerd[1460]: time="2025-01-30T05:05:26.524544824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:05:26.524844 containerd[1460]: time="2025-01-30T05:05:26.524569666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:26.527202 containerd[1460]: time="2025-01-30T05:05:26.526312898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:05:26.527202 containerd[1460]: time="2025-01-30T05:05:26.526447537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:05:26.527202 containerd[1460]: time="2025-01-30T05:05:26.526474462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:26.527202 containerd[1460]: time="2025-01-30T05:05:26.526627341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:26.530636 containerd[1460]: time="2025-01-30T05:05:26.528590680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:26.567689 systemd[1]: Started cri-containerd-69e3af3c3578de7dde33be262ee6e9ceedf11e0dc83216651d855642b9e13405.scope - libcontainer container 69e3af3c3578de7dde33be262ee6e9ceedf11e0dc83216651d855642b9e13405. Jan 30 05:05:26.577713 systemd[1]: Started cri-containerd-f4e7f080c8472fb1ba93e15d516f01b4e10e6933582e9e8f8b48c873a19ceff2.scope - libcontainer container f4e7f080c8472fb1ba93e15d516f01b4e10e6933582e9e8f8b48c873a19ceff2. Jan 30 05:05:26.613709 systemd[1]: Started cri-containerd-9581dbc56588f1dcfd6d1b67990735870006cd7a502f8b2520525f64f69235b6.scope - libcontainer container 9581dbc56588f1dcfd6d1b67990735870006cd7a502f8b2520525f64f69235b6. Jan 30 05:05:26.665599 kubelet[2179]: W0130 05:05:26.663789 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.144.82.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-c-6bfcfa9ae9&limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:26.665599 kubelet[2179]: E0130 05:05:26.664036 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://24.144.82.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-c-6bfcfa9ae9&limit=500&resourceVersion=0": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:26.708880 containerd[1460]: time="2025-01-30T05:05:26.708810855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9,Uid:573990840ade09706ea213805c510025,Namespace:kube-system,Attempt:0,} returns sandbox id \"69e3af3c3578de7dde33be262ee6e9ceedf11e0dc83216651d855642b9e13405\"" Jan 30 05:05:26.721281 containerd[1460]: time="2025-01-30T05:05:26.721224801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-c-6bfcfa9ae9,Uid:c5fa574f50743a80c23f91450cf45bcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4e7f080c8472fb1ba93e15d516f01b4e10e6933582e9e8f8b48c873a19ceff2\"" Jan 30 05:05:26.725841 kubelet[2179]: E0130 05:05:26.724720 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:26.730822 kubelet[2179]: E0130 05:05:26.730778 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:26.736980 containerd[1460]: time="2025-01-30T05:05:26.736896573Z" level=info msg="CreateContainer within sandbox 
\"f4e7f080c8472fb1ba93e15d516f01b4e10e6933582e9e8f8b48c873a19ceff2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 05:05:26.737265 containerd[1460]: time="2025-01-30T05:05:26.736920897Z" level=info msg="CreateContainer within sandbox \"69e3af3c3578de7dde33be262ee6e9ceedf11e0dc83216651d855642b9e13405\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 05:05:26.738353 kubelet[2179]: E0130 05:05:26.738076 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.82.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-c-6bfcfa9ae9?timeout=10s\": dial tcp 24.144.82.28:6443: connect: connection refused" interval="1.6s" Jan 30 05:05:26.753943 containerd[1460]: time="2025-01-30T05:05:26.753892006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9,Uid:72455972bf69328ad219f67b4cf3b3d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9581dbc56588f1dcfd6d1b67990735870006cd7a502f8b2520525f64f69235b6\"" Jan 30 05:05:26.755751 kubelet[2179]: E0130 05:05:26.755710 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:26.760274 containerd[1460]: time="2025-01-30T05:05:26.760120259Z" level=info msg="CreateContainer within sandbox \"f4e7f080c8472fb1ba93e15d516f01b4e10e6933582e9e8f8b48c873a19ceff2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"22049fb0687e9990f269e4b17aa6348b1b6f5b01dc373229f156b9adda34e012\"" Jan 30 05:05:26.762321 containerd[1460]: time="2025-01-30T05:05:26.762246108Z" level=info msg="CreateContainer within sandbox \"9581dbc56588f1dcfd6d1b67990735870006cd7a502f8b2520525f64f69235b6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 05:05:26.763012 containerd[1460]: time="2025-01-30T05:05:26.762959869Z" level=info msg="StartContainer for \"22049fb0687e9990f269e4b17aa6348b1b6f5b01dc373229f156b9adda34e012\"" Jan 30 05:05:26.768893 containerd[1460]: time="2025-01-30T05:05:26.768803345Z" level=info msg="CreateContainer within sandbox \"69e3af3c3578de7dde33be262ee6e9ceedf11e0dc83216651d855642b9e13405\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d0a51ad82c0ef7fc50b8ba79f33f784fcc392f047fe311a11cc3750b99e4f32a\"" Jan 30 05:05:26.770453 containerd[1460]: time="2025-01-30T05:05:26.769584031Z" level=info msg="StartContainer for \"d0a51ad82c0ef7fc50b8ba79f33f784fcc392f047fe311a11cc3750b99e4f32a\"" Jan 30 05:05:26.791009 containerd[1460]: time="2025-01-30T05:05:26.790922323Z" level=info msg="CreateContainer within sandbox \"9581dbc56588f1dcfd6d1b67990735870006cd7a502f8b2520525f64f69235b6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"71debfab74a3bd5f1118d845a0434895eb2ddb3430c21680f2e109e5d714516d\"" Jan 30 05:05:26.792039 containerd[1460]: time="2025-01-30T05:05:26.791990245Z" level=info msg="StartContainer for \"71debfab74a3bd5f1118d845a0434895eb2ddb3430c21680f2e109e5d714516d\"" Jan 30 05:05:26.818628 systemd[1]: Started cri-containerd-22049fb0687e9990f269e4b17aa6348b1b6f5b01dc373229f156b9adda34e012.scope - libcontainer container 22049fb0687e9990f269e4b17aa6348b1b6f5b01dc373229f156b9adda34e012. 
Jan 30 05:05:26.839833 systemd[1]: Started cri-containerd-d0a51ad82c0ef7fc50b8ba79f33f784fcc392f047fe311a11cc3750b99e4f32a.scope - libcontainer container d0a51ad82c0ef7fc50b8ba79f33f784fcc392f047fe311a11cc3750b99e4f32a. Jan 30 05:05:26.857167 kubelet[2179]: I0130 05:05:26.855973 2179 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:26.857167 kubelet[2179]: E0130 05:05:26.856856 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://24.144.82.28:6443/api/v1/nodes\": dial tcp 24.144.82.28:6443: connect: connection refused" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:26.888735 systemd[1]: Started cri-containerd-71debfab74a3bd5f1118d845a0434895eb2ddb3430c21680f2e109e5d714516d.scope - libcontainer container 71debfab74a3bd5f1118d845a0434895eb2ddb3430c21680f2e109e5d714516d. Jan 30 05:05:26.930553 containerd[1460]: time="2025-01-30T05:05:26.929957453Z" level=info msg="StartContainer for \"22049fb0687e9990f269e4b17aa6348b1b6f5b01dc373229f156b9adda34e012\" returns successfully" Jan 30 05:05:26.981275 containerd[1460]: time="2025-01-30T05:05:26.980934427Z" level=info msg="StartContainer for \"d0a51ad82c0ef7fc50b8ba79f33f784fcc392f047fe311a11cc3750b99e4f32a\" returns successfully" Jan 30 05:05:27.019611 containerd[1460]: time="2025-01-30T05:05:27.019545168Z" level=info msg="StartContainer for \"71debfab74a3bd5f1118d845a0434895eb2ddb3430c21680f2e109e5d714516d\" returns successfully" Jan 30 05:05:27.319781 kubelet[2179]: E0130 05:05:27.319612 2179 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://24.144.82.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 24.144.82.28:6443: connect: connection refused Jan 30 05:05:27.413604 kubelet[2179]: E0130 05:05:27.413300 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:27.421847 kubelet[2179]: E0130 05:05:27.419123 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:27.426232 kubelet[2179]: E0130 05:05:27.426185 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:28.426446 kubelet[2179]: E0130 05:05:28.425802 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:28.459884 kubelet[2179]: I0130 05:05:28.459209 2179 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:29.584838 kubelet[2179]: E0130 05:05:29.584781 2179 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-c-6bfcfa9ae9\" not found" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:29.658978 kubelet[2179]: I0130 05:05:29.658907 2179 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:30.305673 kubelet[2179]: I0130 05:05:30.305621 2179 apiserver.go:52] "Watching apiserver" Jan 30 05:05:30.333691 
kubelet[2179]: I0130 05:05:30.333607 2179 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:05:31.171493 kubelet[2179]: W0130 05:05:31.169729 2179 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:05:31.173118 kubelet[2179]: E0130 05:05:31.172831 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:31.433011 kubelet[2179]: E0130 05:05:31.432766 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:32.012059 systemd[1]: Reloading requested from client PID 2452 ('systemctl') (unit session-7.scope)... Jan 30 05:05:32.012086 systemd[1]: Reloading... Jan 30 05:05:32.218464 zram_generator::config[2497]: No configuration found. Jan 30 05:05:32.397429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:05:32.530586 systemd[1]: Reloading finished in 517 ms. Jan 30 05:05:32.603022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:05:32.604441 kubelet[2179]: I0130 05:05:32.604349 2179 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:05:32.620587 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 05:05:32.620987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:05:32.621081 systemd[1]: kubelet.service: Consumed 1.254s CPU time, 112.0M memory peak, 0B memory swap peak. Jan 30 05:05:32.637935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:05:32.835854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:05:32.845226 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:05:32.931927 kubelet[2542]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:05:32.931927 kubelet[2542]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 05:05:32.931927 kubelet[2542]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
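The daemon reload above again rewrites docker.socket's legacy ListenStream path, /var/run/docker.sock → /run/docker.sock, since /var/run is only a symlink to /run on modern systems. A sketch of that normalization, assuming a plain prefix rewrite (systemd's own handling may differ in edge cases):

package main

import (
	"fmt"
	"strings"
)

// normalizeLegacyRunPath rewrites paths below the legacy /var/run directory
// to /run, the way systemd reports doing for docker.socket in the entries
// above. Assumes a simple prefix substitution.
func normalizeLegacyRunPath(p string) string {
	if p == "/var/run" || strings.HasPrefix(p, "/var/run/") {
		return "/run" + strings.TrimPrefix(p, "/var/run")
	}
	return p
}

func main() {
	fmt.Println(normalizeLegacyRunPath("/var/run/docker.sock")) // /run/docker.sock
	fmt.Println(normalizeLegacyRunPath("/run/crio/crio.sock"))  // unchanged
}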
Jan 30 05:05:32.933535 kubelet[2542]: I0130 05:05:32.932031 2542 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:05:32.944886 kubelet[2542]: I0130 05:05:32.944824 2542 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 05:05:32.944886 kubelet[2542]: I0130 05:05:32.944870 2542 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:05:32.945385 kubelet[2542]: I0130 05:05:32.945343 2542 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 05:05:32.951727 kubelet[2542]: I0130 05:05:32.951618 2542 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 05:05:32.956065 kubelet[2542]: I0130 05:05:32.955475 2542 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:05:32.985543 kubelet[2542]: I0130 05:05:32.985259 2542 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 05:05:32.985899 kubelet[2542]: I0130 05:05:32.985805 2542 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:05:32.986530 kubelet[2542]: I0130 05:05:32.985877 2542 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-c-6bfcfa9ae9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 05:05:32.986530 kubelet[2542]: I0130 05:05:32.986532 2542 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 05:05:32.986918 kubelet[2542]: I0130 05:05:32.986554 2542 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 05:05:32.986918 kubelet[2542]: I0130 05:05:32.986632 2542 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:05:32.988611 kubelet[2542]: I0130 05:05:32.988181 2542 kubelet.go:400] "Attempting to sync node with API server" Jan 30 05:05:32.988951 kubelet[2542]: I0130 05:05:32.988876 2542 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jan 30 05:05:32.988951 kubelet[2542]: I0130 05:05:32.988922 2542 kubelet.go:312] "Adding apiserver pod source" Jan 30 05:05:32.988951 kubelet[2542]: I0130 05:05:32.988954 2542 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:05:32.992242 kubelet[2542]: I0130 05:05:32.991994 2542 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 05:05:32.996473 kubelet[2542]: I0130 05:05:32.996269 2542 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:05:32.998467 kubelet[2542]: I0130 05:05:32.997481 2542 server.go:1264] "Started kubelet" Jan 30 05:05:33.002056 kubelet[2542]: I0130 05:05:33.001975 2542 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:05:33.024024 kubelet[2542]: I0130 05:05:33.023781 2542 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:05:33.036812 kubelet[2542]: I0130 05:05:33.036709 2542 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:05:33.037271 kubelet[2542]: I0130 05:05:33.037238 2542 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:05:33.040650 kubelet[2542]: I0130 05:05:33.040568 2542 server.go:455] "Adding debug handlers to kubelet server" Jan 30 05:05:33.051653 kubelet[2542]: I0130 05:05:33.051609 2542 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 05:05:33.056150 kubelet[2542]: I0130 05:05:33.056098 2542 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:05:33.057700 kubelet[2542]: I0130 05:05:33.057665 2542 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:05:33.062037 kubelet[2542]: I0130 05:05:33.061632 2542 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:05:33.062037 kubelet[2542]: I0130 05:05:33.061830 2542 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:05:33.065812 kubelet[2542]: I0130 05:05:33.064023 2542 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:05:33.072565 kubelet[2542]: E0130 05:05:33.071516 2542 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:05:33.084449 kubelet[2542]: I0130 05:05:33.083588 2542 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:05:33.093124 kubelet[2542]: I0130 05:05:33.091520 2542 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 05:05:33.096728 kubelet[2542]: I0130 05:05:33.096670 2542 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:05:33.096926 kubelet[2542]: I0130 05:05:33.096755 2542 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:05:33.096926 kubelet[2542]: E0130 05:05:33.096847 2542 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:05:33.098629 sudo[2564]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 05:05:33.099239 sudo[2564]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 05:05:33.156476 kubelet[2542]: I0130 05:05:33.154683 2542 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.197576 kubelet[2542]: I0130 05:05:33.197537 2542 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.198783 kubelet[2542]: I0130 05:05:33.198492 2542 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.199124 kubelet[2542]: E0130 05:05:33.197681 2542 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 05:05:33.229818 kubelet[2542]: I0130 05:05:33.229257 2542 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 05:05:33.229818 kubelet[2542]: I0130 05:05:33.229289 2542 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 05:05:33.229818 kubelet[2542]: I0130 05:05:33.229323 2542 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:05:33.229818 kubelet[2542]: I0130 05:05:33.229646 2542 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 05:05:33.229818 kubelet[2542]: I0130 05:05:33.229668 2542 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 05:05:33.229818 kubelet[2542]: I0130 05:05:33.229698 2542 policy_none.go:49] "None policy: Start" Jan 30 05:05:33.233090 kubelet[2542]: I0130 05:05:33.232450 2542 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:05:33.233090 kubelet[2542]: I0130 05:05:33.232496 2542 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:05:33.233971 kubelet[2542]: I0130 05:05:33.233671 2542 state_mem.go:75] "Updated machine memory state" Jan 30 05:05:33.247672 kubelet[2542]: I0130 05:05:33.246486 2542 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:05:33.247672 kubelet[2542]: I0130 05:05:33.246744 2542 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:05:33.247672 kubelet[2542]: I0130 05:05:33.246918 2542 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:05:33.399754 kubelet[2542]: I0130 05:05:33.399596 2542 topology_manager.go:215] "Topology Admit Handler" podUID="72455972bf69328ad219f67b4cf3b3d7" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.399754 kubelet[2542]: I0130 05:05:33.399751 2542 topology_manager.go:215] "Topology Admit Handler" podUID="573990840ade09706ea213805c510025" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.399930 kubelet[2542]: I0130 05:05:33.399817 2542 topology_manager.go:215] "Topology Admit Handler" 
podUID="c5fa574f50743a80c23f91450cf45bcb" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.422087 kubelet[2542]: W0130 05:05:33.421959 2542 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:05:33.422811 kubelet[2542]: W0130 05:05:33.422515 2542 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:05:33.422811 kubelet[2542]: W0130 05:05:33.422654 2542 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:05:33.423338 kubelet[2542]: E0130 05:05:33.423249 2542 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.464571 kubelet[2542]: I0130 05:05:33.464496 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.464738 kubelet[2542]: I0130 05:05:33.464624 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.464738 kubelet[2542]: I0130 05:05:33.464683 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5fa574f50743a80c23f91450cf45bcb-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"c5fa574f50743a80c23f91450cf45bcb\") " pod="kube-system/kube-scheduler-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.464738 kubelet[2542]: I0130 05:05:33.464711 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/72455972bf69328ad219f67b4cf3b3d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"72455972bf69328ad219f67b4cf3b3d7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.464738 kubelet[2542]: I0130 05:05:33.464739 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/72455972bf69328ad219f67b4cf3b3d7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"72455972bf69328ad219f67b4cf3b3d7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.464951 kubelet[2542]: I0130 05:05:33.464763 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: 
\"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.464951 kubelet[2542]: I0130 05:05:33.464797 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.464951 kubelet[2542]: I0130 05:05:33.464837 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/573990840ade09706ea213805c510025-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"573990840ade09706ea213805c510025\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.464951 kubelet[2542]: I0130 05:05:33.464864 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/72455972bf69328ad219f67b4cf3b3d7-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9\" (UID: \"72455972bf69328ad219f67b4cf3b3d7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:33.728175 kubelet[2542]: E0130 05:05:33.728112 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:33.730106 kubelet[2542]: E0130 05:05:33.728850 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:33.730106 kubelet[2542]: E0130 05:05:33.729286 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:33.990978 kubelet[2542]: I0130 05:05:33.990845 2542 apiserver.go:52] "Watching apiserver" Jan 30 05:05:34.040697 sudo[2564]: pam_unix(sudo:session): session closed for user root Jan 30 05:05:34.057429 kubelet[2542]: I0130 05:05:34.057302 2542 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:05:34.174520 kubelet[2542]: E0130 05:05:34.170989 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:34.180825 kubelet[2542]: E0130 05:05:34.180766 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:34.194644 kubelet[2542]: W0130 05:05:34.194591 2542 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:05:34.194855 kubelet[2542]: E0130 05:05:34.194684 2542 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" Jan 30 05:05:34.197897 kubelet[2542]: E0130 05:05:34.197849 2542 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:34.254830 kubelet[2542]: I0130 05:05:34.254649 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-c-6bfcfa9ae9" podStartSLOduration=3.254625118 podStartE2EDuration="3.254625118s" podCreationTimestamp="2025-01-30 05:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:05:34.234756756 +0000 UTC m=+1.378235815" watchObservedRunningTime="2025-01-30 05:05:34.254625118 +0000 UTC m=+1.398104154" Jan 30 05:05:34.254830 kubelet[2542]: I0130 05:05:34.254794 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-c-6bfcfa9ae9" podStartSLOduration=1.254788531 podStartE2EDuration="1.254788531s" podCreationTimestamp="2025-01-30 05:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:05:34.25324405 +0000 UTC m=+1.396723109" watchObservedRunningTime="2025-01-30 05:05:34.254788531 +0000 UTC m=+1.398267590" Jan 30 05:05:34.279367 kubelet[2542]: I0130 05:05:34.279267 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-c-6bfcfa9ae9" podStartSLOduration=1.2792329740000001 podStartE2EDuration="1.279232974s" podCreationTimestamp="2025-01-30 05:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:05:34.27885145 +0000 UTC m=+1.422330513" watchObservedRunningTime="2025-01-30 05:05:34.279232974 +0000 UTC m=+1.422712028" Jan 30 05:05:35.175230 kubelet[2542]: E0130 05:05:35.174772 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:35.175230 kubelet[2542]: E0130 05:05:35.175006 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:35.905071 sudo[1647]: pam_unix(sudo:session): session closed for user root Jan 30 05:05:35.911101 sshd[1644]: pam_unix(sshd:session): session closed for user core Jan 30 05:05:35.920886 systemd[1]: sshd@6-24.144.82.28:22-147.75.109.163:40492.service: Deactivated successfully. Jan 30 05:05:35.924091 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 05:05:35.924369 systemd[1]: session-7.scope: Consumed 6.665s CPU time, 188.7M memory peak, 0B memory swap peak. Jan 30 05:05:35.925632 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Jan 30 05:05:35.929229 systemd-logind[1440]: Removed session 7. 
Jan 30 05:05:36.176510 kubelet[2542]: E0130 05:05:36.176370 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:36.308838 kubelet[2542]: E0130 05:05:36.308190 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:37.178675 kubelet[2542]: E0130 05:05:37.178618 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:40.123630 kubelet[2542]: E0130 05:05:40.123142 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:40.186331 kubelet[2542]: E0130 05:05:40.186248 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:44.386554 kubelet[2542]: E0130 05:05:44.386074 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:45.027425 update_engine[1442]: I20250130 05:05:45.027138 1442 update_attempter.cc:509] Updating boot flags... Jan 30 05:05:45.090516 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2618) Jan 30 05:05:45.175783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2619) Jan 30 05:05:45.208425 kubelet[2542]: E0130 05:05:45.207006 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:45.275458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2619) Jan 30 05:05:46.437830 kubelet[2542]: I0130 05:05:46.437765 2542 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 05:05:46.439516 containerd[1460]: time="2025-01-30T05:05:46.439284391Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 05:05:46.441303 kubelet[2542]: I0130 05:05:46.440366 2542 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 05:05:46.452310 kubelet[2542]: I0130 05:05:46.452231 2542 topology_manager.go:215] "Topology Admit Handler" podUID="9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d" podNamespace="kube-system" podName="kube-proxy-68jp8" Jan 30 05:05:46.466033 kubelet[2542]: I0130 05:05:46.465473 2542 topology_manager.go:215] "Topology Admit Handler" podUID="cd653454-ca48-417f-b59c-b6b05e5af714" podNamespace="kube-system" podName="cilium-mdhdq" Jan 30 05:05:46.469585 systemd[1]: Created slice kubepods-besteffort-pod9f5a3280_9dc2_4ff3_b94e_e91cd4b3ae2d.slice - libcontainer container kubepods-besteffort-pod9f5a3280_9dc2_4ff3_b94e_e91cd4b3ae2d.slice. 
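The recurring dns.go:153 errors above come from the kubelet capping resolv.conf at three nameservers, the classic glibc resolver limit; this droplet's resolv.conf evidently lists more entries than that (note the duplicate 67.207.67.2 in the applied line), so the extras are dropped and the event fires on every sync. A stdlib sketch of the same check, assuming /etc/resolv.conf as input; the constant mirrors the resolver convention rather than quoting kubelet's source:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic glibc resolver limit, enforced by kubelet

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Same shape as the kubelet event: apply the first three, warn.
		fmt.Printf("Nameserver limits exceeded, applied nameserver line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}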
Jan 30 05:05:46.475705 kubelet[2542]: W0130 05:05:46.474069 2542 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.0-c-6bfcfa9ae9" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-c-6bfcfa9ae9' and this object Jan 30 05:05:46.475705 kubelet[2542]: E0130 05:05:46.474135 2542 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.0-c-6bfcfa9ae9" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-c-6bfcfa9ae9' and this object Jan 30 05:05:46.475705 kubelet[2542]: W0130 05:05:46.474199 2542 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-c-6bfcfa9ae9" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-c-6bfcfa9ae9' and this object Jan 30 05:05:46.475705 kubelet[2542]: E0130 05:05:46.474219 2542 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-c-6bfcfa9ae9" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-c-6bfcfa9ae9' and this object Jan 30 05:05:46.495621 systemd[1]: Created slice kubepods-burstable-podcd653454_ca48_417f_b59c_b6b05e5af714.slice - libcontainer container kubepods-burstable-podcd653454_ca48_417f_b59c_b6b05e5af714.slice. 
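Note the two different slice parents in the surrounding lines: kube-proxy lands in kubepods-besteffort-…, while cilium-mdhdq gets kubepods-burstable-…. The parent is chosen by the pod's QoS class (BestEffort when no container sets requests or limits, Guaranteed when every container's requests equal its limits, Burstable otherwise), and the pod UID is embedded with its dashes systemd-escaped to underscores. A sketch of both derivations; the types are simplified stand-ins for the real API structs, and the cilium resource values are illustrative:

package main

import (
	"fmt"
	"strings"
)

// Simplified stand-in for a container's resource spec.
type resources struct {
	requests, limits map[string]string
}

func qosClass(containers []resources) string {
	anySet, allGuaranteed := false, true
	for _, r := range containers {
		if len(r.requests) > 0 || len(r.limits) > 0 {
			anySet = true
		}
		// Simplified: assumes requests are spelled out explicitly.
		// fmt prints maps in sorted key order, so comparison is stable.
		if len(r.limits) == 0 || fmt.Sprint(r.requests) != fmt.Sprint(r.limits) {
			allGuaranteed = false
		}
	}
	switch {
	case !anySet:
		return "besteffort"
	case allGuaranteed:
		return "guaranteed" // such pods sit directly under kubepods.slice
	default:
		return "burstable"
	}
}

// Slice name as in the log: dashes in the UID become underscores.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	proxy := []resources{{}} // no requests/limits anywhere
	cilium := []resources{{requests: map[string]string{"cpu": "100m"}}}
	fmt.Println(sliceName(qosClass(proxy), "9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d"))
	fmt.Println(sliceName(qosClass(cilium), "cd653454-ca48-417f-b59c-b6b05e5af714"))
}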
Jan 30 05:05:46.557439 kubelet[2542]: I0130 05:05:46.557355 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d-xtables-lock\") pod \"kube-proxy-68jp8\" (UID: \"9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d\") " pod="kube-system/kube-proxy-68jp8" Jan 30 05:05:46.557812 kubelet[2542]: I0130 05:05:46.557772 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d-lib-modules\") pod \"kube-proxy-68jp8\" (UID: \"9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d\") " pod="kube-system/kube-proxy-68jp8" Jan 30 05:05:46.558428 kubelet[2542]: I0130 05:05:46.558372 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d-kube-proxy\") pod \"kube-proxy-68jp8\" (UID: \"9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d\") " pod="kube-system/kube-proxy-68jp8" Jan 30 05:05:46.558634 kubelet[2542]: I0130 05:05:46.558608 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmb8g\" (UniqueName: \"kubernetes.io/projected/9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d-kube-api-access-cmb8g\") pod \"kube-proxy-68jp8\" (UID: \"9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d\") " pod="kube-system/kube-proxy-68jp8" Jan 30 05:05:46.659001 kubelet[2542]: I0130 05:05:46.658931 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cni-path\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.660256 kubelet[2542]: I0130 05:05:46.660195 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-host-proc-sys-kernel\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662436 kubelet[2542]: I0130 05:05:46.661150 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-run\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662436 kubelet[2542]: I0130 05:05:46.661233 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd653454-ca48-417f-b59c-b6b05e5af714-clustermesh-secrets\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662436 kubelet[2542]: I0130 05:05:46.661258 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-etc-cni-netd\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662436 kubelet[2542]: I0130 05:05:46.661277 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-lib-modules\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662436 kubelet[2542]: I0130 05:05:46.661318 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-hostproc\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662436 kubelet[2542]: I0130 05:05:46.661334 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-hubble-tls\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662825 kubelet[2542]: I0130 05:05:46.661350 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh9sz\" (UniqueName: \"kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-kube-api-access-kh9sz\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662825 kubelet[2542]: I0130 05:05:46.661378 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-xtables-lock\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662825 kubelet[2542]: I0130 05:05:46.661401 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-config-path\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662825 kubelet[2542]: I0130 05:05:46.661428 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-bpf-maps\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662825 kubelet[2542]: I0130 05:05:46.661468 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-cgroup\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.662825 kubelet[2542]: I0130 05:05:46.661494 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-host-proc-sys-net\") pod \"cilium-mdhdq\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") " pod="kube-system/cilium-mdhdq" Jan 30 05:05:46.713775 kubelet[2542]: I0130 05:05:46.712464 2542 topology_manager.go:215] "Topology Admit Handler" podUID="97e2bb4b-bab9-4dff-b033-161fc213cb6e" podNamespace="kube-system" podName="cilium-operator-599987898-6dj4r" Jan 30 05:05:46.730140 systemd[1]: Created slice kubepods-besteffort-pod97e2bb4b_bab9_4dff_b033_161fc213cb6e.slice - 
libcontainer container kubepods-besteffort-pod97e2bb4b_bab9_4dff_b033_161fc213cb6e.slice. Jan 30 05:05:46.767433 kubelet[2542]: I0130 05:05:46.766981 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds9sp\" (UniqueName: \"kubernetes.io/projected/97e2bb4b-bab9-4dff-b033-161fc213cb6e-kube-api-access-ds9sp\") pod \"cilium-operator-599987898-6dj4r\" (UID: \"97e2bb4b-bab9-4dff-b033-161fc213cb6e\") " pod="kube-system/cilium-operator-599987898-6dj4r" Jan 30 05:05:46.774729 kubelet[2542]: I0130 05:05:46.774569 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97e2bb4b-bab9-4dff-b033-161fc213cb6e-cilium-config-path\") pod \"cilium-operator-599987898-6dj4r\" (UID: \"97e2bb4b-bab9-4dff-b033-161fc213cb6e\") " pod="kube-system/cilium-operator-599987898-6dj4r" Jan 30 05:05:47.664578 kubelet[2542]: E0130 05:05:47.664510 2542 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.665225 kubelet[2542]: E0130 05:05:47.664646 2542 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d-kube-proxy podName:9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d nodeName:}" failed. No retries permitted until 2025-01-30 05:05:48.164614292 +0000 UTC m=+15.308093346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d-kube-proxy") pod "kube-proxy-68jp8" (UID: "9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d") : failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.699349 kubelet[2542]: E0130 05:05:47.698761 2542 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.699349 kubelet[2542]: E0130 05:05:47.698866 2542 projected.go:200] Error preparing data for projected volume kube-api-access-cmb8g for pod kube-system/kube-proxy-68jp8: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.699349 kubelet[2542]: E0130 05:05:47.698973 2542 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d-kube-api-access-cmb8g podName:9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d nodeName:}" failed. No retries permitted until 2025-01-30 05:05:48.198947957 +0000 UTC m=+15.342427018 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cmb8g" (UniqueName: "kubernetes.io/projected/9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d-kube-api-access-cmb8g") pod "kube-proxy-68jp8" (UID: "9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d") : failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.821495 kubelet[2542]: E0130 05:05:47.821298 2542 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.821495 kubelet[2542]: E0130 05:05:47.821366 2542 projected.go:200] Error preparing data for projected volume kube-api-access-kh9sz for pod kube-system/cilium-mdhdq: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.821495 kubelet[2542]: E0130 05:05:47.821473 2542 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-kube-api-access-kh9sz podName:cd653454-ca48-417f-b59c-b6b05e5af714 nodeName:}" failed. No retries permitted until 2025-01-30 05:05:48.321446792 +0000 UTC m=+15.464925847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kh9sz" (UniqueName: "kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-kube-api-access-kh9sz") pod "cilium-mdhdq" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714") : failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.887199 kubelet[2542]: E0130 05:05:47.886853 2542 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.887199 kubelet[2542]: E0130 05:05:47.886932 2542 projected.go:200] Error preparing data for projected volume kube-api-access-ds9sp for pod kube-system/cilium-operator-599987898-6dj4r: failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:47.887199 kubelet[2542]: E0130 05:05:47.887022 2542 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/97e2bb4b-bab9-4dff-b033-161fc213cb6e-kube-api-access-ds9sp podName:97e2bb4b-bab9-4dff-b033-161fc213cb6e nodeName:}" failed. No retries permitted until 2025-01-30 05:05:48.387002068 +0000 UTC m=+15.530481335 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ds9sp" (UniqueName: "kubernetes.io/projected/97e2bb4b-bab9-4dff-b033-161fc213cb6e-kube-api-access-ds9sp") pod "cilium-operator-599987898-6dj4r" (UID: "97e2bb4b-bab9-4dff-b033-161fc213cb6e") : failed to sync configmap cache: timed out waiting for the condition Jan 30 05:05:48.537439 kubelet[2542]: E0130 05:05:48.536941 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:48.537897 containerd[1460]: time="2025-01-30T05:05:48.537842868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6dj4r,Uid:97e2bb4b-bab9-4dff-b033-161fc213cb6e,Namespace:kube-system,Attempt:0,}" Jan 30 05:05:48.569884 containerd[1460]: time="2025-01-30T05:05:48.569572770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:05:48.569884 containerd[1460]: time="2025-01-30T05:05:48.569995119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:05:48.571653 containerd[1460]: time="2025-01-30T05:05:48.571542484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:48.572221 containerd[1460]: time="2025-01-30T05:05:48.572151768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:48.594530 kubelet[2542]: E0130 05:05:48.594274 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:48.595162 containerd[1460]: time="2025-01-30T05:05:48.594979642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68jp8,Uid:9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d,Namespace:kube-system,Attempt:0,}" Jan 30 05:05:48.604317 kubelet[2542]: E0130 05:05:48.604207 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:48.606621 containerd[1460]: time="2025-01-30T05:05:48.606461660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mdhdq,Uid:cd653454-ca48-417f-b59c-b6b05e5af714,Namespace:kube-system,Attempt:0,}" Jan 30 05:05:48.610102 systemd[1]: Started cri-containerd-bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df.scope - libcontainer container bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df. Jan 30 05:05:48.685597 containerd[1460]: time="2025-01-30T05:05:48.681546636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:05:48.685597 containerd[1460]: time="2025-01-30T05:05:48.682972803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:05:48.685597 containerd[1460]: time="2025-01-30T05:05:48.683013059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:48.685597 containerd[1460]: time="2025-01-30T05:05:48.683725866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:48.697790 containerd[1460]: time="2025-01-30T05:05:48.696420526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:05:48.701655 containerd[1460]: time="2025-01-30T05:05:48.699927861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:05:48.701655 containerd[1460]: time="2025-01-30T05:05:48.700031206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:48.701655 containerd[1460]: time="2025-01-30T05:05:48.700906357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:05:48.726888 systemd[1]: Started cri-containerd-1f07504c44fa9abf8fd84c2419d4bc3997bbc59e59aaaf19de1801938fe7174c.scope - libcontainer container 1f07504c44fa9abf8fd84c2419d4bc3997bbc59e59aaaf19de1801938fe7174c. 
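The sandbox ids returned just below pair up with the CreateContainer/StartContainer calls that follow: the kubelet first asks the runtime for a pod sandbox (one runc shim each, which appears to be why the four "loading plugin" lines repeat before every sandbox), then creates and starts the real containers inside it. A toy rendering of that ordering against a fake runtime; the interface is a deliberate simplification of CRI, not its real signatures:

package main

import "fmt"

// Deliberately simplified from the CRI RuntimeService; the real calls
// take rich config protos and a context.
type runtime interface {
	RunPodSandbox(pod string) (sandboxID string)
	CreateContainer(sandboxID, name string) (containerID string)
	StartContainer(containerID string)
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) string {
	f.n++
	id := fmt.Sprintf("sandbox-%d", f.n)
	fmt.Printf("RunPodSandbox for %s returns sandbox id %q\n", pod, id)
	return id
}

func (f *fakeRuntime) CreateContainer(sandboxID, name string) string {
	f.n++
	id := fmt.Sprintf("ctr-%d", f.n)
	fmt.Printf("CreateContainer within sandbox %q for %s returns container id %q\n",
		sandboxID, name, id)
	return id
}

func (f *fakeRuntime) StartContainer(id string) {
	fmt.Printf("StartContainer for %q returns successfully\n", id)
}

func main() {
	var r runtime = &fakeRuntime{}
	// Same order as the kube-proxy-68jp8 lines in this log.
	sb := r.RunPodSandbox("kube-proxy-68jp8")
	ctr := r.CreateContainer(sb, "kube-proxy")
	r.StartContainer(ctr)
}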
Jan 30 05:05:48.752289 containerd[1460]: time="2025-01-30T05:05:48.752219274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6dj4r,Uid:97e2bb4b-bab9-4dff-b033-161fc213cb6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\"" Jan 30 05:05:48.756250 kubelet[2542]: E0130 05:05:48.754385 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:48.756913 systemd[1]: Started cri-containerd-dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74.scope - libcontainer container dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74. Jan 30 05:05:48.763554 containerd[1460]: time="2025-01-30T05:05:48.763466702Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 05:05:48.822543 containerd[1460]: time="2025-01-30T05:05:48.820563450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mdhdq,Uid:cd653454-ca48-417f-b59c-b6b05e5af714,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\"" Jan 30 05:05:48.822772 kubelet[2542]: E0130 05:05:48.822023 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:48.837753 containerd[1460]: time="2025-01-30T05:05:48.837497832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-68jp8,Uid:9f5a3280-9dc2-4ff3-b94e-e91cd4b3ae2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f07504c44fa9abf8fd84c2419d4bc3997bbc59e59aaaf19de1801938fe7174c\"" Jan 30 05:05:48.839970 kubelet[2542]: E0130 05:05:48.839906 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:48.845629 containerd[1460]: time="2025-01-30T05:05:48.845328060Z" level=info msg="CreateContainer within sandbox \"1f07504c44fa9abf8fd84c2419d4bc3997bbc59e59aaaf19de1801938fe7174c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 05:05:48.872741 containerd[1460]: time="2025-01-30T05:05:48.871828526Z" level=info msg="CreateContainer within sandbox \"1f07504c44fa9abf8fd84c2419d4bc3997bbc59e59aaaf19de1801938fe7174c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f92044cac990dd5d19d5531549e2c1c652d79b03c3f7171e92a433de0658eae\"" Jan 30 05:05:48.875596 containerd[1460]: time="2025-01-30T05:05:48.873218282Z" level=info msg="StartContainer for \"1f92044cac990dd5d19d5531549e2c1c652d79b03c3f7171e92a433de0658eae\"" Jan 30 05:05:48.920754 systemd[1]: Started cri-containerd-1f92044cac990dd5d19d5531549e2c1c652d79b03c3f7171e92a433de0658eae.scope - libcontainer container 1f92044cac990dd5d19d5531549e2c1c652d79b03c3f7171e92a433de0658eae. 
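The operator image above is pulled as tag@digest; when both are present the sha256 digest is authoritative and pins the exact manifest bytes (note repo tag "" in the Pulled line further on: the tag is not even recorded). Verifying such a digest is plain hashing; a stdlib sketch over an arbitrary blob, where the manifest bytes are a stand-in:

package main

import (
	"crypto/sha256"
	"fmt"
	"strings"
)

// matchesDigest reports whether blob hashes to a pinned "sha256:<hex>" reference.
func matchesDigest(blob []byte, pinned string) bool {
	sum := fmt.Sprintf("sha256:%x", sha256.Sum256(blob))
	return sum == strings.ToLower(pinned)
}

func main() {
	manifest := []byte(`{"schemaVersion":2}`) // stand-in for real manifest bytes
	fmt.Println(matchesDigest(manifest,
		"sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"))
	// false here, of course: the real digest covers the real manifest.
}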
Jan 30 05:05:48.975696 containerd[1460]: time="2025-01-30T05:05:48.975632232Z" level=info msg="StartContainer for \"1f92044cac990dd5d19d5531549e2c1c652d79b03c3f7171e92a433de0658eae\" returns successfully" Jan 30 05:05:49.220183 kubelet[2542]: E0130 05:05:49.220089 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:50.613880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918307979.mount: Deactivated successfully. Jan 30 05:05:52.391488 containerd[1460]: time="2025-01-30T05:05:52.390965921Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:05:52.393286 containerd[1460]: time="2025-01-30T05:05:52.392904761Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 05:05:52.394596 containerd[1460]: time="2025-01-30T05:05:52.394022809Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:05:52.396610 containerd[1460]: time="2025-01-30T05:05:52.396554919Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.633024018s" Jan 30 05:05:52.396925 containerd[1460]: time="2025-01-30T05:05:52.396888021Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 05:05:52.399091 containerd[1460]: time="2025-01-30T05:05:52.398413030Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 05:05:52.415643 containerd[1460]: time="2025-01-30T05:05:52.415563163Z" level=info msg="CreateContainer within sandbox \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 05:05:52.446722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253343162.mount: Deactivated successfully. Jan 30 05:05:52.448503 containerd[1460]: time="2025-01-30T05:05:52.448053923Z" level=info msg="CreateContainer within sandbox \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3\"" Jan 30 05:05:52.453463 containerd[1460]: time="2025-01-30T05:05:52.452186381Z" level=info msg="StartContainer for \"e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3\"" Jan 30 05:05:52.511810 systemd[1]: Started cri-containerd-e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3.scope - libcontainer container e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3. 
Jan 30 05:05:52.561451 containerd[1460]: time="2025-01-30T05:05:52.560527666Z" level=info msg="StartContainer for \"e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3\" returns successfully" Jan 30 05:05:53.237434 kubelet[2542]: E0130 05:05:53.236290 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:53.238382 kubelet[2542]: I0130 05:05:53.237838 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-68jp8" podStartSLOduration=7.237814676 podStartE2EDuration="7.237814676s" podCreationTimestamp="2025-01-30 05:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:05:49.248485335 +0000 UTC m=+16.391964392" watchObservedRunningTime="2025-01-30 05:05:53.237814676 +0000 UTC m=+20.381293737" Jan 30 05:05:54.249436 kubelet[2542]: E0130 05:05:54.249072 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:59.644323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395928416.mount: Deactivated successfully. Jan 30 05:06:03.424168 containerd[1460]: time="2025-01-30T05:06:03.423988602Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:06:03.425991 containerd[1460]: time="2025-01-30T05:06:03.425895109Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 05:06:03.428175 containerd[1460]: time="2025-01-30T05:06:03.427246492Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:06:03.431191 containerd[1460]: time="2025-01-30T05:06:03.431119374Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.032027121s" Jan 30 05:06:03.431545 containerd[1460]: time="2025-01-30T05:06:03.431505160Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 05:06:03.442680 containerd[1460]: time="2025-01-30T05:06:03.442613169Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 05:06:03.588593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620505694.mount: Deactivated successfully. 
Jan 30 05:06:03.595509 containerd[1460]: time="2025-01-30T05:06:03.595340084Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786\"" Jan 30 05:06:03.599331 containerd[1460]: time="2025-01-30T05:06:03.599168726Z" level=info msg="StartContainer for \"d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786\"" Jan 30 05:06:03.826304 systemd[1]: run-containerd-runc-k8s.io-d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786-runc.xYmAmX.mount: Deactivated successfully. Jan 30 05:06:03.840169 systemd[1]: Started cri-containerd-d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786.scope - libcontainer container d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786. Jan 30 05:06:03.888745 containerd[1460]: time="2025-01-30T05:06:03.888685771Z" level=info msg="StartContainer for \"d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786\" returns successfully" Jan 30 05:06:03.913313 systemd[1]: cri-containerd-d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786.scope: Deactivated successfully. Jan 30 05:06:04.088222 containerd[1460]: time="2025-01-30T05:06:04.056733965Z" level=info msg="shim disconnected" id=d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786 namespace=k8s.io Jan 30 05:06:04.088222 containerd[1460]: time="2025-01-30T05:06:04.087591588Z" level=warning msg="cleaning up after shim disconnected" id=d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786 namespace=k8s.io Jan 30 05:06:04.088222 containerd[1460]: time="2025-01-30T05:06:04.087626290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:06:04.285520 kubelet[2542]: E0130 05:06:04.284299 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:04.292817 containerd[1460]: time="2025-01-30T05:06:04.292754339Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 05:06:04.317506 containerd[1460]: time="2025-01-30T05:06:04.317295063Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4\"" Jan 30 05:06:04.319440 kubelet[2542]: I0130 05:06:04.318755 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-6dj4r" podStartSLOduration=14.677253443 podStartE2EDuration="18.318727543s" podCreationTimestamp="2025-01-30 05:05:46 +0000 UTC" firstStartedPulling="2025-01-30 05:05:48.756596757 +0000 UTC m=+15.900075796" lastFinishedPulling="2025-01-30 05:05:52.398070845 +0000 UTC m=+19.541549896" observedRunningTime="2025-01-30 05:05:53.63177416 +0000 UTC m=+20.775253217" watchObservedRunningTime="2025-01-30 05:06:04.318727543 +0000 UTC m=+31.462206613" Jan 30 05:06:04.320091 containerd[1460]: time="2025-01-30T05:06:04.320050188Z" level=info msg="StartContainer for \"34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4\"" Jan 30 05:06:04.367772 systemd[1]: Started 
cri-containerd-34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4.scope - libcontainer container 34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4. Jan 30 05:06:04.417490 containerd[1460]: time="2025-01-30T05:06:04.417192186Z" level=info msg="StartContainer for \"34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4\" returns successfully" Jan 30 05:06:04.444767 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 05:06:04.445375 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:06:04.445656 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:06:04.455024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:06:04.455937 systemd[1]: cri-containerd-34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4.scope: Deactivated successfully. Jan 30 05:06:04.513064 containerd[1460]: time="2025-01-30T05:06:04.512991412Z" level=info msg="shim disconnected" id=34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4 namespace=k8s.io Jan 30 05:06:04.513849 containerd[1460]: time="2025-01-30T05:06:04.513793236Z" level=warning msg="cleaning up after shim disconnected" id=34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4 namespace=k8s.io Jan 30 05:06:04.514082 containerd[1460]: time="2025-01-30T05:06:04.514036170Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:06:04.532702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:06:04.587016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786-rootfs.mount: Deactivated successfully. Jan 30 05:06:05.337056 kubelet[2542]: E0130 05:06:05.336975 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:05.342581 containerd[1460]: time="2025-01-30T05:06:05.342425994Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 05:06:05.405601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640788782.mount: Deactivated successfully. Jan 30 05:06:05.409275 containerd[1460]: time="2025-01-30T05:06:05.409113604Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914\"" Jan 30 05:06:05.410972 containerd[1460]: time="2025-01-30T05:06:05.410817641Z" level=info msg="StartContainer for \"5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914\"" Jan 30 05:06:05.476831 systemd[1]: Started cri-containerd-5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914.scope - libcontainer container 5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914. Jan 30 05:06:05.532578 containerd[1460]: time="2025-01-30T05:06:05.531953225Z" level=info msg="StartContainer for \"5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914\" returns successfully" Jan 30 05:06:05.535649 systemd[1]: cri-containerd-5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914.scope: Deactivated successfully. 
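The init-container chain running through these lines (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) prepares the host before the agent starts; note how finishing apply-sysctl-overwrites coincides with systemd restarting systemd-sysctl.service above. The two mount steps reduce to mount(2) calls; a sketch with golang.org/x/sys/unix, where the cgroup2 target path is an illustrative guess, not Cilium's actual configuration:

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Roughly what the mount-cgroup and mount-bpf-fs init containers
	// accomplish (needs root; a real implementation checks /proc/mounts
	// first so reruns don't stack duplicate mounts).
	mounts := map[string]string{
		"cgroup2": "/run/cilium/cgroupv2", // illustrative target path
		"bpf":     "/sys/fs/bpf",
	}
	for fstype, target := range mounts {
		if err := os.MkdirAll(target, 0o755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		if err := unix.Mount(fstype, target, fstype, 0, ""); err != nil {
			fmt.Fprintf(os.Stderr, "mount %s on %s: %v\n", fstype, target, err)
		}
	}
}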
Jan 30 05:06:05.578772 containerd[1460]: time="2025-01-30T05:06:05.578372548Z" level=info msg="shim disconnected" id=5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914 namespace=k8s.io Jan 30 05:06:05.578772 containerd[1460]: time="2025-01-30T05:06:05.578728654Z" level=warning msg="cleaning up after shim disconnected" id=5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914 namespace=k8s.io Jan 30 05:06:05.579630 containerd[1460]: time="2025-01-30T05:06:05.579188349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:06:05.585384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914-rootfs.mount: Deactivated successfully. Jan 30 05:06:06.346744 kubelet[2542]: E0130 05:06:06.345460 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:06.352438 containerd[1460]: time="2025-01-30T05:06:06.351288695Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 05:06:06.384339 containerd[1460]: time="2025-01-30T05:06:06.384262415Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183\"" Jan 30 05:06:06.388498 containerd[1460]: time="2025-01-30T05:06:06.385427136Z" level=info msg="StartContainer for \"3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183\"" Jan 30 05:06:06.446871 systemd[1]: Started cri-containerd-3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183.scope - libcontainer container 3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183. Jan 30 05:06:06.493837 systemd[1]: cri-containerd-3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183.scope: Deactivated successfully. Jan 30 05:06:06.497160 containerd[1460]: time="2025-01-30T05:06:06.495504379Z" level=info msg="StartContainer for \"3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183\" returns successfully" Jan 30 05:06:06.534435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183-rootfs.mount: Deactivated successfully. 
Jan 30 05:06:06.536596 containerd[1460]: time="2025-01-30T05:06:06.536090997Z" level=info msg="shim disconnected" id=3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183 namespace=k8s.io Jan 30 05:06:06.537723 containerd[1460]: time="2025-01-30T05:06:06.536876308Z" level=warning msg="cleaning up after shim disconnected" id=3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183 namespace=k8s.io Jan 30 05:06:06.537723 containerd[1460]: time="2025-01-30T05:06:06.536924953Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:06:07.353511 kubelet[2542]: E0130 05:06:07.353440 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:07.362746 containerd[1460]: time="2025-01-30T05:06:07.362253241Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 05:06:07.428205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3151624036.mount: Deactivated successfully. Jan 30 05:06:07.437934 containerd[1460]: time="2025-01-30T05:06:07.437873176Z" level=info msg="CreateContainer within sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33\"" Jan 30 05:06:07.439729 containerd[1460]: time="2025-01-30T05:06:07.439667404Z" level=info msg="StartContainer for \"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33\"" Jan 30 05:06:07.495905 systemd[1]: Started cri-containerd-86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33.scope - libcontainer container 86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33. Jan 30 05:06:07.557911 containerd[1460]: time="2025-01-30T05:06:07.557819195Z" level=info msg="StartContainer for \"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33\" returns successfully" Jan 30 05:06:07.864150 kubelet[2542]: I0130 05:06:07.863962 2542 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 05:06:07.915227 kubelet[2542]: I0130 05:06:07.915089 2542 topology_manager.go:215] "Topology Admit Handler" podUID="10d0804a-eac8-4274-99bd-252eda737991" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zc8rs" Jan 30 05:06:07.930592 kubelet[2542]: I0130 05:06:07.928877 2542 topology_manager.go:215] "Topology Admit Handler" podUID="0144343b-1d92-44cc-87f1-279ff4db2ea3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h485z" Jan 30 05:06:07.929520 systemd[1]: Created slice kubepods-burstable-pod10d0804a_eac8_4274_99bd_252eda737991.slice - libcontainer container kubepods-burstable-pod10d0804a_eac8_4274_99bd_252eda737991.slice. Jan 30 05:06:07.940312 systemd[1]: Created slice kubepods-burstable-pod0144343b_1d92_44cc_87f1_279ff4db2ea3.slice - libcontainer container kubepods-burstable-pod0144343b_1d92_44cc_87f1_279ff4db2ea3.slice. 
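The kubelet messages throughout this log share klog's header layout: a severity letter fused to MMDD, then wall time, PID, and source file:line before the bracket, so "E0130 05:06:07.353440 2542 dns.go:153]" is an Error from January 30 emitted by PID 2542 at dns.go line 153. A regexp sketch that splits the header out, fed one line from this log:

package main

import (
	"fmt"
	"regexp"
)

// klog header: Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `E0130 05:06:07.353440 2542 dns.go:153] "Nameserver limits exceeded"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}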
Jan 30 05:06:07.967188 kubelet[2542]: I0130 05:06:07.967133 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0144343b-1d92-44cc-87f1-279ff4db2ea3-config-volume\") pod \"coredns-7db6d8ff4d-h485z\" (UID: \"0144343b-1d92-44cc-87f1-279ff4db2ea3\") " pod="kube-system/coredns-7db6d8ff4d-h485z" Jan 30 05:06:07.967188 kubelet[2542]: I0130 05:06:07.967188 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfb7n\" (UniqueName: \"kubernetes.io/projected/10d0804a-eac8-4274-99bd-252eda737991-kube-api-access-dfb7n\") pod \"coredns-7db6d8ff4d-zc8rs\" (UID: \"10d0804a-eac8-4274-99bd-252eda737991\") " pod="kube-system/coredns-7db6d8ff4d-zc8rs" Jan 30 05:06:07.967664 kubelet[2542]: I0130 05:06:07.967219 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5dzs\" (UniqueName: \"kubernetes.io/projected/0144343b-1d92-44cc-87f1-279ff4db2ea3-kube-api-access-g5dzs\") pod \"coredns-7db6d8ff4d-h485z\" (UID: \"0144343b-1d92-44cc-87f1-279ff4db2ea3\") " pod="kube-system/coredns-7db6d8ff4d-h485z" Jan 30 05:06:07.967664 kubelet[2542]: I0130 05:06:07.967235 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10d0804a-eac8-4274-99bd-252eda737991-config-volume\") pod \"coredns-7db6d8ff4d-zc8rs\" (UID: \"10d0804a-eac8-4274-99bd-252eda737991\") " pod="kube-system/coredns-7db6d8ff4d-zc8rs" Jan 30 05:06:08.238233 kubelet[2542]: E0130 05:06:08.238060 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:08.241854 containerd[1460]: time="2025-01-30T05:06:08.241353196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zc8rs,Uid:10d0804a-eac8-4274-99bd-252eda737991,Namespace:kube-system,Attempt:0,}" Jan 30 05:06:08.253508 kubelet[2542]: E0130 05:06:08.253366 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:08.257705 containerd[1460]: time="2025-01-30T05:06:08.257651255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h485z,Uid:0144343b-1d92-44cc-87f1-279ff4db2ea3,Namespace:kube-system,Attempt:0,}" Jan 30 05:06:08.386526 kubelet[2542]: E0130 05:06:08.386261 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:08.418290 systemd[1]: run-containerd-runc-k8s.io-86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33-runc.ukefnW.mount: Deactivated successfully. 
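The kube-api-access-* volumes attached in these reconciler lines are projected volumes bundling a service-account token, the cluster CA bundle, and the namespace; the five-character suffix (cmb8g, kh9sz, ds9sp, dfb7n, g5dzs) is generated per pod. From inside a pod the token is a JWT at a fixed path; a stdlib sketch that decodes its claims without verifying the signature, for inspection only:

package main

import (
	"encoding/base64"
	"fmt"
	"os"
	"strings"
)

const tokenPath = "/var/run/secrets/kubernetes.io/serviceaccount/token"

func main() {
	raw, err := os.ReadFile(tokenPath)
	if err != nil {
		panic(err) // only works inside a pod with the volume mounted
	}
	parts := strings.Split(strings.TrimSpace(string(raw)), ".")
	if len(parts) != 3 {
		panic("not a JWT")
	}
	// Claims are the middle segment, base64url without padding.
	claims, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		panic(err)
	}
	fmt.Println(string(claims)) // JSON: issuer, expiry, bound pod, etc.
}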
Jan 30 05:06:08.436983 kubelet[2542]: I0130 05:06:08.435738 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mdhdq" podStartSLOduration=7.824537616 podStartE2EDuration="22.435713127s" podCreationTimestamp="2025-01-30 05:05:46 +0000 UTC" firstStartedPulling="2025-01-30 05:05:48.823579295 +0000 UTC m=+15.967058347" lastFinishedPulling="2025-01-30 05:06:03.43475481 +0000 UTC m=+30.578233858" observedRunningTime="2025-01-30 05:06:08.435246974 +0000 UTC m=+35.578726037" watchObservedRunningTime="2025-01-30 05:06:08.435713127 +0000 UTC m=+35.579192189" Jan 30 05:06:09.387932 kubelet[2542]: E0130 05:06:09.387803 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:10.064605 systemd-networkd[1367]: cilium_host: Link UP Jan 30 05:06:10.068174 systemd-networkd[1367]: cilium_net: Link UP Jan 30 05:06:10.069927 systemd-networkd[1367]: cilium_net: Gained carrier Jan 30 05:06:10.070257 systemd-networkd[1367]: cilium_host: Gained carrier Jan 30 05:06:10.152280 systemd-networkd[1367]: cilium_net: Gained IPv6LL Jan 30 05:06:10.260214 systemd-networkd[1367]: cilium_vxlan: Link UP Jan 30 05:06:10.260259 systemd-networkd[1367]: cilium_vxlan: Gained carrier Jan 30 05:06:10.393242 kubelet[2542]: E0130 05:06:10.392533 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:10.681550 kernel: NET: Registered PF_ALG protocol family Jan 30 05:06:11.031656 systemd-networkd[1367]: cilium_host: Gained IPv6LL Jan 30 05:06:11.544638 systemd-networkd[1367]: cilium_vxlan: Gained IPv6LL Jan 30 05:06:11.794611 systemd[1]: Started sshd@7-24.144.82.28:22-147.75.109.163:42390.service - OpenSSH per-connection server daemon (147.75.109.163:42390). Jan 30 05:06:11.882252 sshd[3691]: Accepted publickey for core from 147.75.109.163 port 42390 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:06:11.886909 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:06:11.900672 systemd-logind[1440]: New session 8 of user core. Jan 30 05:06:11.910921 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 05:06:11.957699 systemd-networkd[1367]: lxc_health: Link UP Jan 30 05:06:11.972861 systemd-networkd[1367]: lxc_health: Gained carrier Jan 30 05:06:12.397730 systemd-networkd[1367]: lxc349d17a25c80: Link UP Jan 30 05:06:12.411884 kernel: eth0: renamed from tmp1f3cf Jan 30 05:06:12.431556 systemd-networkd[1367]: lxc349d17a25c80: Gained carrier Jan 30 05:06:12.464315 systemd-networkd[1367]: lxc49d0c68e15e0: Link UP Jan 30 05:06:12.470507 kernel: eth0: renamed from tmpa8aa5 Jan 30 05:06:12.496027 systemd-networkd[1367]: lxc49d0c68e15e0: Gained carrier Jan 30 05:06:12.617166 kubelet[2542]: E0130 05:06:12.615802 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:06:12.981086 sshd[3691]: pam_unix(sshd:session): session closed for user core Jan 30 05:06:13.001777 systemd[1]: sshd@7-24.144.82.28:22-147.75.109.163:42390.service: Deactivated successfully. Jan 30 05:06:13.007985 systemd[1]: session-8.scope: Deactivated successfully. 
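The interface churn above is Cilium wiring up pod networking: cilium_host/cilium_net and the vxlan overlay device come up first, then each pod gets a veth pair whose host end is an lxc* device and whose peer starts life under a tmp* name and is renamed to eth0 inside the pod's namespace, which is exactly what the two "eth0: renamed from tmpXXX" kernel lines record. A sketch of that pattern using iproute2 via os/exec; the device names and netns are placeholders, and it needs root:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	fmt.Println("+ ip", args)
	out, err := exec.Command("ip", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ip %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// Placeholder names; CNI plugins derive them from the endpoint ID.
	host, tmp, netns := "lxc12345", "tmp12345", "demo-ns"
	steps := [][]string{
		{"netns", "add", netns},
		{"link", "add", host, "type", "veth", "peer", "name", tmp},
		{"link", "set", tmp, "netns", netns}, // move the peer into the pod netns
		{"netns", "exec", netns, "ip", "link", "set", tmp, "name", "eth0"}, // the rename the kernel logs
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println(err)
			return
		}
	}
}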
Jan 30 05:06:13.010095 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit.
Jan 30 05:06:13.013772 systemd-logind[1440]: Removed session 8.
Jan 30 05:06:13.463576 systemd-networkd[1367]: lxc_health: Gained IPv6LL
Jan 30 05:06:13.656004 systemd-networkd[1367]: lxc349d17a25c80: Gained IPv6LL
Jan 30 05:06:13.912802 systemd-networkd[1367]: lxc49d0c68e15e0: Gained IPv6LL
Jan 30 05:06:14.565617 kubelet[2542]: I0130 05:06:14.564531 2542 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 05:06:14.567776 kubelet[2542]: E0130 05:06:14.567722 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:15.416276 kubelet[2542]: E0130 05:06:15.416165 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:18.000197 systemd[1]: Started sshd@8-24.144.82.28:22-147.75.109.163:35580.service - OpenSSH per-connection server daemon (147.75.109.163:35580).
Jan 30 05:06:18.079484 sshd[3770]: Accepted publickey for core from 147.75.109.163 port 35580 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:18.081642 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:18.091699 systemd-logind[1440]: New session 9 of user core.
Jan 30 05:06:18.098263 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 05:06:18.362076 sshd[3770]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:18.369248 systemd[1]: sshd@8-24.144.82.28:22-147.75.109.163:35580.service: Deactivated successfully.
Jan 30 05:06:18.369621 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit.
Jan 30 05:06:18.375232 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 05:06:18.380221 systemd-logind[1440]: Removed session 9.
Jan 30 05:06:18.530063 containerd[1460]: time="2025-01-30T05:06:18.529577930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:06:18.530063 containerd[1460]: time="2025-01-30T05:06:18.529649470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:06:18.530063 containerd[1460]: time="2025-01-30T05:06:18.529662829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:06:18.530063 containerd[1460]: time="2025-01-30T05:06:18.529758522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:06:18.547914 containerd[1460]: time="2025-01-30T05:06:18.546471665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:06:18.549697 containerd[1460]: time="2025-01-30T05:06:18.548319862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:06:18.549697 containerd[1460]: time="2025-01-30T05:06:18.548362590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:06:18.549697 containerd[1460]: time="2025-01-30T05:06:18.548521265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:06:18.595717 systemd[1]: Started cri-containerd-a8aa53b09f37b4387b33868a118b87031cb22bcfeaeb2d7cdc2eb431cf689a45.scope - libcontainer container a8aa53b09f37b4387b33868a118b87031cb22bcfeaeb2d7cdc2eb431cf689a45.
Jan 30 05:06:18.638970 systemd[1]: Started cri-containerd-1f3cf05f2615df5cd587aa66e2681bc6f53295f9abcce03878f0aa29ac79f58c.scope - libcontainer container 1f3cf05f2615df5cd587aa66e2681bc6f53295f9abcce03878f0aa29ac79f58c.
Jan 30 05:06:18.774111 containerd[1460]: time="2025-01-30T05:06:18.773025677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zc8rs,Uid:10d0804a-eac8-4274-99bd-252eda737991,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f3cf05f2615df5cd587aa66e2681bc6f53295f9abcce03878f0aa29ac79f58c\""
Jan 30 05:06:18.775718 kubelet[2542]: E0130 05:06:18.774868 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:18.778926 containerd[1460]: time="2025-01-30T05:06:18.777939754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h485z,Uid:0144343b-1d92-44cc-87f1-279ff4db2ea3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8aa53b09f37b4387b33868a118b87031cb22bcfeaeb2d7cdc2eb431cf689a45\""
Jan 30 05:06:18.780265 kubelet[2542]: E0130 05:06:18.779650 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:18.784335 containerd[1460]: time="2025-01-30T05:06:18.784248228Z" level=info msg="CreateContainer within sandbox \"1f3cf05f2615df5cd587aa66e2681bc6f53295f9abcce03878f0aa29ac79f58c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 05:06:18.785797 containerd[1460]: time="2025-01-30T05:06:18.785739322Z" level=info msg="CreateContainer within sandbox \"a8aa53b09f37b4387b33868a118b87031cb22bcfeaeb2d7cdc2eb431cf689a45\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 05:06:18.821852 containerd[1460]: time="2025-01-30T05:06:18.821525327Z" level=info msg="CreateContainer within sandbox \"1f3cf05f2615df5cd587aa66e2681bc6f53295f9abcce03878f0aa29ac79f58c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e5674cc41581701885e4898d685cf7e96c89242e2155de27fc0e34721494ae7\""
Jan 30 05:06:18.822686 containerd[1460]: time="2025-01-30T05:06:18.822472424Z" level=info msg="CreateContainer within sandbox \"a8aa53b09f37b4387b33868a118b87031cb22bcfeaeb2d7cdc2eb431cf689a45\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55a4f977caf5ef9213a8bd9cbb9723cb5df08a091dd769a411332a884842e3fd\""
Jan 30 05:06:18.824134 containerd[1460]: time="2025-01-30T05:06:18.824090567Z" level=info msg="StartContainer for \"55a4f977caf5ef9213a8bd9cbb9723cb5df08a091dd769a411332a884842e3fd\""
Jan 30 05:06:18.825590 containerd[1460]: time="2025-01-30T05:06:18.824340360Z" level=info msg="StartContainer for \"2e5674cc41581701885e4898d685cf7e96c89242e2155de27fc0e34721494ae7\""
Jan 30 05:06:18.875504 systemd[1]: Started cri-containerd-55a4f977caf5ef9213a8bd9cbb9723cb5df08a091dd769a411332a884842e3fd.scope - libcontainer container 55a4f977caf5ef9213a8bd9cbb9723cb5df08a091dd769a411332a884842e3fd.
Jan 30 05:06:18.895968 systemd[1]: Started cri-containerd-2e5674cc41581701885e4898d685cf7e96c89242e2155de27fc0e34721494ae7.scope - libcontainer container 2e5674cc41581701885e4898d685cf7e96c89242e2155de27fc0e34721494ae7.
Jan 30 05:06:18.933438 containerd[1460]: time="2025-01-30T05:06:18.931607933Z" level=info msg="StartContainer for \"55a4f977caf5ef9213a8bd9cbb9723cb5df08a091dd769a411332a884842e3fd\" returns successfully"
Jan 30 05:06:18.970113 containerd[1460]: time="2025-01-30T05:06:18.970018221Z" level=info msg="StartContainer for \"2e5674cc41581701885e4898d685cf7e96c89242e2155de27fc0e34721494ae7\" returns successfully"
Jan 30 05:06:19.433739 kubelet[2542]: E0130 05:06:19.433667 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:19.440599 kubelet[2542]: E0130 05:06:19.440378 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:19.490573 kubelet[2542]: I0130 05:06:19.490481 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zc8rs" podStartSLOduration=33.490431394 podStartE2EDuration="33.490431394s" podCreationTimestamp="2025-01-30 05:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:06:19.464705525 +0000 UTC m=+46.608184583" watchObservedRunningTime="2025-01-30 05:06:19.490431394 +0000 UTC m=+46.633910452"
Jan 30 05:06:19.515857 kubelet[2542]: I0130 05:06:19.515738 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h485z" podStartSLOduration=33.515705501 podStartE2EDuration="33.515705501s" podCreationTimestamp="2025-01-30 05:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:06:19.513814218 +0000 UTC m=+46.657293310" watchObservedRunningTime="2025-01-30 05:06:19.515705501 +0000 UTC m=+46.659184559"
Jan 30 05:06:20.443691 kubelet[2542]: E0130 05:06:20.442896 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:20.445033 kubelet[2542]: E0130 05:06:20.444937 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:21.447462 kubelet[2542]: E0130 05:06:21.445806 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:21.447462 kubelet[2542]: E0130 05:06:21.446081 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:23.383962 systemd[1]: Started sshd@9-24.144.82.28:22-147.75.109.163:35582.service - OpenSSH per-connection server daemon (147.75.109.163:35582).
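[Editor's note] The CoreDNS startup above is the standard CRI call sequence: RunPodSandbox returns a sandbox id, CreateContainer registers a container inside that sandbox, and StartContainer launches it. A minimal sketch of the same sequence against a CRI socket, assuming the k8s.io/cri-api v1 client and containerd's default endpoint path; the image reference is illustrative, and this is not the kubelet's actual code:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed endpoint: containerd's default CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Pod-level config; names/UID taken from the log entries above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "coredns-7db6d8ff4d-zc8rs",
			Uid:       "10d0804a-eac8-4274-99bd-252eda737991",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}

	// RunPodSandbox returns the sandbox id seen in the log.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer within that sandbox, then StartContainer.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
			// Image tag is an assumption for illustration.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", c.ContainerId, sb.PodSandboxId)
}
```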
Jan 30 05:06:23.464772 sshd[3955]: Accepted publickey for core from 147.75.109.163 port 35582 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:23.467547 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:23.474888 systemd-logind[1440]: New session 10 of user core.
Jan 30 05:06:23.486040 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 05:06:23.719194 sshd[3955]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:23.724165 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit.
Jan 30 05:06:23.726793 systemd[1]: sshd@9-24.144.82.28:22-147.75.109.163:35582.service: Deactivated successfully.
Jan 30 05:06:23.731602 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 05:06:23.733115 systemd-logind[1440]: Removed session 10.
Jan 30 05:06:28.739864 systemd[1]: Started sshd@10-24.144.82.28:22-147.75.109.163:58044.service - OpenSSH per-connection server daemon (147.75.109.163:58044).
Jan 30 05:06:28.800639 sshd[3970]: Accepted publickey for core from 147.75.109.163 port 58044 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:28.803151 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:28.810543 systemd-logind[1440]: New session 11 of user core.
Jan 30 05:06:28.821831 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 05:06:28.967720 sshd[3970]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:28.982438 systemd[1]: sshd@10-24.144.82.28:22-147.75.109.163:58044.service: Deactivated successfully.
Jan 30 05:06:28.986475 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 05:06:28.989592 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit.
Jan 30 05:06:28.999820 systemd[1]: Started sshd@11-24.144.82.28:22-147.75.109.163:58058.service - OpenSSH per-connection server daemon (147.75.109.163:58058).
Jan 30 05:06:29.002592 systemd-logind[1440]: Removed session 11.
Jan 30 05:06:29.047318 sshd[3983]: Accepted publickey for core from 147.75.109.163 port 58058 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:29.050336 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:29.057918 systemd-logind[1440]: New session 12 of user core.
Jan 30 05:06:29.064787 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 05:06:29.305337 sshd[3983]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:29.322932 systemd[1]: Started sshd@12-24.144.82.28:22-147.75.109.163:58060.service - OpenSSH per-connection server daemon (147.75.109.163:58060).
Jan 30 05:06:29.323608 systemd[1]: sshd@11-24.144.82.28:22-147.75.109.163:58058.service: Deactivated successfully.
Jan 30 05:06:29.332375 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 05:06:29.336655 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit.
Jan 30 05:06:29.341327 systemd-logind[1440]: Removed session 12.
Jan 30 05:06:29.395470 sshd[3992]: Accepted publickey for core from 147.75.109.163 port 58060 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:29.398049 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:29.406811 systemd-logind[1440]: New session 13 of user core.
Jan 30 05:06:29.412721 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 05:06:29.588270 sshd[3992]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:29.592715 systemd[1]: sshd@12-24.144.82.28:22-147.75.109.163:58060.service: Deactivated successfully.
Jan 30 05:06:29.595428 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 05:06:29.597625 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit.
Jan 30 05:06:29.599503 systemd-logind[1440]: Removed session 13.
Jan 30 05:06:34.613085 systemd[1]: Started sshd@13-24.144.82.28:22-147.75.109.163:58074.service - OpenSSH per-connection server daemon (147.75.109.163:58074).
Jan 30 05:06:34.691171 sshd[4010]: Accepted publickey for core from 147.75.109.163 port 58074 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:34.693623 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:34.702009 systemd-logind[1440]: New session 14 of user core.
Jan 30 05:06:34.707034 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 05:06:34.877771 sshd[4010]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:34.884599 systemd[1]: sshd@13-24.144.82.28:22-147.75.109.163:58074.service: Deactivated successfully.
Jan 30 05:06:34.887912 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 05:06:34.889253 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit.
Jan 30 05:06:34.893135 systemd-logind[1440]: Removed session 14.
Jan 30 05:06:39.895913 systemd[1]: Started sshd@14-24.144.82.28:22-147.75.109.163:33816.service - OpenSSH per-connection server daemon (147.75.109.163:33816).
Jan 30 05:06:39.952076 sshd[4023]: Accepted publickey for core from 147.75.109.163 port 33816 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:39.954655 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:39.960551 systemd-logind[1440]: New session 15 of user core.
Jan 30 05:06:39.966714 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 05:06:40.111375 sshd[4023]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:40.117094 systemd[1]: sshd@14-24.144.82.28:22-147.75.109.163:33816.service: Deactivated successfully.
Jan 30 05:06:40.121726 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 05:06:40.123974 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit.
Jan 30 05:06:40.125968 systemd-logind[1440]: Removed session 15.
Jan 30 05:06:45.131870 systemd[1]: Started sshd@15-24.144.82.28:22-147.75.109.163:33818.service - OpenSSH per-connection server daemon (147.75.109.163:33818).
Jan 30 05:06:45.185353 sshd[4036]: Accepted publickey for core from 147.75.109.163 port 33818 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:45.187557 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:45.192919 systemd-logind[1440]: New session 16 of user core.
Jan 30 05:06:45.201704 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 05:06:45.343693 sshd[4036]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:45.352997 systemd[1]: sshd@15-24.144.82.28:22-147.75.109.163:33818.service: Deactivated successfully.
Jan 30 05:06:45.355824 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 05:06:45.358334 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit.
Jan 30 05:06:45.365863 systemd[1]: Started sshd@16-24.144.82.28:22-147.75.109.163:33834.service - OpenSSH per-connection server daemon (147.75.109.163:33834).
Jan 30 05:06:45.367936 systemd-logind[1440]: Removed session 16.
Jan 30 05:06:45.419775 sshd[4049]: Accepted publickey for core from 147.75.109.163 port 33834 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:45.421650 sshd[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:45.429785 systemd-logind[1440]: New session 17 of user core.
Jan 30 05:06:45.441698 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 05:06:45.831849 sshd[4049]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:45.843180 systemd[1]: sshd@16-24.144.82.28:22-147.75.109.163:33834.service: Deactivated successfully.
Jan 30 05:06:45.845839 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 05:06:45.847522 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit.
Jan 30 05:06:45.855966 systemd[1]: Started sshd@17-24.144.82.28:22-147.75.109.163:33850.service - OpenSSH per-connection server daemon (147.75.109.163:33850).
Jan 30 05:06:45.858702 systemd-logind[1440]: Removed session 17.
Jan 30 05:06:45.930440 sshd[4060]: Accepted publickey for core from 147.75.109.163 port 33850 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:45.932047 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:45.940574 systemd-logind[1440]: New session 18 of user core.
Jan 30 05:06:45.948705 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 05:06:48.131739 sshd[4060]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:48.159991 systemd[1]: Started sshd@18-24.144.82.28:22-147.75.109.163:39550.service - OpenSSH per-connection server daemon (147.75.109.163:39550).
Jan 30 05:06:48.160886 systemd[1]: sshd@17-24.144.82.28:22-147.75.109.163:33850.service: Deactivated successfully.
Jan 30 05:06:48.172434 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 05:06:48.174609 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit.
Jan 30 05:06:48.179483 systemd-logind[1440]: Removed session 18.
Jan 30 05:06:48.245479 sshd[4076]: Accepted publickey for core from 147.75.109.163 port 39550 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:48.247524 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:48.253996 systemd-logind[1440]: New session 19 of user core.
Jan 30 05:06:48.261764 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 05:06:48.765798 sshd[4076]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:48.775787 systemd[1]: sshd@18-24.144.82.28:22-147.75.109.163:39550.service: Deactivated successfully.
Jan 30 05:06:48.780261 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 05:06:48.783316 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit.
Jan 30 05:06:48.791065 systemd[1]: Started sshd@19-24.144.82.28:22-147.75.109.163:39556.service - OpenSSH per-connection server daemon (147.75.109.163:39556).
Jan 30 05:06:48.796670 systemd-logind[1440]: Removed session 19.
Jan 30 05:06:48.838174 sshd[4090]: Accepted publickey for core from 147.75.109.163 port 39556 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:48.840558 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:48.848206 systemd-logind[1440]: New session 20 of user core.
Jan 30 05:06:48.851741 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 05:06:49.028246 sshd[4090]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:49.033975 systemd[1]: sshd@19-24.144.82.28:22-147.75.109.163:39556.service: Deactivated successfully.
Jan 30 05:06:49.037293 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 05:06:49.038768 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
Jan 30 05:06:49.040346 systemd-logind[1440]: Removed session 20.
Jan 30 05:06:54.055045 systemd[1]: Started sshd@20-24.144.82.28:22-147.75.109.163:39566.service - OpenSSH per-connection server daemon (147.75.109.163:39566).
Jan 30 05:06:54.111700 sshd[4107]: Accepted publickey for core from 147.75.109.163 port 39566 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:54.114923 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:54.124030 systemd-logind[1440]: New session 21 of user core.
Jan 30 05:06:54.131649 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 05:06:54.308936 sshd[4107]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:54.316167 systemd[1]: sshd@20-24.144.82.28:22-147.75.109.163:39566.service: Deactivated successfully.
Jan 30 05:06:54.319886 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 05:06:54.323934 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit.
Jan 30 05:06:54.326008 systemd-logind[1440]: Removed session 21.
Jan 30 05:06:59.099653 kubelet[2542]: E0130 05:06:59.099064 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:59.111037 kubelet[2542]: E0130 05:06:59.101245 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:06:59.332972 systemd[1]: Started sshd@21-24.144.82.28:22-147.75.109.163:38300.service - OpenSSH per-connection server daemon (147.75.109.163:38300).
Jan 30 05:06:59.400548 sshd[4120]: Accepted publickey for core from 147.75.109.163 port 38300 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:06:59.402912 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:06:59.411262 systemd-logind[1440]: New session 22 of user core.
Jan 30 05:06:59.416758 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 05:06:59.563433 sshd[4120]: pam_unix(sshd:session): session closed for user core
Jan 30 05:06:59.571693 systemd[1]: sshd@21-24.144.82.28:22-147.75.109.163:38300.service: Deactivated successfully.
Jan 30 05:06:59.578045 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 05:06:59.579900 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
Jan 30 05:06:59.581312 systemd-logind[1440]: Removed session 22.
Jan 30 05:07:01.099475 kubelet[2542]: E0130 05:07:01.098588 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:03.100803 kubelet[2542]: E0130 05:07:03.100735 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:04.583925 systemd[1]: Started sshd@22-24.144.82.28:22-147.75.109.163:38314.service - OpenSSH per-connection server daemon (147.75.109.163:38314).
Jan 30 05:07:04.643543 sshd[4133]: Accepted publickey for core from 147.75.109.163 port 38314 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:07:04.646227 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:07:04.655836 systemd-logind[1440]: New session 23 of user core.
Jan 30 05:07:04.661784 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 05:07:04.834485 sshd[4133]: pam_unix(sshd:session): session closed for user core
Jan 30 05:07:04.840693 systemd[1]: sshd@22-24.144.82.28:22-147.75.109.163:38314.service: Deactivated successfully.
Jan 30 05:07:04.845791 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 05:07:04.847871 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit.
Jan 30 05:07:04.850307 systemd-logind[1440]: Removed session 23.
Jan 30 05:07:05.098657 kubelet[2542]: E0130 05:07:05.098387 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:09.859159 systemd[1]: Started sshd@23-24.144.82.28:22-147.75.109.163:36190.service - OpenSSH per-connection server daemon (147.75.109.163:36190).
Jan 30 05:07:09.929495 sshd[4146]: Accepted publickey for core from 147.75.109.163 port 36190 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:07:09.932112 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:07:09.938203 systemd-logind[1440]: New session 24 of user core.
Jan 30 05:07:09.944703 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 05:07:10.141708 sshd[4146]: pam_unix(sshd:session): session closed for user core
Jan 30 05:07:10.153066 systemd[1]: sshd@23-24.144.82.28:22-147.75.109.163:36190.service: Deactivated successfully.
Jan 30 05:07:10.155877 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 05:07:10.158314 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit.
Jan 30 05:07:10.160383 systemd-logind[1440]: Removed session 24.
Jan 30 05:07:10.163854 systemd[1]: Started sshd@24-24.144.82.28:22-147.75.109.163:36196.service - OpenSSH per-connection server daemon (147.75.109.163:36196).
Jan 30 05:07:10.213579 sshd[4159]: Accepted publickey for core from 147.75.109.163 port 36196 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:07:10.215982 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:07:10.223622 systemd-logind[1440]: New session 25 of user core.
Jan 30 05:07:10.229800 systemd[1]: Started session-25.scope - Session 25 of User core.
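[Editor's note] The recurring dns.go:153 events are kubelet trimming the node's resolv.conf for pods: the glibc resolver honors at most three nameserver entries (MAXNS), so kubelet applies the same cap and reports whatever it dropped; the applied line here keeps the first three entries, duplicate included. A rough sketch of that truncation rule, with the constant and function names being illustrative rather than kubelet's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// maxDNSNameservers mirrors the glibc resolver limit (MAXNS = 3) that
// kubelet enforces when assembling a pod's resolv.conf.
const maxDNSNameservers = 3

// truncateNameservers keeps only the first three nameservers, which is
// roughly what produces the "Nameserver limits exceeded" events above.
func truncateNameservers(ns []string) (kept []string, omitted bool) {
	if len(ns) <= maxDNSNameservers {
		return ns, false
	}
	return ns[:maxDNSNameservers], true
}

func main() {
	// Hypothetical node resolv.conf contents with a duplicate plus one extra.
	ns := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "1.1.1.1"}
	if kept, omitted := truncateNameservers(ns); omitted {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line: %s\n",
			strings.Join(kept, " "))
	}
}
```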
Jan 30 05:07:12.300686 containerd[1460]: time="2025-01-30T05:07:12.300552207Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 05:07:12.343673 containerd[1460]: time="2025-01-30T05:07:12.343596594Z" level=info msg="StopContainer for \"e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3\" with timeout 30 (s)"
Jan 30 05:07:12.343976 containerd[1460]: time="2025-01-30T05:07:12.343874895Z" level=info msg="StopContainer for \"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33\" with timeout 2 (s)"
Jan 30 05:07:12.345557 containerd[1460]: time="2025-01-30T05:07:12.345222828Z" level=info msg="Stop container \"e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3\" with signal terminated"
Jan 30 05:07:12.345939 containerd[1460]: time="2025-01-30T05:07:12.345666731Z" level=info msg="Stop container \"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33\" with signal terminated"
Jan 30 05:07:12.358786 systemd-networkd[1367]: lxc_health: Link DOWN
Jan 30 05:07:12.358810 systemd-networkd[1367]: lxc_health: Lost carrier
Jan 30 05:07:12.393833 systemd[1]: cri-containerd-e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3.scope: Deactivated successfully.
Jan 30 05:07:12.395980 systemd[1]: cri-containerd-86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33.scope: Deactivated successfully.
Jan 30 05:07:12.397282 systemd[1]: cri-containerd-86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33.scope: Consumed 10.108s CPU time.
Jan 30 05:07:12.469645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33-rootfs.mount: Deactivated successfully.
Jan 30 05:07:12.488105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3-rootfs.mount: Deactivated successfully.
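[Editor's note] The level=error entry at 05:07:12.300686 is containerd's CRI plugin reacting to a file watch on /etc/cni/net.d: removing 05-cilium.conf during the Cilium teardown leaves the directory without any network config, so the reload fails and the node's runtime network goes not-ready shortly after. A small sketch of that watch-and-reload pattern, assuming the github.com/fsnotify/fsnotify package; containerd's real loader is considerably more involved:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Watch the CNI configuration directory; a REMOVE event for the last
	// .conf file leaves no network config, which is what the "failed to
	// reload cni configuration" entry above reports.
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("fs change event(REMOVE %q): reloading CNI config", ev.Name)
				// A real implementation would rescan the directory here and
				// mark the network not-ready if no config files remain.
			}
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```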
Jan 30 05:07:12.490719 containerd[1460]: time="2025-01-30T05:07:12.488524855Z" level=info msg="shim disconnected" id=86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33 namespace=k8s.io
Jan 30 05:07:12.490719 containerd[1460]: time="2025-01-30T05:07:12.488614877Z" level=warning msg="cleaning up after shim disconnected" id=86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33 namespace=k8s.io
Jan 30 05:07:12.490719 containerd[1460]: time="2025-01-30T05:07:12.488628819Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:07:12.492327 containerd[1460]: time="2025-01-30T05:07:12.492110570Z" level=info msg="shim disconnected" id=e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3 namespace=k8s.io
Jan 30 05:07:12.492497 containerd[1460]: time="2025-01-30T05:07:12.492456895Z" level=warning msg="cleaning up after shim disconnected" id=e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3 namespace=k8s.io
Jan 30 05:07:12.492497 containerd[1460]: time="2025-01-30T05:07:12.492485620Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:07:12.525035 containerd[1460]: time="2025-01-30T05:07:12.524969846Z" level=info msg="StopContainer for \"e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3\" returns successfully"
Jan 30 05:07:12.526712 containerd[1460]: time="2025-01-30T05:07:12.526497641Z" level=info msg="StopPodSandbox for \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\""
Jan 30 05:07:12.526712 containerd[1460]: time="2025-01-30T05:07:12.526575370Z" level=info msg="Container to stop \"e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:07:12.529499 containerd[1460]: time="2025-01-30T05:07:12.529355483Z" level=info msg="StopContainer for \"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33\" returns successfully"
Jan 30 05:07:12.531857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df-shm.mount: Deactivated successfully.
Jan 30 05:07:12.534158 containerd[1460]: time="2025-01-30T05:07:12.532544165Z" level=info msg="StopPodSandbox for \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\""
Jan 30 05:07:12.534158 containerd[1460]: time="2025-01-30T05:07:12.532636767Z" level=info msg="Container to stop \"d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:07:12.534158 containerd[1460]: time="2025-01-30T05:07:12.532658948Z" level=info msg="Container to stop \"34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:07:12.534158 containerd[1460]: time="2025-01-30T05:07:12.532698845Z" level=info msg="Container to stop \"5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:07:12.534158 containerd[1460]: time="2025-01-30T05:07:12.532732615Z" level=info msg="Container to stop \"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:07:12.534158 containerd[1460]: time="2025-01-30T05:07:12.532886145Z" level=info msg="Container to stop \"3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 05:07:12.538368 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74-shm.mount: Deactivated successfully.
Jan 30 05:07:12.558328 systemd[1]: cri-containerd-bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df.scope: Deactivated successfully.
Jan 30 05:07:12.568312 systemd[1]: cri-containerd-dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74.scope: Deactivated successfully.
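[Editor's note] Sandbox teardown mirrors startup: kubelet issues StopContainer with a grace period, the runtime delivers SIGTERM, and once the containers have exited StopPodSandbox tears the sandbox down; the "must be in running or unknown state" lines are informational, noting that those containers had already exited. Roughly, using the same assumed cri-api client as in the earlier sketch (IDs below are copied from the log, purely for illustration):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stopPod stops one container with a grace period, then stops its sandbox,
// mirroring the StopContainer/StopPodSandbox pairs in the entries above.
func stopPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient, containerID, sandboxID string) error {
	// Timeout is the grace period in seconds ("with timeout 2 (s)").
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: containerID,
		Timeout:     2,
	}); err != nil {
		return err
	}
	_, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID})
	return err
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	if err := stopPod(context.Background(), rt,
		"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33",
		"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74"); err != nil {
		log.Fatal(err)
	}
}
```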
Jan 30 05:07:12.647457 containerd[1460]: time="2025-01-30T05:07:12.647326881Z" level=info msg="shim disconnected" id=dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74 namespace=k8s.io
Jan 30 05:07:12.648955 containerd[1460]: time="2025-01-30T05:07:12.647810988Z" level=warning msg="cleaning up after shim disconnected" id=dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74 namespace=k8s.io
Jan 30 05:07:12.652019 containerd[1460]: time="2025-01-30T05:07:12.649341462Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:07:12.654116 containerd[1460]: time="2025-01-30T05:07:12.653755440Z" level=info msg="shim disconnected" id=bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df namespace=k8s.io
Jan 30 05:07:12.654356 containerd[1460]: time="2025-01-30T05:07:12.654327853Z" level=warning msg="cleaning up after shim disconnected" id=bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df namespace=k8s.io
Jan 30 05:07:12.654645 containerd[1460]: time="2025-01-30T05:07:12.654599927Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:07:12.694214 containerd[1460]: time="2025-01-30T05:07:12.694112796Z" level=info msg="TearDown network for sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" successfully"
Jan 30 05:07:12.694477 containerd[1460]: time="2025-01-30T05:07:12.694446505Z" level=info msg="StopPodSandbox for \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" returns successfully"
Jan 30 05:07:12.695302 containerd[1460]: time="2025-01-30T05:07:12.695193505Z" level=info msg="TearDown network for sandbox \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\" successfully"
Jan 30 05:07:12.695302 containerd[1460]: time="2025-01-30T05:07:12.695236511Z" level=info msg="StopPodSandbox for \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\" returns successfully"
Jan 30 05:07:12.786292 kubelet[2542]: I0130 05:07:12.786102 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds9sp\" (UniqueName: \"kubernetes.io/projected/97e2bb4b-bab9-4dff-b033-161fc213cb6e-kube-api-access-ds9sp\") pod \"97e2bb4b-bab9-4dff-b033-161fc213cb6e\" (UID: \"97e2bb4b-bab9-4dff-b033-161fc213cb6e\") "
Jan 30 05:07:12.786292 kubelet[2542]: I0130 05:07:12.786214 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-host-proc-sys-net\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.786292 kubelet[2542]: I0130 05:07:12.786247 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-xtables-lock\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.786292 kubelet[2542]: I0130 05:07:12.786287 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-config-path\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.786292 kubelet[2542]: I0130 05:07:12.786314 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cni-path\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.787173 kubelet[2542]: I0130 05:07:12.786342 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd653454-ca48-417f-b59c-b6b05e5af714-clustermesh-secrets\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.787173 kubelet[2542]: I0130 05:07:12.786367 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-cgroup\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.787173 kubelet[2542]: I0130 05:07:12.786412 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97e2bb4b-bab9-4dff-b033-161fc213cb6e-cilium-config-path\") pod \"97e2bb4b-bab9-4dff-b033-161fc213cb6e\" (UID: \"97e2bb4b-bab9-4dff-b033-161fc213cb6e\") "
Jan 30 05:07:12.787173 kubelet[2542]: I0130 05:07:12.786442 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-hubble-tls\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.787173 kubelet[2542]: I0130 05:07:12.786468 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh9sz\" (UniqueName: \"kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-kube-api-access-kh9sz\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.787173 kubelet[2542]: I0130 05:07:12.786493 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-bpf-maps\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.788816 kubelet[2542]: I0130 05:07:12.786518 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-etc-cni-netd\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.788816 kubelet[2542]: I0130 05:07:12.786544 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-run\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.788816 kubelet[2542]: I0130 05:07:12.786571 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-lib-modules\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.788816 kubelet[2542]: I0130 05:07:12.786593 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-hostproc\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.788816 kubelet[2542]: I0130 05:07:12.786619 2542 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-host-proc-sys-kernel\") pod \"cd653454-ca48-417f-b59c-b6b05e5af714\" (UID: \"cd653454-ca48-417f-b59c-b6b05e5af714\") "
Jan 30 05:07:12.788816 kubelet[2542]: I0130 05:07:12.786756 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.792016 kubelet[2542]: I0130 05:07:12.791941 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97e2bb4b-bab9-4dff-b033-161fc213cb6e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "97e2bb4b-bab9-4dff-b033-161fc213cb6e" (UID: "97e2bb4b-bab9-4dff-b033-161fc213cb6e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:07:12.797230 kubelet[2542]: I0130 05:07:12.797169 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.798321 kubelet[2542]: I0130 05:07:12.797655 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.801713 kubelet[2542]: I0130 05:07:12.801623 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 05:07:12.802455 kubelet[2542]: I0130 05:07:12.802036 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cni-path" (OuterVolumeSpecName: "cni-path") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.807864 kubelet[2542]: I0130 05:07:12.807793 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd653454-ca48-417f-b59c-b6b05e5af714-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 05:07:12.808191 kubelet[2542]: I0130 05:07:12.808151 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.808921 kubelet[2542]: I0130 05:07:12.808334 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:07:12.808921 kubelet[2542]: I0130 05:07:12.808602 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e2bb4b-bab9-4dff-b033-161fc213cb6e-kube-api-access-ds9sp" (OuterVolumeSpecName: "kube-api-access-ds9sp") pod "97e2bb4b-bab9-4dff-b033-161fc213cb6e" (UID: "97e2bb4b-bab9-4dff-b033-161fc213cb6e"). InnerVolumeSpecName "kube-api-access-ds9sp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:07:12.808921 kubelet[2542]: I0130 05:07:12.808666 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.808921 kubelet[2542]: I0130 05:07:12.808690 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.808921 kubelet[2542]: I0130 05:07:12.808709 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.809229 kubelet[2542]: I0130 05:07:12.808736 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.809229 kubelet[2542]: I0130 05:07:12.808758 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-hostproc" (OuterVolumeSpecName: "hostproc") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 05:07:12.809567 kubelet[2542]: I0130 05:07:12.809516 2542 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-kube-api-access-kh9sz" (OuterVolumeSpecName: "kube-api-access-kh9sz") pod "cd653454-ca48-417f-b59c-b6b05e5af714" (UID: "cd653454-ca48-417f-b59c-b6b05e5af714"). InnerVolumeSpecName "kube-api-access-kh9sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 05:07:12.894841 kubelet[2542]: I0130 05:07:12.894542 2542 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-host-proc-sys-kernel\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.894841 kubelet[2542]: I0130 05:07:12.894632 2542 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-config-path\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.894841 kubelet[2542]: I0130 05:07:12.894666 2542 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ds9sp\" (UniqueName: \"kubernetes.io/projected/97e2bb4b-bab9-4dff-b033-161fc213cb6e-kube-api-access-ds9sp\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.894841 kubelet[2542]: I0130 05:07:12.894676 2542 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-host-proc-sys-net\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.894841 kubelet[2542]: I0130 05:07:12.894687 2542 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-xtables-lock\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.894841 kubelet[2542]: I0130 05:07:12.894699 2542 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cni-path\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.894841 kubelet[2542]: I0130 05:07:12.894708 2542 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd653454-ca48-417f-b59c-b6b05e5af714-clustermesh-secrets\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.894841 kubelet[2542]: I0130 05:07:12.894723 2542 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-cgroup\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.895869 kubelet[2542]: I0130 05:07:12.894732 2542 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/97e2bb4b-bab9-4dff-b033-161fc213cb6e-cilium-config-path\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.895869 kubelet[2542]: I0130 05:07:12.894741 2542 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-hubble-tls\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.895869 kubelet[2542]: I0130 05:07:12.894755 2542 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kh9sz\" (UniqueName: \"kubernetes.io/projected/cd653454-ca48-417f-b59c-b6b05e5af714-kube-api-access-kh9sz\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.895869 kubelet[2542]: I0130 05:07:12.894764 2542 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-etc-cni-netd\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.895869 kubelet[2542]: I0130 05:07:12.894775 2542 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-bpf-maps\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.895869 kubelet[2542]: I0130 05:07:12.894783 2542 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-cilium-run\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.895869 kubelet[2542]: I0130 05:07:12.894794 2542 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-lib-modules\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:12.895869 kubelet[2542]: I0130 05:07:12.894804 2542 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd653454-ca48-417f-b59c-b6b05e5af714-hostproc\") on node \"ci-4081.3.0-c-6bfcfa9ae9\" DevicePath \"\""
Jan 30 05:07:13.113658 systemd[1]: Removed slice kubepods-burstable-podcd653454_ca48_417f_b59c_b6b05e5af714.slice - libcontainer container kubepods-burstable-podcd653454_ca48_417f_b59c_b6b05e5af714.slice.
Jan 30 05:07:13.114184 systemd[1]: kubepods-burstable-podcd653454_ca48_417f_b59c_b6b05e5af714.slice: Consumed 10.243s CPU time.
Jan 30 05:07:13.117324 systemd[1]: Removed slice kubepods-besteffort-pod97e2bb4b_bab9_4dff_b033_161fc213cb6e.slice - libcontainer container kubepods-besteffort-pod97e2bb4b_bab9_4dff_b033_161fc213cb6e.slice.
Jan 30 05:07:13.254514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74-rootfs.mount: Deactivated successfully.
Jan 30 05:07:13.254719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df-rootfs.mount: Deactivated successfully.
Jan 30 05:07:13.254843 systemd[1]: var-lib-kubelet-pods-cd653454\x2dca48\x2d417f\x2db59c\x2db6b05e5af714-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkh9sz.mount: Deactivated successfully.
Jan 30 05:07:13.255138 systemd[1]: var-lib-kubelet-pods-97e2bb4b\x2dbab9\x2d4dff\x2db033\x2d161fc213cb6e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dds9sp.mount: Deactivated successfully.
Jan 30 05:07:13.255273 systemd[1]: var-lib-kubelet-pods-cd653454\x2dca48\x2d417f\x2db59c\x2db6b05e5af714-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 05:07:13.255366 systemd[1]: var-lib-kubelet-pods-cd653454\x2dca48\x2d417f\x2db59c\x2db6b05e5af714-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 05:07:13.320302 kubelet[2542]: E0130 05:07:13.309694 2542 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 05:07:13.630352 kubelet[2542]: I0130 05:07:13.630208 2542 scope.go:117] "RemoveContainer" containerID="86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33"
Jan 30 05:07:13.651282 containerd[1460]: time="2025-01-30T05:07:13.651227446Z" level=info msg="RemoveContainer for \"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33\""
Jan 30 05:07:13.667972 containerd[1460]: time="2025-01-30T05:07:13.666885451Z" level=info msg="RemoveContainer for \"86e20469b1f0b28f33489b857c52051168299064d55ebc0effa9950143c97d33\" returns successfully"
Jan 30 05:07:13.668141 kubelet[2542]: I0130 05:07:13.667712 2542 scope.go:117] "RemoveContainer" containerID="3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183"
Jan 30 05:07:13.674511 containerd[1460]: time="2025-01-30T05:07:13.673993116Z" level=info msg="RemoveContainer for \"3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183\""
Jan 30 05:07:13.686186 containerd[1460]: time="2025-01-30T05:07:13.684893339Z" level=info msg="RemoveContainer for \"3214fd01c12cc0be64a81d4a1cdf23b07f4770d4d64f3ee4988f584f0bb52183\" returns successfully"
Jan 30 05:07:13.691524 kubelet[2542]: I0130 05:07:13.690180 2542 scope.go:117] "RemoveContainer" containerID="5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914"
Jan 30 05:07:13.694620 containerd[1460]: time="2025-01-30T05:07:13.694555037Z" level=info msg="RemoveContainer for \"5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914\""
Jan 30 05:07:13.698574 containerd[1460]: time="2025-01-30T05:07:13.698486855Z" level=info msg="RemoveContainer for \"5b8ccacbadb535d6d31d5c6b97e64ceb6d883e0186a56b613404a7fd724d8914\" returns successfully"
Jan 30 05:07:13.699279 kubelet[2542]: I0130 05:07:13.699225 2542 scope.go:117] "RemoveContainer" containerID="34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4"
Jan 30 05:07:13.703361 containerd[1460]: time="2025-01-30T05:07:13.703301971Z" level=info msg="RemoveContainer for \"34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4\""
Jan 30 05:07:13.709367 containerd[1460]: time="2025-01-30T05:07:13.709309226Z" level=info msg="RemoveContainer for \"34293682309b4e5fbbdba7382001287d6464f8bb761b89a5199cffbb05242ea4\" returns successfully"
Jan 30 05:07:13.710075 kubelet[2542]: I0130 05:07:13.709912 2542 scope.go:117] "RemoveContainer" containerID="d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786"
Jan 30 05:07:13.718428 containerd[1460]: time="2025-01-30T05:07:13.717859579Z" level=info msg="RemoveContainer for \"d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786\""
Jan 30 05:07:13.722718 containerd[1460]: time="2025-01-30T05:07:13.722648540Z" level=info msg="RemoveContainer for \"d8ccd921e559e3b764c5e7407bdf6e3acca97d6190fba7ca362a36791877c786\" returns successfully"
Jan 30 05:07:13.723491 kubelet[2542]: I0130 05:07:13.723455 2542 scope.go:117] "RemoveContainer" containerID="e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3"
Jan 30 05:07:13.726777 containerd[1460]: time="2025-01-30T05:07:13.726716241Z" level=info msg="RemoveContainer for \"e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3\""
Jan 30 05:07:13.731201 containerd[1460]: time="2025-01-30T05:07:13.730914558Z" level=info msg="RemoveContainer for \"e91e4e7cb64b52ac2769028d078b3130ab545e9d94a72baab29ce54f3e5aaad3\" returns successfully"
Jan 30 05:07:14.120904 sshd[4159]: pam_unix(sshd:session): session closed for user core
Jan 30 05:07:14.129761 systemd[1]: sshd@24-24.144.82.28:22-147.75.109.163:36196.service: Deactivated successfully.
Jan 30 05:07:14.135024 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 05:07:14.135377 systemd[1]: session-25.scope: Consumed 1.233s CPU time.
Jan 30 05:07:14.139768 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit.
Jan 30 05:07:14.148094 systemd[1]: Started sshd@25-24.144.82.28:22-147.75.109.163:36198.service - OpenSSH per-connection server daemon (147.75.109.163:36198).
Jan 30 05:07:14.151303 systemd-logind[1440]: Removed session 25.
Jan 30 05:07:14.224517 sshd[4320]: Accepted publickey for core from 147.75.109.163 port 36198 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:07:14.227624 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:07:14.237194 systemd-logind[1440]: New session 26 of user core.
Jan 30 05:07:14.244859 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 05:07:14.945857 sshd[4320]: pam_unix(sshd:session): session closed for user core
Jan 30 05:07:14.961218 systemd[1]: sshd@25-24.144.82.28:22-147.75.109.163:36198.service: Deactivated successfully.
Jan 30 05:07:14.964717 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 05:07:14.968256 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit.
Jan 30 05:07:14.984625 systemd[1]: Started sshd@26-24.144.82.28:22-147.75.109.163:36202.service - OpenSSH per-connection server daemon (147.75.109.163:36202).
Jan 30 05:07:14.989503 systemd-logind[1440]: Removed session 26.
Jan 30 05:07:15.037502 kubelet[2542]: I0130 05:07:15.032955 2542 topology_manager.go:215] "Topology Admit Handler" podUID="f207dcbd-6262-4986-8a38-66f1cae34fcd" podNamespace="kube-system" podName="cilium-4gr8t"
Jan 30 05:07:15.048452 kubelet[2542]: E0130 05:07:15.044984 2542 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd653454-ca48-417f-b59c-b6b05e5af714" containerName="mount-cgroup"
Jan 30 05:07:15.048452 kubelet[2542]: E0130 05:07:15.045021 2542 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd653454-ca48-417f-b59c-b6b05e5af714" containerName="apply-sysctl-overwrites"
Jan 30 05:07:15.048452 kubelet[2542]: E0130 05:07:15.045031 2542 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd653454-ca48-417f-b59c-b6b05e5af714" containerName="mount-bpf-fs"
Jan 30 05:07:15.048452 kubelet[2542]: E0130 05:07:15.045040 2542 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd653454-ca48-417f-b59c-b6b05e5af714" containerName="clean-cilium-state"
Jan 30 05:07:15.048452 kubelet[2542]: E0130 05:07:15.045053 2542 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97e2bb4b-bab9-4dff-b033-161fc213cb6e" containerName="cilium-operator"
Jan 30 05:07:15.048452 kubelet[2542]: E0130 05:07:15.045062 2542 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd653454-ca48-417f-b59c-b6b05e5af714" containerName="cilium-agent"
Jan 30 05:07:15.056449 kubelet[2542]: I0130 05:07:15.045106 2542 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e2bb4b-bab9-4dff-b033-161fc213cb6e" containerName="cilium-operator"
Jan 30 05:07:15.056449 kubelet[2542]: I0130 05:07:15.055634 2542 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd653454-ca48-417f-b59c-b6b05e5af714" containerName="cilium-agent"
Jan 30 05:07:15.091149 sshd[4333]: Accepted publickey for core from 147.75.109.163 port 36202 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c
Jan 30 05:07:15.098270 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 05:07:15.116025 systemd-logind[1440]: New session 27 of user core.
Jan 30 05:07:15.118727 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 05:07:15.134135 kubelet[2542]: I0130 05:07:15.127928 2542 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e2bb4b-bab9-4dff-b033-161fc213cb6e" path="/var/lib/kubelet/pods/97e2bb4b-bab9-4dff-b033-161fc213cb6e/volumes"
Jan 30 05:07:15.134135 kubelet[2542]: I0130 05:07:15.134044 2542 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd653454-ca48-417f-b59c-b6b05e5af714" path="/var/lib/kubelet/pods/cd653454-ca48-417f-b59c-b6b05e5af714/volumes"
Jan 30 05:07:15.146380 systemd[1]: Created slice kubepods-burstable-podf207dcbd_6262_4986_8a38_66f1cae34fcd.slice - libcontainer container kubepods-burstable-podf207dcbd_6262_4986_8a38_66f1cae34fcd.slice.
Jan 30 05:07:15.204916 sshd[4333]: pam_unix(sshd:session): session closed for user core
Jan 30 05:07:15.216981 systemd[1]: sshd@26-24.144.82.28:22-147.75.109.163:36202.service: Deactivated successfully.
Jan 30 05:07:15.222251 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 05:07:15.222864 kubelet[2542]: I0130 05:07:15.222441 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-host-proc-sys-kernel\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.223437 kubelet[2542]: I0130 05:07:15.223118 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f207dcbd-6262-4986-8a38-66f1cae34fcd-hubble-tls\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.223437 kubelet[2542]: I0130 05:07:15.223228 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-cilium-cgroup\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224180 kubelet[2542]: I0130 05:07:15.223659 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-hostproc\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224180 kubelet[2542]: I0130 05:07:15.223716 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-cilium-run\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224180 kubelet[2542]: I0130 05:07:15.223748 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-cni-path\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224180 kubelet[2542]: I0130 05:07:15.223772 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f207dcbd-6262-4986-8a38-66f1cae34fcd-clustermesh-secrets\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224180 kubelet[2542]: I0130 05:07:15.223803 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f207dcbd-6262-4986-8a38-66f1cae34fcd-cilium-config-path\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224180 kubelet[2542]: I0130 05:07:15.223830 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-host-proc-sys-net\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224544 kubelet[2542]: I0130 05:07:15.223865 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xfv5\" 
(UniqueName: \"kubernetes.io/projected/f207dcbd-6262-4986-8a38-66f1cae34fcd-kube-api-access-6xfv5\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224544 kubelet[2542]: I0130 05:07:15.223890 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-bpf-maps\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224544 kubelet[2542]: I0130 05:07:15.223917 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-xtables-lock\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224544 kubelet[2542]: I0130 05:07:15.223944 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-etc-cni-netd\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224544 kubelet[2542]: I0130 05:07:15.223973 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f207dcbd-6262-4986-8a38-66f1cae34fcd-lib-modules\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.224544 kubelet[2542]: I0130 05:07:15.223995 2542 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f207dcbd-6262-4986-8a38-66f1cae34fcd-cilium-ipsec-secrets\") pod \"cilium-4gr8t\" (UID: \"f207dcbd-6262-4986-8a38-66f1cae34fcd\") " pod="kube-system/cilium-4gr8t" Jan 30 05:07:15.226708 systemd-logind[1440]: Session 27 logged out. Waiting for processes to exit. Jan 30 05:07:15.233200 systemd[1]: Started sshd@27-24.144.82.28:22-147.75.109.163:36210.service - OpenSSH per-connection server daemon (147.75.109.163:36210). Jan 30 05:07:15.235982 systemd-logind[1440]: Removed session 27. Jan 30 05:07:15.297516 sshd[4341]: Accepted publickey for core from 147.75.109.163 port 36210 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:07:15.300475 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:07:15.316458 systemd-logind[1440]: New session 28 of user core. Jan 30 05:07:15.319783 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 30 05:07:15.424601 kubelet[2542]: I0130 05:07:15.424510 2542 setters.go:580] "Node became not ready" node="ci-4081.3.0-c-6bfcfa9ae9" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T05:07:15Z","lastTransitionTime":"2025-01-30T05:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 30 05:07:15.453811 kubelet[2542]: E0130 05:07:15.453747 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:15.456813 containerd[1460]: time="2025-01-30T05:07:15.456475843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gr8t,Uid:f207dcbd-6262-4986-8a38-66f1cae34fcd,Namespace:kube-system,Attempt:0,}"
Jan 30 05:07:15.513198 containerd[1460]: time="2025-01-30T05:07:15.512373525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 05:07:15.513198 containerd[1460]: time="2025-01-30T05:07:15.512501678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 05:07:15.513198 containerd[1460]: time="2025-01-30T05:07:15.512524315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:07:15.514421 containerd[1460]: time="2025-01-30T05:07:15.512677507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 05:07:15.559722 systemd[1]: Started cri-containerd-347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a.scope - libcontainer container 347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a.
Jan 30 05:07:15.622627 containerd[1460]: time="2025-01-30T05:07:15.622537654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gr8t,Uid:f207dcbd-6262-4986-8a38-66f1cae34fcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\""
Jan 30 05:07:15.624947 kubelet[2542]: E0130 05:07:15.624383 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:15.634431 containerd[1460]: time="2025-01-30T05:07:15.634251904Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 05:07:15.647580 containerd[1460]: time="2025-01-30T05:07:15.647513530Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"31bff75f433fb6deb9088804bd611fe42a303a2b7e153f95c929b45199209a8b\""
Jan 30 05:07:15.648729 containerd[1460]: time="2025-01-30T05:07:15.648554037Z" level=info msg="StartContainer for \"31bff75f433fb6deb9088804bd611fe42a303a2b7e153f95c929b45199209a8b\""
Jan 30 05:07:15.689880 systemd[1]: Started cri-containerd-31bff75f433fb6deb9088804bd611fe42a303a2b7e153f95c929b45199209a8b.scope - libcontainer container 31bff75f433fb6deb9088804bd611fe42a303a2b7e153f95c929b45199209a8b.
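[Annotation] The repeated dns.go:153 errors are kubelet trimming the node's resolv.conf down to the Linux resolver limit of three nameserver entries; note the applied line above even carries a duplicate (67.207.67.2 twice). A hedged sketch of the same check, assuming the conventional /etc/resolv.conf location:

    MAX_NS = 3  # classic glibc resolver limit; kubelet warns past this

    def applied_nameservers(path="/etc/resolv.conf"):
        """Return the nameserver list kubelet would actually apply."""
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        if len(servers) > MAX_NS:
            print(f"warning: {len(servers)} nameservers listed, "
                  f"only {servers[:MAX_NS]} will be applied")
        return servers[:MAX_NS]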
Jan 30 05:07:15.737448 containerd[1460]: time="2025-01-30T05:07:15.736448679Z" level=info msg="StartContainer for \"31bff75f433fb6deb9088804bd611fe42a303a2b7e153f95c929b45199209a8b\" returns successfully"
Jan 30 05:07:15.751526 systemd[1]: cri-containerd-31bff75f433fb6deb9088804bd611fe42a303a2b7e153f95c929b45199209a8b.scope: Deactivated successfully.
Jan 30 05:07:15.798938 containerd[1460]: time="2025-01-30T05:07:15.798780725Z" level=info msg="shim disconnected" id=31bff75f433fb6deb9088804bd611fe42a303a2b7e153f95c929b45199209a8b namespace=k8s.io
Jan 30 05:07:15.798938 containerd[1460]: time="2025-01-30T05:07:15.798898527Z" level=warning msg="cleaning up after shim disconnected" id=31bff75f433fb6deb9088804bd611fe42a303a2b7e153f95c929b45199209a8b namespace=k8s.io
Jan 30 05:07:15.798938 containerd[1460]: time="2025-01-30T05:07:15.798908678Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:07:16.673054 kubelet[2542]: E0130 05:07:16.673008 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:16.677467 containerd[1460]: time="2025-01-30T05:07:16.676459842Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 05:07:16.698721 containerd[1460]: time="2025-01-30T05:07:16.697916772Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698\""
Jan 30 05:07:16.700552 containerd[1460]: time="2025-01-30T05:07:16.699143739Z" level=info msg="StartContainer for \"0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698\""
Jan 30 05:07:16.763772 systemd[1]: Started cri-containerd-0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698.scope - libcontainer container 0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698.
Jan 30 05:07:16.807788 containerd[1460]: time="2025-01-30T05:07:16.807734684Z" level=info msg="StartContainer for \"0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698\" returns successfully"
Jan 30 05:07:16.818908 systemd[1]: cri-containerd-0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698.scope: Deactivated successfully.
Jan 30 05:07:16.853845 containerd[1460]: time="2025-01-30T05:07:16.853758252Z" level=info msg="shim disconnected" id=0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698 namespace=k8s.io
Jan 30 05:07:16.854459 containerd[1460]: time="2025-01-30T05:07:16.854156870Z" level=warning msg="cleaning up after shim disconnected" id=0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698 namespace=k8s.io
Jan 30 05:07:16.854459 containerd[1460]: time="2025-01-30T05:07:16.854185869Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:07:17.337927 systemd[1]: run-containerd-runc-k8s.io-0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698-runc.ehzy81.mount: Deactivated successfully.
Jan 30 05:07:17.338058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a403c5060c2bf3b4cc67258ed6b19f52fc4d7401cb817dbd049c815be877698-rootfs.mount: Deactivated successfully.
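[Annotation] The mount-cgroup and apply-sysctl-overwrites sequences above show the normal exit path of a short-lived init container: StartContainer returns, the cri-containerd-<id>.scope unit deactivates, and containerd reaps the shim. A sketch that pairs start and scope-exit events by container ID, assuming the log text format shown here:

    import re

    START = re.compile(r'StartContainer for \\?"(?P<cid>[0-9a-f]{64})\\?" returns successfully')
    EXIT = re.compile(r'cri-containerd-(?P<cid>[0-9a-f]{64})\.scope: Deactivated successfully')

    def finished_containers(lines):
        """Return IDs whose start was logged and whose systemd scope has since exited."""
        started, finished = set(), []
        for line in lines:
            m = START.search(line)
            if m:
                started.add(m.group("cid"))
                continue
            m = EXIT.search(line)
            if m and m.group("cid") in started:
                finished.append(m.group("cid"))
        return finished

Run over the entries above, this pairs 31bff75f... (mount-cgroup) and 0a403c50... (apply-sysctl-overwrites) with their scope exits.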
Jan 30 05:07:17.679826 kubelet[2542]: E0130 05:07:17.679671 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:17.688706 containerd[1460]: time="2025-01-30T05:07:17.687993537Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 05:07:17.715158 containerd[1460]: time="2025-01-30T05:07:17.713156767Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43\""
Jan 30 05:07:17.716448 containerd[1460]: time="2025-01-30T05:07:17.715960419Z" level=info msg="StartContainer for \"24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43\""
Jan 30 05:07:17.776780 systemd[1]: Started cri-containerd-24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43.scope - libcontainer container 24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43.
Jan 30 05:07:17.826802 containerd[1460]: time="2025-01-30T05:07:17.826721098Z" level=info msg="StartContainer for \"24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43\" returns successfully"
Jan 30 05:07:17.837595 systemd[1]: cri-containerd-24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43.scope: Deactivated successfully.
Jan 30 05:07:17.877002 containerd[1460]: time="2025-01-30T05:07:17.876915462Z" level=info msg="shim disconnected" id=24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43 namespace=k8s.io
Jan 30 05:07:17.877002 containerd[1460]: time="2025-01-30T05:07:17.876987259Z" level=warning msg="cleaning up after shim disconnected" id=24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43 namespace=k8s.io
Jan 30 05:07:17.877002 containerd[1460]: time="2025-01-30T05:07:17.877000277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:07:18.321908 kubelet[2542]: E0130 05:07:18.321837 2542 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 05:07:18.338215 systemd[1]: run-containerd-runc-k8s.io-24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43-runc.84VpvW.mount: Deactivated successfully.
Jan 30 05:07:18.338451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24e00c15ffdd4bd83d5049d4fe9cc9ef57db04e17227207aaeeb8c25f23e8f43-rootfs.mount: Deactivated successfully.
Jan 30 05:07:18.688534 kubelet[2542]: E0130 05:07:18.688496 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:18.691869 containerd[1460]: time="2025-01-30T05:07:18.691284027Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 05:07:18.715816 containerd[1460]: time="2025-01-30T05:07:18.715618803Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"90ecfe7d43281117cf514d62bc3a44fb19db9e5b67bdce21812a6a2c134baf31\""
Jan 30 05:07:18.717031 containerd[1460]: time="2025-01-30T05:07:18.716977013Z" level=info msg="StartContainer for \"90ecfe7d43281117cf514d62bc3a44fb19db9e5b67bdce21812a6a2c134baf31\""
Jan 30 05:07:18.759834 systemd[1]: Started cri-containerd-90ecfe7d43281117cf514d62bc3a44fb19db9e5b67bdce21812a6a2c134baf31.scope - libcontainer container 90ecfe7d43281117cf514d62bc3a44fb19db9e5b67bdce21812a6a2c134baf31.
Jan 30 05:07:18.798063 systemd[1]: cri-containerd-90ecfe7d43281117cf514d62bc3a44fb19db9e5b67bdce21812a6a2c134baf31.scope: Deactivated successfully.
Jan 30 05:07:18.802341 containerd[1460]: time="2025-01-30T05:07:18.802281450Z" level=info msg="StartContainer for \"90ecfe7d43281117cf514d62bc3a44fb19db9e5b67bdce21812a6a2c134baf31\" returns successfully"
Jan 30 05:07:18.832385 containerd[1460]: time="2025-01-30T05:07:18.832271111Z" level=info msg="shim disconnected" id=90ecfe7d43281117cf514d62bc3a44fb19db9e5b67bdce21812a6a2c134baf31 namespace=k8s.io
Jan 30 05:07:18.832385 containerd[1460]: time="2025-01-30T05:07:18.832385507Z" level=warning msg="cleaning up after shim disconnected" id=90ecfe7d43281117cf514d62bc3a44fb19db9e5b67bdce21812a6a2c134baf31 namespace=k8s.io
Jan 30 05:07:18.832385 containerd[1460]: time="2025-01-30T05:07:18.832408360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 05:07:19.338675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90ecfe7d43281117cf514d62bc3a44fb19db9e5b67bdce21812a6a2c134baf31-rootfs.mount: Deactivated successfully.
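[Annotation] clean-cilium-state follows the same lifecycle; its scope even deactivates (05:07:18.798063) just before StartContainer reports success, consistent with a container that exits almost immediately. The wall-clock prefixes bound its runtime directly; a small sketch (the year is an assumption, since the prefix omits it):

    from datetime import datetime

    def ts(prefix, year=2025):
        """Parse the 'Jan 30 05:07:18.759834' wall-clock prefix of a log entry."""
        return datetime.strptime(f"{year} {prefix[:22]}", "%Y %b %d %H:%M:%S.%f")

    started = ts("Jan 30 05:07:18.759834")  # Started cri-containerd-90ecfe7d....scope
    stopped = ts("Jan 30 05:07:18.798063")  # cri-containerd-90ecfe7d....scope: Deactivated
    print((stopped - started).total_seconds())  # 0.038229 -> roughly 38 ms of runtime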
Jan 30 05:07:19.696259 kubelet[2542]: E0130 05:07:19.696028 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:19.703156 containerd[1460]: time="2025-01-30T05:07:19.703069712Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 05:07:19.733497 containerd[1460]: time="2025-01-30T05:07:19.733323382Z" level=info msg="CreateContainer within sandbox \"347dacc2f9fe8fa80e76255c02c08ca9f7196655af4eb42a99e82d536a0b191a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f6b64f1cd56863f288e46f3f508eba2afc4e251953ba6659faf227f40df23003\""
Jan 30 05:07:19.735474 containerd[1460]: time="2025-01-30T05:07:19.735425984Z" level=info msg="StartContainer for \"f6b64f1cd56863f288e46f3f508eba2afc4e251953ba6659faf227f40df23003\""
Jan 30 05:07:19.793715 systemd[1]: Started cri-containerd-f6b64f1cd56863f288e46f3f508eba2afc4e251953ba6659faf227f40df23003.scope - libcontainer container f6b64f1cd56863f288e46f3f508eba2afc4e251953ba6659faf227f40df23003.
Jan 30 05:07:19.840256 containerd[1460]: time="2025-01-30T05:07:19.839836087Z" level=info msg="StartContainer for \"f6b64f1cd56863f288e46f3f508eba2afc4e251953ba6659faf227f40df23003\" returns successfully"
Jan 30 05:07:20.437129 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 05:07:20.704001 kubelet[2542]: E0130 05:07:20.703767 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:20.737043 kubelet[2542]: I0130 05:07:20.736938 2542 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4gr8t" podStartSLOduration=6.736911006 podStartE2EDuration="6.736911006s" podCreationTimestamp="2025-01-30 05:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:07:20.734916733 +0000 UTC m=+107.878395792" watchObservedRunningTime="2025-01-30 05:07:20.736911006 +0000 UTC m=+107.880390074"
Jan 30 05:07:21.710475 kubelet[2542]: E0130 05:07:21.707976 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:22.710063 kubelet[2542]: E0130 05:07:22.710015 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:24.413825 systemd-networkd[1367]: lxc_health: Link UP
Jan 30 05:07:24.428576 systemd-networkd[1367]: lxc_health: Gained carrier
Jan 30 05:07:25.459277 kubelet[2542]: E0130 05:07:25.459132 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:25.722761 kubelet[2542]: E0130 05:07:25.721929 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:26.231723 systemd-networkd[1367]: lxc_health: Gained IPv6LL
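[Annotation] The pod_startup_latency_tracker entry above reports podStartSLOduration=6.736911006s. Both image-pull timestamps are the zero value (the image was already on the node), so the SLO duration is just watchObservedRunningTime (05:07:20.736911006) minus podCreationTimestamp (05:07:14). Checking the arithmetic; Python's datetime keeps microseconds, so the trailing nanoseconds are truncated:

    from datetime import datetime, timezone

    created = datetime(2025, 1, 30, 5, 7, 14, tzinfo=timezone.utc)
    # watchObservedRunningTime, truncated from nanoseconds to microseconds
    running = datetime(2025, 1, 30, 5, 7, 20, 736911, tzinfo=timezone.utc)

    print((running - created).total_seconds())  # 6.736911, matching podStartSLOduration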
Jan 30 05:07:26.706366 systemd[1]: run-containerd-runc-k8s.io-f6b64f1cd56863f288e46f3f508eba2afc4e251953ba6659faf227f40df23003-runc.zwfihc.mount: Deactivated successfully.
Jan 30 05:07:26.726350 kubelet[2542]: E0130 05:07:26.726301 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:28.098938 kubelet[2542]: E0130 05:07:28.098790 2542 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 05:07:28.923732 systemd[1]: run-containerd-runc-k8s.io-f6b64f1cd56863f288e46f3f508eba2afc4e251953ba6659faf227f40df23003-runc.CMYWms.mount: Deactivated successfully.
Jan 30 05:07:31.104001 systemd[1]: run-containerd-runc-k8s.io-f6b64f1cd56863f288e46f3f508eba2afc4e251953ba6659faf227f40df23003-runc.nnhuvK.mount: Deactivated successfully.
Jan 30 05:07:31.181019 sshd[4341]: pam_unix(sshd:session): session closed for user core
Jan 30 05:07:31.186784 systemd[1]: sshd@27-24.144.82.28:22-147.75.109.163:36210.service: Deactivated successfully.
Jan 30 05:07:31.189938 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 05:07:31.192365 systemd-logind[1440]: Session 28 logged out. Waiting for processes to exit.
Jan 30 05:07:31.194156 systemd-logind[1440]: Removed session 28.
Jan 30 05:07:33.078852 containerd[1460]: time="2025-01-30T05:07:33.078777088Z" level=info msg="StopPodSandbox for \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\""
Jan 30 05:07:33.079536 containerd[1460]: time="2025-01-30T05:07:33.078905246Z" level=info msg="TearDown network for sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" successfully"
Jan 30 05:07:33.079536 containerd[1460]: time="2025-01-30T05:07:33.078918623Z" level=info msg="StopPodSandbox for \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" returns successfully"
Jan 30 05:07:33.079877 containerd[1460]: time="2025-01-30T05:07:33.079834549Z" level=info msg="RemovePodSandbox for \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\""
Jan 30 05:07:33.079973 containerd[1460]: time="2025-01-30T05:07:33.079884305Z" level=info msg="Forcibly stopping sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\""
Jan 30 05:07:33.080012 containerd[1460]: time="2025-01-30T05:07:33.079980000Z" level=info msg="TearDown network for sandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" successfully"
Jan 30 05:07:33.084303 containerd[1460]: time="2025-01-30T05:07:33.084238524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 05:07:33.084486 containerd[1460]: time="2025-01-30T05:07:33.084339217Z" level=info msg="RemovePodSandbox \"dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74\" returns successfully"
Jan 30 05:07:33.085015 containerd[1460]: time="2025-01-30T05:07:33.084983799Z" level=info msg="StopPodSandbox for \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\""
Jan 30 05:07:33.085128 containerd[1460]: time="2025-01-30T05:07:33.085105422Z" level=info msg="TearDown network for sandbox \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\" successfully"
Jan 30 05:07:33.085175 containerd[1460]: time="2025-01-30T05:07:33.085125997Z" level=info msg="StopPodSandbox for \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\" returns successfully"
Jan 30 05:07:33.085654 containerd[1460]: time="2025-01-30T05:07:33.085624612Z" level=info msg="RemovePodSandbox for \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\""
Jan 30 05:07:33.086350 containerd[1460]: time="2025-01-30T05:07:33.086004401Z" level=info msg="Forcibly stopping sandbox \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\""
Jan 30 05:07:33.086350 containerd[1460]: time="2025-01-30T05:07:33.086142079Z" level=info msg="TearDown network for sandbox \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\" successfully"
Jan 30 05:07:33.089778 containerd[1460]: time="2025-01-30T05:07:33.089629879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 05:07:33.089778 containerd[1460]: time="2025-01-30T05:07:33.089715947Z" level=info msg="RemovePodSandbox \"bb5e180f0cf7d45d356e52081d10dc4e01b67445e614dcd528c7a446da74b2df\" returns successfully"
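[Annotation] These StopPodSandbox/RemovePodSandbox pairs are containerd servicing kubelet's periodic garbage collection of sandboxes left over from earlier pods; the "not found" warnings are benign because the network teardown already happened. For reference, the same cleanup can be driven by hand with crictl's stopp and rmp subcommands; a minimal sketch, assuming crictl is installed and configured for this node's containerd socket:

    import subprocess

    def remove_sandbox(sandbox_id: str) -> None:
        """Stop, then remove, a pod sandbox by ID via crictl (stopp / rmp)."""
        subprocess.run(["crictl", "stopp", sandbox_id], check=True)
        subprocess.run(["crictl", "rmp", sandbox_id], check=True)

    remove_sandbox("dfa8b123cfce835337a2634894b620eacb33c2bd03a4614cdcaae5de77121b74")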