Oct 9 00:54:20.923599 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024 Oct 9 00:54:20.923628 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 00:54:20.923643 kernel: BIOS-provided physical RAM map: Oct 9 00:54:20.923652 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 9 00:54:20.923661 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 9 00:54:20.923670 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 9 00:54:20.923680 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 9 00:54:20.923689 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 9 00:54:20.923697 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 9 00:54:20.923706 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 9 00:54:20.923718 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 9 00:54:20.923727 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 9 00:54:20.923736 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 9 00:54:20.923745 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 9 00:54:20.923756 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 9 00:54:20.923765 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 9 00:54:20.923778 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 9 00:54:20.923787 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 9 00:54:20.923797 kernel: BIOS-e820: [mem 0x00000000ffe00000-0x00000000ffffffff] reserved Oct 9 00:54:20.923815 kernel: NX (Execute Disable) protection: active Oct 9 00:54:20.923825 kernel: APIC: Static calls initialized Oct 9 00:54:20.923834 kernel: e820: update [mem 0x9b66b018-0x9b674c57] usable ==> usable Oct 9 00:54:20.923844 kernel: e820: update [mem 0x9b66b018-0x9b674c57] usable ==> usable Oct 9 00:54:20.923853 kernel: e820: update [mem 0x9b62e018-0x9b66ae57] usable ==> usable Oct 9 00:54:20.923863 kernel: e820: update [mem 0x9b62e018-0x9b66ae57] usable ==> usable Oct 9 00:54:20.923872 kernel: extended physical RAM map: Oct 9 00:54:20.923881 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 9 00:54:20.923894 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 9 00:54:20.923903 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 9 00:54:20.923912 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 9 00:54:20.923922 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 9 00:54:20.923931 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 9 00:54:20.923941 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 9 00:54:20.923951 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b62e017] usable Oct 9 00:54:20.923960 kernel: 
reserve setup_data: [mem 0x000000009b62e018-0x000000009b66ae57] usable Oct 9 00:54:20.923970 kernel: reserve setup_data: [mem 0x000000009b66ae58-0x000000009b66b017] usable Oct 9 00:54:20.923979 kernel: reserve setup_data: [mem 0x000000009b66b018-0x000000009b674c57] usable Oct 9 00:54:20.923989 kernel: reserve setup_data: [mem 0x000000009b674c58-0x000000009c8eefff] usable Oct 9 00:54:20.924001 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 9 00:54:20.924011 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 9 00:54:20.924025 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 9 00:54:20.924035 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 9 00:54:20.924045 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 9 00:54:20.924055 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 9 00:54:20.924068 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 9 00:54:20.924078 kernel: reserve setup_data: [mem 0x00000000ffe00000-0x00000000ffffffff] reserved Oct 9 00:54:20.924088 kernel: efi: EFI v2.7 by EDK II Oct 9 00:54:20.924098 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b6b3018 RNG=0x9cb73018 Oct 9 00:54:20.924108 kernel: random: crng init done Oct 9 00:54:20.924119 kernel: efi: Remove mem127: MMIO range=[0xffe00000-0xffffffff] (2MB) from e820 map Oct 9 00:54:20.924129 kernel: e820: remove [mem 0xffe00000-0xffffffff] reserved Oct 9 00:54:20.924139 kernel: secureboot: Secure boot disabled Oct 9 00:54:20.924149 kernel: SMBIOS 2.8 present. Oct 9 00:54:20.924160 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Oct 9 00:54:20.924173 kernel: Hypervisor detected: KVM Oct 9 00:54:20.924183 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 00:54:20.924193 kernel: kvm-clock: using sched offset of 4653545683 cycles Oct 9 00:54:20.924204 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 00:54:20.924215 kernel: tsc: Detected 2794.750 MHz processor Oct 9 00:54:20.924226 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 00:54:20.924237 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 00:54:20.924247 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 9 00:54:20.924257 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 9 00:54:20.924268 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 00:54:20.924281 kernel: Using GB pages for direct mapping Oct 9 00:54:20.924291 kernel: ACPI: Early table checksum verification disabled Oct 9 00:54:20.924302 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 9 00:54:20.924313 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 9 00:54:20.924323 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:54:20.924334 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:54:20.924344 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 9 00:54:20.924355 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:54:20.924365 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:54:20.924379 kernel: ACPI: MCFG 0x000000009CB76000 
00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:54:20.924389 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:54:20.924400 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 9 00:54:20.924410 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 9 00:54:20.924421 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Oct 9 00:54:20.924431 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 9 00:54:20.924442 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 9 00:54:20.924452 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 9 00:54:20.924465 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 9 00:54:20.924476 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 9 00:54:20.924488 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 9 00:54:20.924500 kernel: No NUMA configuration found Oct 9 00:54:20.924595 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 9 00:54:20.924606 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 9 00:54:20.924617 kernel: Zone ranges: Oct 9 00:54:20.924627 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 00:54:20.924637 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 9 00:54:20.924648 kernel: Normal empty Oct 9 00:54:20.924662 kernel: Movable zone start for each node Oct 9 00:54:20.924673 kernel: Early memory node ranges Oct 9 00:54:20.924683 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 9 00:54:20.924694 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 9 00:54:20.924704 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 9 00:54:20.924714 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 9 00:54:20.924725 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 9 00:54:20.924735 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 9 00:54:20.924746 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 9 00:54:20.924758 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 00:54:20.924769 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 9 00:54:20.924780 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 9 00:54:20.924790 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 00:54:20.924799 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 9 00:54:20.924819 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 9 00:54:20.924829 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 9 00:54:20.924838 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 9 00:54:20.924848 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 00:54:20.924857 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 9 00:54:20.924870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 9 00:54:20.924880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 00:54:20.924889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 00:54:20.924899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 00:54:20.924908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 00:54:20.924918 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 9 
00:54:20.924928 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 9 00:54:20.924937 kernel: TSC deadline timer available Oct 9 00:54:20.924983 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 9 00:54:20.925007 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 9 00:54:20.925018 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 9 00:54:20.925031 kernel: kvm-guest: setup PV sched yield Oct 9 00:54:20.925041 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 9 00:54:20.925051 kernel: Booting paravirtualized kernel on KVM Oct 9 00:54:20.925062 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 00:54:20.925073 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 9 00:54:20.925083 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Oct 9 00:54:20.925093 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Oct 9 00:54:20.925110 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 9 00:54:20.925120 kernel: kvm-guest: PV spinlocks enabled Oct 9 00:54:20.925131 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 9 00:54:20.925143 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 00:54:20.925154 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 00:54:20.925165 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 9 00:54:20.925176 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 00:54:20.925191 kernel: Fallback order for Node 0: 0 Oct 9 00:54:20.925202 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 9 00:54:20.925213 kernel: Policy zone: DMA32 Oct 9 00:54:20.925224 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 00:54:20.925235 kernel: Memory: 2395860K/2567000K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 170884K reserved, 0K cma-reserved) Oct 9 00:54:20.925246 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 9 00:54:20.925257 kernel: ftrace: allocating 37786 entries in 148 pages Oct 9 00:54:20.925267 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 00:54:20.925278 kernel: Dynamic Preempt: voluntary Oct 9 00:54:20.925292 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 00:54:20.925304 kernel: rcu: RCU event tracing is enabled. Oct 9 00:54:20.925314 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 9 00:54:20.925325 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 00:54:20.925336 kernel: Rude variant of Tasks RCU enabled. Oct 9 00:54:20.925347 kernel: Tracing variant of Tasks RCU enabled. Oct 9 00:54:20.925358 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 9 00:54:20.925369 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 9 00:54:20.925380 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 9 00:54:20.925394 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 9 00:54:20.925404 kernel: Console: colour dummy device 80x25 Oct 9 00:54:20.925415 kernel: printk: console [ttyS0] enabled Oct 9 00:54:20.925426 kernel: ACPI: Core revision 20230628 Oct 9 00:54:20.925438 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 9 00:54:20.925448 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 00:54:20.925459 kernel: x2apic enabled Oct 9 00:54:20.925471 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 00:54:20.925482 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 9 00:54:20.925495 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 9 00:54:20.925521 kernel: kvm-guest: setup PV IPIs Oct 9 00:54:20.925532 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 9 00:54:20.925543 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 9 00:54:20.925555 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Oct 9 00:54:20.925566 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 9 00:54:20.925577 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 9 00:54:20.925587 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 9 00:54:20.925599 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 00:54:20.925613 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 00:54:20.925624 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 00:54:20.925635 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 00:54:20.925646 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 9 00:54:20.925657 kernel: RETBleed: Mitigation: untrained return thunk Oct 9 00:54:20.925668 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 9 00:54:20.925679 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 9 00:54:20.925691 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 9 00:54:20.925702 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 9 00:54:20.925717 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 9 00:54:20.925728 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 9 00:54:20.925739 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 9 00:54:20.925750 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 9 00:54:20.925761 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 9 00:54:20.925772 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 9 00:54:20.925783 kernel: Freeing SMP alternatives memory: 32K Oct 9 00:54:20.925794 kernel: pid_max: default: 32768 minimum: 301 Oct 9 00:54:20.925815 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 9 00:54:20.925826 kernel: landlock: Up and running. Oct 9 00:54:20.925837 kernel: SELinux: Initializing. 
Oct 9 00:54:20.925849 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 00:54:20.925859 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 00:54:20.925870 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 9 00:54:20.925882 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:54:20.925893 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:54:20.925904 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:54:20.925918 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 9 00:54:20.925929 kernel: ... version: 0 Oct 9 00:54:20.925940 kernel: ... bit width: 48 Oct 9 00:54:20.925950 kernel: ... generic registers: 6 Oct 9 00:54:20.925962 kernel: ... value mask: 0000ffffffffffff Oct 9 00:54:20.925973 kernel: ... max period: 00007fffffffffff Oct 9 00:54:20.925983 kernel: ... fixed-purpose events: 0 Oct 9 00:54:20.925995 kernel: ... event mask: 000000000000003f Oct 9 00:54:20.926006 kernel: signal: max sigframe size: 1776 Oct 9 00:54:20.926019 kernel: rcu: Hierarchical SRCU implementation. Oct 9 00:54:20.926031 kernel: rcu: Max phase no-delay instances is 400. Oct 9 00:54:20.926042 kernel: smp: Bringing up secondary CPUs ... Oct 9 00:54:20.926053 kernel: smpboot: x86: Booting SMP configuration: Oct 9 00:54:20.926064 kernel: .... node #0, CPUs: #1 #2 #3 Oct 9 00:54:20.926075 kernel: smp: Brought up 1 node, 4 CPUs Oct 9 00:54:20.926086 kernel: smpboot: Max logical packages: 1 Oct 9 00:54:20.926097 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Oct 9 00:54:20.926107 kernel: devtmpfs: initialized Oct 9 00:54:20.926117 kernel: x86/mm: Memory block size: 128MB Oct 9 00:54:20.926130 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 9 00:54:20.926140 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 9 00:54:20.926151 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 9 00:54:20.926161 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 9 00:54:20.926180 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 9 00:54:20.926189 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 00:54:20.926199 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 9 00:54:20.926208 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 00:54:20.926220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 00:54:20.926229 kernel: audit: initializing netlink subsys (disabled) Oct 9 00:54:20.926239 kernel: audit: type=2000 audit(1728435261.638:1): state=initialized audit_enabled=0 res=1 Oct 9 00:54:20.926248 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 00:54:20.926257 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 00:54:20.926266 kernel: cpuidle: using governor menu Oct 9 00:54:20.926275 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 00:54:20.926285 kernel: dca service started, version 1.12.1 Oct 9 00:54:20.926294 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 9 00:54:20.926306 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 9 
00:54:20.926315 kernel: PCI: Using configuration type 1 for base access Oct 9 00:54:20.926324 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 9 00:54:20.926333 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 00:54:20.926342 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 00:54:20.926351 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 00:54:20.926361 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 00:54:20.926370 kernel: ACPI: Added _OSI(Module Device) Oct 9 00:54:20.926379 kernel: ACPI: Added _OSI(Processor Device) Oct 9 00:54:20.926390 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 00:54:20.926400 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 00:54:20.926409 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 00:54:20.926418 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 00:54:20.926427 kernel: ACPI: Interpreter enabled Oct 9 00:54:20.926436 kernel: ACPI: PM: (supports S0 S3 S5) Oct 9 00:54:20.926445 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 00:54:20.926454 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 00:54:20.926464 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 00:54:20.926476 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 9 00:54:20.926487 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 00:54:20.926704 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 9 00:54:20.926857 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 9 00:54:20.926990 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 9 00:54:20.927002 kernel: PCI host bridge to bus 0000:00 Oct 9 00:54:20.927137 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 9 00:54:20.927265 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 00:54:20.927385 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 9 00:54:20.927556 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 9 00:54:20.927681 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 9 00:54:20.927800 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Oct 9 00:54:20.927931 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 00:54:20.928080 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 9 00:54:20.928228 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 9 00:54:20.928361 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 9 00:54:20.928492 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Oct 9 00:54:20.928669 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 9 00:54:20.928837 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Oct 9 00:54:20.928992 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 00:54:20.929164 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 9 00:54:20.929318 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Oct 9 00:54:20.929473 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Oct 9 00:54:20.929657 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 9 00:54:20.929833 kernel: 
pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 9 00:54:20.929988 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Oct 9 00:54:20.930142 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 9 00:54:20.930304 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 9 00:54:20.930479 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 9 00:54:20.930659 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Oct 9 00:54:20.930798 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 9 00:54:20.930944 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 9 00:54:20.931077 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 9 00:54:20.931218 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 9 00:54:20.931358 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 9 00:54:20.931498 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 9 00:54:20.931682 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Oct 9 00:54:20.931849 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Oct 9 00:54:20.932013 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 9 00:54:20.932168 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Oct 9 00:54:20.932188 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 00:54:20.932199 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 00:54:20.932210 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 00:54:20.932221 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 00:54:20.932232 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 9 00:54:20.932243 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 9 00:54:20.932254 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 9 00:54:20.932265 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 9 00:54:20.932275 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 9 00:54:20.932290 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 9 00:54:20.932300 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 9 00:54:20.932311 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 9 00:54:20.932321 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 9 00:54:20.932331 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 9 00:54:20.932342 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 9 00:54:20.932352 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 9 00:54:20.932363 kernel: iommu: Default domain type: Translated Oct 9 00:54:20.932374 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 00:54:20.932384 kernel: efivars: Registered efivars operations Oct 9 00:54:20.932399 kernel: PCI: Using ACPI for IRQ routing Oct 9 00:54:20.932410 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 00:54:20.932420 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 9 00:54:20.932431 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 9 00:54:20.932441 kernel: e820: reserve RAM buffer [mem 0x9b62e018-0x9bffffff] Oct 9 00:54:20.932452 kernel: e820: reserve RAM buffer [mem 0x9b66b018-0x9bffffff] Oct 9 00:54:20.932463 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 9 00:54:20.932473 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 9 00:54:20.932705 kernel: 
pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 9 00:54:20.932872 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 9 00:54:20.933028 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 00:54:20.933042 kernel: vgaarb: loaded Oct 9 00:54:20.933054 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 9 00:54:20.933065 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 9 00:54:20.933076 kernel: clocksource: Switched to clocksource kvm-clock Oct 9 00:54:20.933087 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 00:54:20.933098 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 00:54:20.933113 kernel: pnp: PnP ACPI init Oct 9 00:54:20.933273 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 9 00:54:20.933289 kernel: pnp: PnP ACPI: found 6 devices Oct 9 00:54:20.933301 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 9 00:54:20.933311 kernel: NET: Registered PF_INET protocol family Oct 9 00:54:20.933322 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 9 00:54:20.933333 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 9 00:54:20.933344 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 00:54:20.933360 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 00:54:20.933370 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 9 00:54:20.933381 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 9 00:54:20.933392 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 00:54:20.933403 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 00:54:20.933413 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 00:54:20.933423 kernel: NET: Registered PF_XDP protocol family Oct 9 00:54:20.933605 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 9 00:54:20.933765 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 9 00:54:20.933917 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 00:54:20.934057 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 00:54:20.934194 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 00:54:20.934333 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 9 00:54:20.934471 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 9 00:54:20.934667 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Oct 9 00:54:20.934683 kernel: PCI: CLS 0 bytes, default 64 Oct 9 00:54:20.934699 kernel: Initialise system trusted keyrings Oct 9 00:54:20.934730 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 9 00:54:20.934744 kernel: Key type asymmetric registered Oct 9 00:54:20.934755 kernel: Asymmetric key parser 'x509' registered Oct 9 00:54:20.934767 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 00:54:20.934778 kernel: io scheduler mq-deadline registered Oct 9 00:54:20.934790 kernel: io scheduler kyber registered Oct 9 00:54:20.934801 kernel: io scheduler bfq registered Oct 9 00:54:20.934822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 00:54:20.934837 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 9 
00:54:20.934849 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 9 00:54:20.934860 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 9 00:54:20.934872 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 00:54:20.934883 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 00:54:20.934895 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 00:54:20.934906 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 00:54:20.934918 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 00:54:20.935073 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 9 00:54:20.935092 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 00:54:20.935230 kernel: rtc_cmos 00:04: registered as rtc0 Oct 9 00:54:20.935370 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T00:54:20 UTC (1728435260) Oct 9 00:54:20.935531 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 9 00:54:20.935547 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 9 00:54:20.935559 kernel: efifb: probing for efifb Oct 9 00:54:20.935570 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 9 00:54:20.935582 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 9 00:54:20.935597 kernel: efifb: scrolling: redraw Oct 9 00:54:20.935608 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 9 00:54:20.935620 kernel: Console: switching to colour frame buffer device 160x50 Oct 9 00:54:20.935631 kernel: fb0: EFI VGA frame buffer device Oct 9 00:54:20.935645 kernel: pstore: Using crash dump compression: deflate Oct 9 00:54:20.935656 kernel: pstore: Registered efi_pstore as persistent store backend Oct 9 00:54:20.935670 kernel: NET: Registered PF_INET6 protocol family Oct 9 00:54:20.935682 kernel: Segment Routing with IPv6 Oct 9 00:54:20.935693 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 00:54:20.935704 kernel: NET: Registered PF_PACKET protocol family Oct 9 00:54:20.935716 kernel: Key type dns_resolver registered Oct 9 00:54:20.935726 kernel: IPI shorthand broadcast: enabled Oct 9 00:54:20.935738 kernel: sched_clock: Marking stable (616002625, 136539171)->(767922855, -15381059) Oct 9 00:54:20.935749 kernel: registered taskstats version 1 Oct 9 00:54:20.935761 kernel: Loading compiled-in X.509 certificates Oct 9 00:54:20.935775 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6' Oct 9 00:54:20.935786 kernel: Key type .fscrypt registered Oct 9 00:54:20.935797 kernel: Key type fscrypt-provisioning registered Oct 9 00:54:20.935818 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 9 00:54:20.935829 kernel: ima: Allocated hash algorithm: sha1 Oct 9 00:54:20.935841 kernel: ima: No architecture policies found Oct 9 00:54:20.935852 kernel: clk: Disabling unused clocks Oct 9 00:54:20.935863 kernel: Freeing unused kernel image (initmem) memory: 42872K Oct 9 00:54:20.935874 kernel: Write protecting the kernel read-only data: 36864k Oct 9 00:54:20.935889 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Oct 9 00:54:20.935901 kernel: Run /init as init process Oct 9 00:54:20.935911 kernel: with arguments: Oct 9 00:54:20.935923 kernel: /init Oct 9 00:54:20.935934 kernel: with environment: Oct 9 00:54:20.935944 kernel: HOME=/ Oct 9 00:54:20.935956 kernel: TERM=linux Oct 9 00:54:20.935967 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 00:54:20.935980 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 00:54:20.935998 systemd[1]: Detected virtualization kvm. Oct 9 00:54:20.936010 systemd[1]: Detected architecture x86-64. Oct 9 00:54:20.936021 systemd[1]: Running in initrd. Oct 9 00:54:20.936033 systemd[1]: No hostname configured, using default hostname. Oct 9 00:54:20.936045 systemd[1]: Hostname set to . Oct 9 00:54:20.936057 systemd[1]: Initializing machine ID from VM UUID. Oct 9 00:54:20.936070 systemd[1]: Queued start job for default target initrd.target. Oct 9 00:54:20.936084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:54:20.936096 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:54:20.936109 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 9 00:54:20.936121 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 00:54:20.936134 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 00:54:20.936146 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 00:54:20.936160 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 00:54:20.936175 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 00:54:20.936187 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:54:20.936200 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:54:20.936212 systemd[1]: Reached target paths.target - Path Units. Oct 9 00:54:20.936224 systemd[1]: Reached target slices.target - Slice Units. Oct 9 00:54:20.936236 systemd[1]: Reached target swap.target - Swaps. Oct 9 00:54:20.936248 systemd[1]: Reached target timers.target - Timer Units. Oct 9 00:54:20.936259 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 00:54:20.936274 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 00:54:20.936286 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 00:54:20.936298 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Oct 9 00:54:20.936310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:54:20.936322 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 00:54:20.936334 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:54:20.936346 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 00:54:20.936358 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 00:54:20.936371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 00:54:20.936385 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 00:54:20.936397 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 00:54:20.936408 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 00:54:20.936420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 00:54:20.936432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:54:20.936444 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 00:54:20.936456 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:54:20.936468 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 00:54:20.936505 systemd-journald[193]: Collecting audit messages is disabled. Oct 9 00:54:20.936562 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 00:54:20.936575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:54:20.936587 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:54:20.936600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 00:54:20.936612 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 00:54:20.936625 systemd-journald[193]: Journal started Oct 9 00:54:20.936653 systemd-journald[193]: Runtime Journal (/run/log/journal/fe9d9b1de00c4bc3a66e08a247bb7f88) is 6.0M, max 48.3M, 42.2M free. Oct 9 00:54:20.923431 systemd-modules-load[194]: Inserted module 'overlay' Oct 9 00:54:20.938537 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 00:54:20.940413 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 00:54:20.949495 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:54:20.957706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:54:20.963148 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:54:20.965954 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 00:54:20.965979 kernel: Bridge firewalling registered Oct 9 00:54:20.966530 systemd-modules-load[194]: Inserted module 'br_netfilter' Oct 9 00:54:20.969660 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 00:54:20.970898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 00:54:20.974284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Oct 9 00:54:20.982276 dracut-cmdline[221]: dracut-dracut-053 Oct 9 00:54:20.984812 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 00:54:20.989403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:54:20.998725 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 00:54:21.028615 systemd-resolved[248]: Positive Trust Anchors: Oct 9 00:54:21.028632 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 00:54:21.028663 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 00:54:21.031065 systemd-resolved[248]: Defaulting to hostname 'linux'. Oct 9 00:54:21.032131 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 00:54:21.038564 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:54:21.071543 kernel: SCSI subsystem initialized Oct 9 00:54:21.080536 kernel: Loading iSCSI transport class v2.0-870. Oct 9 00:54:21.091535 kernel: iscsi: registered transport (tcp) Oct 9 00:54:21.111617 kernel: iscsi: registered transport (qla4xxx) Oct 9 00:54:21.111636 kernel: QLogic iSCSI HBA Driver Oct 9 00:54:21.166701 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 00:54:21.174634 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 00:54:21.201785 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 00:54:21.201831 kernel: device-mapper: uevent: version 1.0.3 Oct 9 00:54:21.203034 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 00:54:21.246539 kernel: raid6: avx2x4 gen() 30656 MB/s Oct 9 00:54:21.263538 kernel: raid6: avx2x2 gen() 29705 MB/s Oct 9 00:54:21.280616 kernel: raid6: avx2x1 gen() 26018 MB/s Oct 9 00:54:21.280638 kernel: raid6: using algorithm avx2x4 gen() 30656 MB/s Oct 9 00:54:21.298603 kernel: raid6: .... xor() 6609 MB/s, rmw enabled Oct 9 00:54:21.298631 kernel: raid6: using avx2x2 recovery algorithm Oct 9 00:54:21.318527 kernel: xor: automatically using best checksumming function avx Oct 9 00:54:21.469545 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 00:54:21.483463 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 00:54:21.496855 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:54:21.512282 systemd-udevd[414]: Using default interface naming scheme 'v255'. Oct 9 00:54:21.518078 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Oct 9 00:54:21.535877 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 00:54:21.553348 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Oct 9 00:54:21.592694 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 00:54:21.610867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 00:54:21.685430 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:54:21.693701 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 00:54:21.711193 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 00:54:21.714442 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 00:54:21.715129 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:54:21.715484 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 00:54:21.725798 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 00:54:21.731561 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 9 00:54:21.731778 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 9 00:54:21.737454 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 00:54:21.738027 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 00:54:21.744190 kernel: GPT:9289727 != 19775487 Oct 9 00:54:21.744220 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 00:54:21.744234 kernel: GPT:9289727 != 19775487 Oct 9 00:54:21.744247 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 00:54:21.744268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:54:21.748574 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 00:54:21.757603 kernel: libata version 3.00 loaded. Oct 9 00:54:21.766639 kernel: ahci 0000:00:1f.2: version 3.0 Oct 9 00:54:21.769121 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 9 00:54:21.769146 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 9 00:54:21.769326 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 9 00:54:21.771888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 00:54:21.772046 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:54:21.774069 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:54:21.774738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 00:54:21.775298 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:54:21.798155 kernel: scsi host0: ahci Oct 9 00:54:21.799928 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (472) Oct 9 00:54:21.795684 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:54:21.802611 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (470) Oct 9 00:54:21.804533 kernel: scsi host1: ahci Oct 9 00:54:21.807544 kernel: AVX2 version of gcm_enc/dec engaged. Oct 9 00:54:21.807604 kernel: scsi host2: ahci Oct 9 00:54:21.807822 kernel: AES CTR mode by8 optimization enabled Oct 9 00:54:21.807822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 9 00:54:21.813054 kernel: scsi host3: ahci Oct 9 00:54:21.813258 kernel: scsi host4: ahci Oct 9 00:54:21.817870 kernel: scsi host5: ahci Oct 9 00:54:21.818107 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Oct 9 00:54:21.818121 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Oct 9 00:54:21.818132 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Oct 9 00:54:21.818144 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Oct 9 00:54:21.818161 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Oct 9 00:54:21.819585 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Oct 9 00:54:21.844847 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 00:54:21.850864 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 00:54:21.860965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 00:54:21.867733 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 00:54:21.870580 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 00:54:21.892838 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 00:54:21.894157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 00:54:21.894238 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:54:21.896683 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:54:21.899956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:54:21.904240 disk-uuid[555]: Primary Header is updated. Oct 9 00:54:21.904240 disk-uuid[555]: Secondary Entries is updated. Oct 9 00:54:21.904240 disk-uuid[555]: Secondary Header is updated. Oct 9 00:54:21.908555 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:54:21.912542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:54:21.919003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:54:21.931811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:54:21.951347 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 9 00:54:22.130543 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 9 00:54:22.130633 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 9 00:54:22.131536 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 9 00:54:22.132557 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 9 00:54:22.132647 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 9 00:54:22.133542 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 9 00:54:22.135012 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 9 00:54:22.135026 kernel: ata3.00: applying bridge limits Oct 9 00:54:22.136555 kernel: ata3.00: configured for UDMA/100 Oct 9 00:54:22.138544 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 9 00:54:22.183559 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 9 00:54:22.184022 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 9 00:54:22.197549 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 9 00:54:22.938118 disk-uuid[557]: The operation has completed successfully. Oct 9 00:54:22.939323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:54:22.965426 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 00:54:22.965572 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 00:54:22.998651 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 00:54:23.002118 sh[597]: Success Oct 9 00:54:23.014561 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 9 00:54:23.045470 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 00:54:23.059938 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 00:54:23.062929 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 00:54:23.073042 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377 Oct 9 00:54:23.073072 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 00:54:23.073083 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 00:54:23.074092 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 00:54:23.074831 kernel: BTRFS info (device dm-0): using free space tree Oct 9 00:54:23.080165 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 00:54:23.081406 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 00:54:23.088654 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 00:54:23.089888 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 00:54:23.099808 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:54:23.099838 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 00:54:23.099859 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:54:23.103564 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:54:23.111627 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 00:54:23.113422 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:54:23.191655 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Oct 9 00:54:23.205764 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 00:54:23.237999 systemd-networkd[775]: lo: Link UP Oct 9 00:54:23.238009 systemd-networkd[775]: lo: Gained carrier Oct 9 00:54:23.239530 systemd-networkd[775]: Enumeration completed Oct 9 00:54:23.239622 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 00:54:23.239908 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:54:23.239912 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 00:54:23.240716 systemd-networkd[775]: eth0: Link UP Oct 9 00:54:23.240720 systemd-networkd[775]: eth0: Gained carrier Oct 9 00:54:23.240726 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:54:23.241287 systemd[1]: Reached target network.target - Network. Oct 9 00:54:23.265549 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 00:54:23.330201 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 00:54:23.334781 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 00:54:23.392099 ignition[780]: Ignition 2.19.0 Oct 9 00:54:23.392113 ignition[780]: Stage: fetch-offline Oct 9 00:54:23.392159 ignition[780]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:54:23.392169 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:54:23.392293 ignition[780]: parsed url from cmdline: "" Oct 9 00:54:23.392298 ignition[780]: no config URL provided Oct 9 00:54:23.392306 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 00:54:23.392318 ignition[780]: no config at "/usr/lib/ignition/user.ign" Oct 9 00:54:23.392358 ignition[780]: op(1): [started] loading QEMU firmware config module Oct 9 00:54:23.392365 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 9 00:54:23.401360 systemd-resolved[248]: Detected conflict on linux IN A 10.0.0.13 Oct 9 00:54:23.401380 systemd-resolved[248]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Oct 9 00:54:23.403990 ignition[780]: op(1): [finished] loading QEMU firmware config module Oct 9 00:54:23.443474 ignition[780]: parsing config with SHA512: 3c67463675f7b4e3048e7cdba08e849f986dea7bbf998ef09019c05552591d4d527347bd46364f3daa5e4bdb6dd0f58c9976eb77e1937b1cf65d79d464332d3c Oct 9 00:54:23.447545 unknown[780]: fetched base config from "system" Oct 9 00:54:23.447565 unknown[780]: fetched user config from "qemu" Oct 9 00:54:23.450193 ignition[780]: fetch-offline: fetch-offline passed Oct 9 00:54:23.450269 ignition[780]: Ignition finished successfully Oct 9 00:54:23.454047 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 00:54:23.455452 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 9 00:54:23.465651 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 9 00:54:23.477928 ignition[789]: Ignition 2.19.0 Oct 9 00:54:23.477939 ignition[789]: Stage: kargs Oct 9 00:54:23.478102 ignition[789]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:54:23.478112 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:54:23.478948 ignition[789]: kargs: kargs passed Oct 9 00:54:23.478991 ignition[789]: Ignition finished successfully Oct 9 00:54:23.482021 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 00:54:23.493635 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 00:54:23.504617 ignition[797]: Ignition 2.19.0 Oct 9 00:54:23.504630 ignition[797]: Stage: disks Oct 9 00:54:23.504813 ignition[797]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:54:23.504825 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:54:23.505659 ignition[797]: disks: disks passed Oct 9 00:54:23.505705 ignition[797]: Ignition finished successfully Oct 9 00:54:23.511121 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 00:54:23.511537 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 00:54:23.513800 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 00:54:23.516884 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 00:54:23.517467 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 00:54:23.517972 systemd[1]: Reached target basic.target - Basic System. Oct 9 00:54:23.531787 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 00:54:23.545039 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 00:54:23.552230 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 00:54:23.557664 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 00:54:23.644531 kernel: EXT4-fs (vda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none. Oct 9 00:54:23.644943 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 00:54:23.647277 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 00:54:23.661643 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 00:54:23.663666 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 00:54:23.665026 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 00:54:23.670710 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) Oct 9 00:54:23.665076 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 00:54:23.665106 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 00:54:23.678675 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:54:23.678697 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 00:54:23.678712 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:54:23.678737 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:54:23.673798 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 00:54:23.680635 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 00:54:23.683205 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 00:54:23.718799 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 00:54:23.722304 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Oct 9 00:54:23.726716 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 00:54:23.730346 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 00:54:23.808636 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 00:54:23.825650 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 00:54:23.827667 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 00:54:23.835556 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:54:23.852241 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 00:54:23.856144 ignition[930]: INFO : Ignition 2.19.0 Oct 9 00:54:23.856144 ignition[930]: INFO : Stage: mount Oct 9 00:54:23.858909 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:54:23.858909 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:54:23.858909 ignition[930]: INFO : mount: mount passed Oct 9 00:54:23.858909 ignition[930]: INFO : Ignition finished successfully Oct 9 00:54:23.860966 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 00:54:23.866692 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 00:54:24.072269 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 00:54:24.088878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 00:54:24.097546 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (944) Oct 9 00:54:24.097586 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:54:24.097601 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 00:54:24.099083 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:54:24.102528 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:54:24.103902 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 00:54:24.123762 ignition[961]: INFO : Ignition 2.19.0 Oct 9 00:54:24.123762 ignition[961]: INFO : Stage: files Oct 9 00:54:24.125821 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:54:24.125821 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:54:24.125821 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Oct 9 00:54:24.129608 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 00:54:24.129608 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 00:54:24.134250 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 00:54:24.135703 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 00:54:24.137486 unknown[961]: wrote ssh authorized keys file for user: core Oct 9 00:54:24.138653 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 00:54:24.140857 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 00:54:24.142876 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 00:54:24.142876 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 00:54:24.142876 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 00:54:24.180921 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 9 00:54:24.266983 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 00:54:24.269313 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 9 00:54:24.269313 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 9 00:54:24.729425 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 9 00:54:24.803853 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 9 00:54:24.803853 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 00:54:24.808288 ignition[961]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 00:54:24.808288 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Oct 9 00:54:25.091785 systemd-networkd[775]: eth0: Gained IPv6LL Oct 9 00:54:25.228618 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Oct 9 00:54:25.510053 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 00:54:25.510053 ignition[961]: INFO : files: op(d): [started] processing unit "containerd.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(d): [finished] processing unit "containerd.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Oct 9 00:54:25.513876 ignition[961]: INFO : files: op(13): [started] 
setting preset to disabled for "coreos-metadata.service" Oct 9 00:54:25.536137 ignition[961]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 00:54:25.538704 ignition[961]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 00:54:25.540298 ignition[961]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Oct 9 00:54:25.540298 ignition[961]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Oct 9 00:54:25.540298 ignition[961]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 00:54:25.540298 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 00:54:25.540298 ignition[961]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 00:54:25.540298 ignition[961]: INFO : files: files passed Oct 9 00:54:25.540298 ignition[961]: INFO : Ignition finished successfully Oct 9 00:54:25.541654 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 00:54:25.549878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 00:54:25.552922 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 00:54:25.554650 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 00:54:25.554801 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 00:54:25.565236 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Oct 9 00:54:25.568178 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 00:54:25.568178 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 00:54:25.571364 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 00:54:25.571621 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 00:54:25.574417 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 00:54:25.583874 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 00:54:25.610687 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 00:54:25.610869 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 00:54:25.612273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 00:54:25.615908 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 00:54:25.617912 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 00:54:25.639735 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 00:54:25.655872 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 00:54:25.669609 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 00:54:25.681964 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:54:25.682629 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
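
The Ignition files stage logged above writes the core user's SSH key, downloads the helm and cilium archives, installs the prepare-helm and coreos-metadata units with their presets, adds a containerd drop-in, and links /etc/extensions/kubernetes.raw to the downloaded sysext image, all driven by the user config fetched over qemu_fw_cfg during fetch-offline. An Ignition spec-3 config producing writes of this shape would look roughly like the fragment below; the spec version, key material, and unit bodies are placeholders, not the real provisioning config:

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "mode": 420,
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n...placeholder unit body..." },
          { "name": "coreos-metadata.service", "enabled": false },
          {
            "name": "containerd.service",
            "dropins": [ { "name": "10-use-cgroupfs.conf", "contents": "[Service]\n...placeholder drop-in..." } ]
          }
        ]
      }
    }
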
Oct 9 00:54:25.683142 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 00:54:25.683455 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 00:54:25.683643 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 00:54:25.690016 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 00:54:25.690867 systemd[1]: Stopped target basic.target - Basic System. Oct 9 00:54:25.691191 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 00:54:25.691562 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 00:54:25.692034 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 00:54:25.692394 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 00:54:25.692966 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 00:54:25.693335 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 00:54:25.693896 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 00:54:25.694250 systemd[1]: Stopped target swap.target - Swaps. Oct 9 00:54:25.694791 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 00:54:25.694928 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 00:54:25.712405 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:54:25.712958 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:54:25.713251 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 00:54:25.713386 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:54:25.717816 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 00:54:25.717936 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 00:54:25.720200 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 00:54:25.720300 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 00:54:25.723074 systemd[1]: Stopped target paths.target - Path Units. Oct 9 00:54:25.723315 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 00:54:25.726574 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:54:25.727092 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 00:54:25.727409 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 00:54:25.727911 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 00:54:25.728019 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 00:54:25.733908 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 00:54:25.734018 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 00:54:25.735695 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 00:54:25.735833 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 00:54:25.737482 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 00:54:25.737624 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 00:54:25.748692 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 00:54:25.751375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 00:54:25.751864 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Oct 9 00:54:25.752004 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:54:25.753937 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 00:54:25.754078 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 00:54:25.761522 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 00:54:25.761637 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 00:54:25.775435 ignition[1017]: INFO : Ignition 2.19.0 Oct 9 00:54:25.775435 ignition[1017]: INFO : Stage: umount Oct 9 00:54:25.777320 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:54:25.777320 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:54:25.778567 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 00:54:25.781248 ignition[1017]: INFO : umount: umount passed Oct 9 00:54:25.782103 ignition[1017]: INFO : Ignition finished successfully Oct 9 00:54:25.785083 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 00:54:25.785204 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 00:54:25.788660 systemd[1]: Stopped target network.target - Network. Oct 9 00:54:25.789110 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 00:54:25.789156 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 00:54:25.791916 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 00:54:25.791966 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 00:54:25.793914 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 00:54:25.793959 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 00:54:25.795843 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 00:54:25.795889 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 00:54:25.798027 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 00:54:25.800134 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 00:54:25.805578 systemd-networkd[775]: eth0: DHCPv6 lease lost Oct 9 00:54:25.808317 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 00:54:25.808532 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 00:54:25.809630 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 00:54:25.809690 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:54:25.829701 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 00:54:25.830246 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 00:54:25.830322 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 00:54:25.831017 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:54:25.836626 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 00:54:25.836826 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 00:54:25.841260 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 00:54:25.841338 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:54:25.841867 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 00:54:25.841922 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Oct 9 00:54:25.842226 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 00:54:25.842280 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:54:25.860933 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 00:54:25.861185 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:54:25.861621 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 00:54:25.861693 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 00:54:25.862124 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 00:54:25.862168 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:54:25.862466 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 00:54:25.862621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 00:54:25.863456 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 00:54:25.863529 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 00:54:25.864519 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 00:54:25.864574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:54:25.866262 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 00:54:25.866917 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 00:54:25.866983 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:54:25.867542 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 9 00:54:25.867599 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 00:54:25.868112 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 00:54:25.868163 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:54:25.868489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 00:54:25.868556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:54:25.880490 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 00:54:25.880685 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 00:54:25.909678 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 00:54:25.909855 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 00:54:25.998270 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 00:54:25.998413 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 00:54:25.999364 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 00:54:26.001639 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 00:54:26.001701 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 00:54:26.016815 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 00:54:26.025185 systemd[1]: Switching root. Oct 9 00:54:26.063103 systemd-journald[193]: Journal stopped Oct 9 00:54:27.365225 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Oct 9 00:54:27.365284 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 00:54:27.365303 kernel: SELinux: policy capability open_perms=1 Oct 9 00:54:27.365324 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 00:54:27.365337 kernel: SELinux: policy capability always_check_network=0 Oct 9 00:54:27.365352 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 00:54:27.365364 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 00:54:27.365379 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 00:54:27.365390 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 00:54:27.365402 kernel: audit: type=1403 audit(1728435266.635:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 00:54:27.365414 systemd[1]: Successfully loaded SELinux policy in 40.480ms. Oct 9 00:54:27.365439 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.674ms. Oct 9 00:54:27.365453 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 00:54:27.365465 systemd[1]: Detected virtualization kvm. Oct 9 00:54:27.365477 systemd[1]: Detected architecture x86-64. Oct 9 00:54:27.365489 systemd[1]: Detected first boot. Oct 9 00:54:27.365525 systemd[1]: Initializing machine ID from VM UUID. Oct 9 00:54:27.365541 zram_generator::config[1082]: No configuration found. Oct 9 00:54:27.365554 systemd[1]: Populated /etc with preset unit settings. Oct 9 00:54:27.365566 systemd[1]: Queued start job for default target multi-user.target. Oct 9 00:54:27.365581 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 9 00:54:27.365595 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 00:54:27.365608 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 00:54:27.365629 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 00:54:27.365650 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 00:54:27.365665 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 00:54:27.365680 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 00:54:27.365693 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 00:54:27.365704 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 00:54:27.365716 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:54:27.365728 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:54:27.365743 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 00:54:27.365756 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 00:54:27.365771 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 00:54:27.365783 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 00:54:27.365795 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Oct 9 00:54:27.365807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:54:27.365822 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 00:54:27.365839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:54:27.365851 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 00:54:27.365863 systemd[1]: Reached target slices.target - Slice Units. Oct 9 00:54:27.365878 systemd[1]: Reached target swap.target - Swaps. Oct 9 00:54:27.365892 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 00:54:27.365904 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 00:54:27.365916 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 00:54:27.365928 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 00:54:27.365940 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:54:27.365952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 00:54:27.365965 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:54:27.365979 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 00:54:27.365993 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 00:54:27.366006 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 00:54:27.366018 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 00:54:27.366030 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:54:27.366043 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 00:54:27.366056 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 00:54:27.366069 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 00:54:27.366081 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 00:54:27.366093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:54:27.366107 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 00:54:27.366121 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 00:54:27.366134 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:54:27.366148 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 00:54:27.366159 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:54:27.366172 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 00:54:27.366188 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:54:27.366203 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 00:54:27.366218 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 9 00:54:27.366231 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Oct 9 00:54:27.366243 systemd[1]: Starting systemd-journald.service - Journal Service... 
Oct 9 00:54:27.366255 kernel: loop: module loaded Oct 9 00:54:27.366266 kernel: fuse: init (API version 7.39) Oct 9 00:54:27.366280 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 00:54:27.366310 systemd-journald[1156]: Collecting audit messages is disabled. Oct 9 00:54:27.366334 systemd-journald[1156]: Journal started Oct 9 00:54:27.366356 systemd-journald[1156]: Runtime Journal (/run/log/journal/fe9d9b1de00c4bc3a66e08a247bb7f88) is 6.0M, max 48.3M, 42.2M free. Oct 9 00:54:27.370450 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 00:54:27.375702 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 00:54:27.395529 kernel: ACPI: bus type drm_connector registered Oct 9 00:54:27.403576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 00:54:27.408549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:54:27.412980 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 00:54:27.414400 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 00:54:27.415822 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 00:54:27.417787 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 00:54:27.418974 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 00:54:27.420450 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 00:54:27.421971 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 00:54:27.423810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:54:27.425431 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 00:54:27.425679 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 00:54:27.427259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:54:27.427480 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:54:27.429252 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 00:54:27.429544 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 00:54:27.431431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:54:27.431672 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:54:27.433473 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 00:54:27.433758 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 00:54:27.435643 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:54:27.435882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:54:27.437649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 00:54:27.439680 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 00:54:27.441689 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 00:54:27.456985 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 00:54:27.476586 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 00:54:27.479683 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Oct 9 00:54:27.483310 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 00:54:27.486124 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 00:54:27.499758 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 00:54:27.501374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 00:54:27.504749 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 00:54:27.507013 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 00:54:27.509012 systemd-journald[1156]: Time spent on flushing to /var/log/journal/fe9d9b1de00c4bc3a66e08a247bb7f88 is 13.251ms for 1016 entries. Oct 9 00:54:27.509012 systemd-journald[1156]: System Journal (/var/log/journal/fe9d9b1de00c4bc3a66e08a247bb7f88) is 8.0M, max 195.6M, 187.6M free. Oct 9 00:54:27.530831 systemd-journald[1156]: Received client request to flush runtime journal. Oct 9 00:54:27.510488 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:54:27.515109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 00:54:27.522263 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 00:54:27.525906 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:54:27.527540 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 00:54:27.529018 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 00:54:27.531930 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 00:54:27.533867 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 00:54:27.544113 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 00:54:27.547837 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Oct 9 00:54:27.547859 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Oct 9 00:54:27.554696 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 00:54:27.556400 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:54:27.558001 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 00:54:27.566651 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 00:54:27.570523 udevadm[1228]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 9 00:54:27.594265 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 00:54:27.606764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 00:54:27.623199 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Oct 9 00:54:27.623221 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Oct 9 00:54:27.629158 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:54:28.053373 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Oct 9 00:54:28.061694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:54:28.088474 systemd-udevd[1247]: Using default interface naming scheme 'v255'. Oct 9 00:54:28.108073 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:54:28.116660 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 00:54:28.128740 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 00:54:28.151545 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1257) Oct 9 00:54:28.153542 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1257) Oct 9 00:54:28.185535 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1253) Oct 9 00:54:28.190158 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Oct 9 00:54:28.195363 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 00:54:28.212727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 00:54:28.228540 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 9 00:54:28.234554 kernel: ACPI: button: Power Button [PWRF] Oct 9 00:54:28.239614 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 9 00:54:28.239837 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 9 00:54:28.240971 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 9 00:54:28.241166 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 9 00:54:28.280559 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 9 00:54:28.280296 systemd-networkd[1256]: lo: Link UP Oct 9 00:54:28.280310 systemd-networkd[1256]: lo: Gained carrier Oct 9 00:54:28.282454 systemd-networkd[1256]: Enumeration completed Oct 9 00:54:28.282931 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:54:28.282936 systemd-networkd[1256]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 00:54:28.284096 systemd-networkd[1256]: eth0: Link UP Oct 9 00:54:28.284104 systemd-networkd[1256]: eth0: Gained carrier Oct 9 00:54:28.284119 systemd-networkd[1256]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:54:28.288790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:54:28.290631 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 00:54:28.296524 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 00:54:28.297572 systemd-networkd[1256]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 00:54:28.311818 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 00:54:28.321799 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 00:54:28.322234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:54:28.330667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 9 00:54:28.402155 kernel: kvm_amd: TSC scaling supported Oct 9 00:54:28.402243 kernel: kvm_amd: Nested Virtualization enabled Oct 9 00:54:28.402263 kernel: kvm_amd: Nested Paging enabled Oct 9 00:54:28.402748 kernel: kvm_amd: LBR virtualization supported Oct 9 00:54:28.404061 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 9 00:54:28.404090 kernel: kvm_amd: Virtual GIF supported Oct 9 00:54:28.424537 kernel: EDAC MC: Ver: 3.0.0 Oct 9 00:54:28.433438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:54:28.466127 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 00:54:28.475822 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 00:54:28.484268 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 00:54:28.517923 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 00:54:28.519532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:54:28.530654 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 00:54:28.536455 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 00:54:28.569358 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 00:54:28.570883 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 00:54:28.572160 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 00:54:28.572183 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 00:54:28.573243 systemd[1]: Reached target machines.target - Containers. Oct 9 00:54:28.575279 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 00:54:28.586626 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 00:54:28.589073 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 00:54:28.590309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:54:28.591256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 00:54:28.595930 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 00:54:28.599949 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 00:54:28.602676 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 00:54:28.611204 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 00:54:28.617560 kernel: loop0: detected capacity change from 0 to 138192 Oct 9 00:54:28.627911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 00:54:28.629055 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Oct 9 00:54:28.641559 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 00:54:28.667549 kernel: loop1: detected capacity change from 0 to 211296 Oct 9 00:54:28.701535 kernel: loop2: detected capacity change from 0 to 140992 Oct 9 00:54:28.736551 kernel: loop3: detected capacity change from 0 to 138192 Oct 9 00:54:28.749544 kernel: loop4: detected capacity change from 0 to 211296 Oct 9 00:54:28.759534 kernel: loop5: detected capacity change from 0 to 140992 Oct 9 00:54:28.770029 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 9 00:54:28.770794 (sd-merge)[1330]: Merged extensions into '/usr'. Oct 9 00:54:28.775440 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 00:54:28.775456 systemd[1]: Reloading... Oct 9 00:54:28.838596 zram_generator::config[1361]: No configuration found. Oct 9 00:54:28.874386 ldconfig[1311]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 00:54:28.963592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:54:29.048064 systemd[1]: Reloading finished in 272 ms. Oct 9 00:54:29.070842 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 00:54:29.072863 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 00:54:29.091879 systemd[1]: Starting ensure-sysext.service... Oct 9 00:54:29.094738 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 00:54:29.101019 systemd[1]: Reloading requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... Oct 9 00:54:29.101042 systemd[1]: Reloading... Oct 9 00:54:29.124503 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 00:54:29.124955 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 00:54:29.126025 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 00:54:29.126387 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Oct 9 00:54:29.126472 systemd-tmpfiles[1404]: ACLs are not supported, ignoring. Oct 9 00:54:29.130328 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 00:54:29.130341 systemd-tmpfiles[1404]: Skipping /boot Oct 9 00:54:29.141150 systemd-tmpfiles[1404]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 00:54:29.141283 systemd-tmpfiles[1404]: Skipping /boot Oct 9 00:54:29.168620 zram_generator::config[1435]: No configuration found. Oct 9 00:54:29.294261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:54:29.358362 systemd[1]: Reloading finished in 256 ms. Oct 9 00:54:29.379383 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:54:29.380758 systemd-networkd[1256]: eth0: Gained IPv6LL Oct 9 00:54:29.394239 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
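
The (sd-merge) entries above record systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr before the daemon reload. For an image to be accepted for merging it must ship an extension-release file whose identification fields match the host OS; a minimal sketch of the expected layout inside a 'kubernetes' extension follows, with assumed field values rather than ones read from the actual image:

    # usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0
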
Oct 9 00:54:29.401226 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 00:54:29.403622 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 00:54:29.406020 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 00:54:29.409136 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 00:54:29.412651 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 00:54:29.423147 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:54:29.423351 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:54:29.426349 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:54:29.428575 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 00:54:29.433252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:54:29.438889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:54:29.440145 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:54:29.440266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:54:29.441247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:54:29.441457 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:54:29.443630 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 00:54:29.443847 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 00:54:29.449263 systemd[1]: Finished ensure-sysext.service. Oct 9 00:54:29.451065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:54:29.451278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:54:29.454825 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:54:29.455776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:54:29.457419 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 00:54:29.465758 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 00:54:29.469624 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 00:54:29.469687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 00:54:29.477978 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 00:54:29.480757 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 00:54:29.486408 augenrules[1525]: No rules Oct 9 00:54:29.488191 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 00:54:29.490435 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 00:54:29.493331 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 00:54:29.497234 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Oct 9 00:54:29.499663 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 00:54:29.516128 systemd-resolved[1482]: Positive Trust Anchors: Oct 9 00:54:29.516151 systemd-resolved[1482]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 00:54:29.516184 systemd-resolved[1482]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 00:54:29.519710 systemd-resolved[1482]: Defaulting to hostname 'linux'. Oct 9 00:54:29.521818 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 00:54:29.523037 systemd[1]: Reached target network.target - Network. Oct 9 00:54:29.523947 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 00:54:29.525026 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:54:29.553782 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 00:54:31.321618 systemd-resolved[1482]: Clock change detected. Flushing caches. Oct 9 00:54:31.321664 systemd-timesyncd[1520]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 9 00:54:31.321705 systemd-timesyncd[1520]: Initial clock synchronization to Wed 2024-10-09 00:54:31.321578 UTC. Oct 9 00:54:31.322316 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 00:54:31.323536 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 00:54:31.324813 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 00:54:31.326061 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 00:54:31.327327 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 00:54:31.327364 systemd[1]: Reached target paths.target - Path Units. Oct 9 00:54:31.328271 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 00:54:31.329567 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 00:54:31.330764 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 00:54:31.331999 systemd[1]: Reached target timers.target - Timer Units. Oct 9 00:54:31.333699 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 00:54:31.336667 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 00:54:31.338784 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 00:54:31.343553 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 00:54:31.344631 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 00:54:31.345579 systemd[1]: Reached target basic.target - Basic System. 
Oct 9 00:54:31.346651 systemd[1]: System is tainted: cgroupsv1 Oct 9 00:54:31.346688 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:54:31.346708 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:54:31.347924 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 00:54:31.350055 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 00:54:31.352166 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 00:54:31.355477 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 00:54:31.359477 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 00:54:31.360625 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 00:54:31.362377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:54:31.368359 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 00:54:31.372523 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 00:54:31.376903 jq[1542]: false Oct 9 00:54:31.377248 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 00:54:31.378767 extend-filesystems[1545]: Found loop3 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found loop4 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found loop5 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found sr0 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found vda Oct 9 00:54:31.381283 extend-filesystems[1545]: Found vda1 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found vda2 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found vda3 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found usr Oct 9 00:54:31.381283 extend-filesystems[1545]: Found vda4 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found vda6 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found vda7 Oct 9 00:54:31.381283 extend-filesystems[1545]: Found vda9 Oct 9 00:54:31.381283 extend-filesystems[1545]: Checking size of /dev/vda9 Oct 9 00:54:31.386198 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 00:54:31.393826 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 00:54:31.400821 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 00:54:31.403859 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 00:54:31.409542 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 00:54:31.407676 dbus-daemon[1541]: [system] SELinux support is enabled Oct 9 00:54:31.414884 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 00:54:31.418562 extend-filesystems[1545]: Resized partition /dev/vda9 Oct 9 00:54:31.419509 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 00:54:31.424210 extend-filesystems[1579]: resize2fs 1.47.1 (20-May-2024) Oct 9 00:54:31.442650 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1269) Oct 9 00:54:31.432157 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 9 00:54:31.432562 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 00:54:31.442910 jq[1577]: true Oct 9 00:54:31.447366 update_engine[1574]: I20241009 00:54:31.446816 1574 main.cc:92] Flatcar Update Engine starting Oct 9 00:54:31.459591 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 9 00:54:31.448008 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 00:54:31.459736 update_engine[1574]: I20241009 00:54:31.450275 1574 update_check_scheduler.cc:74] Next update check in 4m35s Oct 9 00:54:31.448359 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 00:54:31.449918 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 00:54:31.456710 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 00:54:31.457016 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 00:54:31.466462 (ntainerd)[1588]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 00:54:31.469963 jq[1587]: true Oct 9 00:54:31.470809 sshd_keygen[1578]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 00:54:31.479155 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 00:54:31.479535 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 00:54:31.497057 systemd[1]: Started update-engine.service - Update Engine. Oct 9 00:54:31.498503 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 00:54:31.498613 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 00:54:31.498640 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 00:54:31.499976 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 00:54:31.499996 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 00:54:31.501911 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 00:54:31.528421 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 00:54:31.530065 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 00:54:31.533639 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 00:54:31.545697 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 00:54:31.546012 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 00:54:31.557425 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 00:54:31.594967 systemd-logind[1569]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 00:54:31.594994 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 00:54:31.597211 systemd-logind[1569]: New seat seat0. Oct 9 00:54:31.601550 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 00:54:31.608169 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Oct 9 00:54:31.620595 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 00:54:31.621440 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 00:54:31.625715 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 00:54:31.628307 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 00:54:31.630063 tar[1586]: linux-amd64/helm Oct 9 00:54:31.832333 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 9 00:54:32.635333 containerd[1588]: time="2024-10-09T00:54:32.633081694Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22 Oct 9 00:54:32.635698 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 00:54:32.635698 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 00:54:32.635698 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 9 00:54:32.640434 extend-filesystems[1545]: Resized filesystem in /dev/vda9 Oct 9 00:54:32.642210 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 00:54:32.643007 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 00:54:32.660155 containerd[1588]: time="2024-10-09T00:54:32.660099262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.661728537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.661758864Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.661773391Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.661949631Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.661963948Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.662024692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.662037145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.662274811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.662316870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.662330035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663085 containerd[1588]: time="2024-10-09T00:54:32.662338490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663412 containerd[1588]: time="2024-10-09T00:54:32.662434581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663412 containerd[1588]: time="2024-10-09T00:54:32.662697423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663412 containerd[1588]: time="2024-10-09T00:54:32.662852033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:54:32.663412 containerd[1588]: time="2024-10-09T00:54:32.662863244Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 00:54:32.663412 containerd[1588]: time="2024-10-09T00:54:32.662957080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 00:54:32.663412 containerd[1588]: time="2024-10-09T00:54:32.663007425Z" level=info msg="metadata content store policy set" policy=shared Oct 9 00:54:32.779783 tar[1586]: linux-amd64/LICENSE Oct 9 00:54:32.780199 tar[1586]: linux-amd64/README.md Oct 9 00:54:32.792579 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 00:54:32.988492 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Oct 9 00:54:32.990701 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 00:54:32.993281 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 9 00:54:33.043614 containerd[1588]: time="2024-10-09T00:54:33.043549281Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 00:54:33.043614 containerd[1588]: time="2024-10-09T00:54:33.043616166Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 00:54:33.043733 containerd[1588]: time="2024-10-09T00:54:33.043637656Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 00:54:33.043733 containerd[1588]: time="2024-10-09T00:54:33.043657544Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 00:54:33.043733 containerd[1588]: time="2024-10-09T00:54:33.043678683Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 00:54:33.043880 containerd[1588]: time="2024-10-09T00:54:33.043848401Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 00:54:33.044848 containerd[1588]: time="2024-10-09T00:54:33.044821646Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 9 00:54:33.044966 containerd[1588]: time="2024-10-09T00:54:33.044946741Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 00:54:33.045000 containerd[1588]: time="2024-10-09T00:54:33.044966167Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 00:54:33.045000 containerd[1588]: time="2024-10-09T00:54:33.044979222Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 00:54:33.045000 containerd[1588]: time="2024-10-09T00:54:33.044992527Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 00:54:33.045063 containerd[1588]: time="2024-10-09T00:54:33.045004128Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 00:54:33.045063 containerd[1588]: time="2024-10-09T00:54:33.045015630Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 00:54:33.045063 containerd[1588]: time="2024-10-09T00:54:33.045028895Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 00:54:33.045063 containerd[1588]: time="2024-10-09T00:54:33.045045175Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 00:54:33.045063 containerd[1588]: time="2024-10-09T00:54:33.045056577Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 00:54:33.045145 containerd[1588]: time="2024-10-09T00:54:33.045067938Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 00:54:33.045145 containerd[1588]: time="2024-10-09T00:54:33.045079490Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 00:54:33.045145 containerd[1588]: time="2024-10-09T00:54:33.045097203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045145 containerd[1588]: time="2024-10-09T00:54:33.045109055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045145 containerd[1588]: time="2024-10-09T00:54:33.045120396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045145 containerd[1588]: time="2024-10-09T00:54:33.045131888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045145 containerd[1588]: time="2024-10-09T00:54:33.045143199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045155953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045166693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045177403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045188374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045202500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045213421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045224371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045234941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045247865Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045264406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045280 containerd[1588]: time="2024-10-09T00:54:33.045276780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045495 containerd[1588]: time="2024-10-09T00:54:33.045310443Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 00:54:33.045495 containerd[1588]: time="2024-10-09T00:54:33.045356068Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 00:54:33.045495 containerd[1588]: time="2024-10-09T00:54:33.045371497Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 00:54:33.045495 containerd[1588]: time="2024-10-09T00:54:33.045381175Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 00:54:33.045495 containerd[1588]: time="2024-10-09T00:54:33.045392937Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 00:54:33.045495 containerd[1588]: time="2024-10-09T00:54:33.045402265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 00:54:33.045495 containerd[1588]: time="2024-10-09T00:54:33.045414418Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 00:54:33.045495 containerd[1588]: time="2024-10-09T00:54:33.045424066Z" level=info msg="NRI interface is disabled by configuration." Oct 9 00:54:33.045495 containerd[1588]: time="2024-10-09T00:54:33.045433684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 00:54:33.045716 containerd[1588]: time="2024-10-09T00:54:33.045670538Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 00:54:33.045716 containerd[1588]: time="2024-10-09T00:54:33.045715773Z" level=info msg="Connect containerd service" Oct 9 00:54:33.045865 containerd[1588]: time="2024-10-09T00:54:33.045735880Z" level=info msg="using legacy CRI server" Oct 9 00:54:33.045865 containerd[1588]: time="2024-10-09T00:54:33.045742292Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 00:54:33.045865 containerd[1588]: time="2024-10-09T00:54:33.045822132Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 00:54:33.046366 containerd[1588]: time="2024-10-09T00:54:33.046335946Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:54:33.046544 
containerd[1588]: time="2024-10-09T00:54:33.046483392Z" level=info msg="Start subscribing containerd event" Oct 9 00:54:33.046633 containerd[1588]: time="2024-10-09T00:54:33.046574062Z" level=info msg="Start recovering state" Oct 9 00:54:33.046902 containerd[1588]: time="2024-10-09T00:54:33.046637331Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 00:54:33.046902 containerd[1588]: time="2024-10-09T00:54:33.046686874Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 00:54:33.046902 containerd[1588]: time="2024-10-09T00:54:33.046696101Z" level=info msg="Start event monitor" Oct 9 00:54:33.046902 containerd[1588]: time="2024-10-09T00:54:33.046716629Z" level=info msg="Start snapshots syncer" Oct 9 00:54:33.046902 containerd[1588]: time="2024-10-09T00:54:33.046727790Z" level=info msg="Start cni network conf syncer for default" Oct 9 00:54:33.046902 containerd[1588]: time="2024-10-09T00:54:33.046736827Z" level=info msg="Start streaming server" Oct 9 00:54:33.046902 containerd[1588]: time="2024-10-09T00:54:33.046819833Z" level=info msg="containerd successfully booted in 1.033291s" Oct 9 00:54:33.046987 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 00:54:33.355569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:54:33.357451 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 00:54:33.359022 systemd[1]: Startup finished in 6.682s (kernel) + 4.995s (userspace) = 11.677s. Oct 9 00:54:33.361396 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:54:33.819520 kubelet[1680]: E1009 00:54:33.819261 1680 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:54:33.823887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:54:33.824165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 00:54:36.839767 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 00:54:36.846497 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:57406.service - OpenSSH per-connection server daemon (10.0.0.1:57406). Oct 9 00:54:36.887365 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 57406 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:54:36.889402 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:54:36.898532 systemd-logind[1569]: New session 1 of user core. Oct 9 00:54:36.899652 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 00:54:36.910480 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 00:54:36.921997 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 00:54:36.924574 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 00:54:36.932937 (systemd)[1700]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 00:54:37.040918 systemd[1700]: Queued start job for default target default.target. Oct 9 00:54:37.041308 systemd[1700]: Created slice app.slice - User Application Slice. 
Oct 9 00:54:37.041327 systemd[1700]: Reached target paths.target - Paths. Oct 9 00:54:37.041339 systemd[1700]: Reached target timers.target - Timers. Oct 9 00:54:37.055369 systemd[1700]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 00:54:37.062853 systemd[1700]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 00:54:37.062943 systemd[1700]: Reached target sockets.target - Sockets. Oct 9 00:54:37.062957 systemd[1700]: Reached target basic.target - Basic System. Oct 9 00:54:37.063015 systemd[1700]: Reached target default.target - Main User Target. Oct 9 00:54:37.063051 systemd[1700]: Startup finished in 122ms. Oct 9 00:54:37.063517 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 00:54:37.064889 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 00:54:37.129638 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:50410.service - OpenSSH per-connection server daemon (10.0.0.1:50410). Oct 9 00:54:37.159844 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 50410 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:54:37.161289 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:54:37.165395 systemd-logind[1569]: New session 2 of user core. Oct 9 00:54:37.176829 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 00:54:37.231281 sshd[1712]: pam_unix(sshd:session): session closed for user core Oct 9 00:54:37.244497 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:50416.service - OpenSSH per-connection server daemon (10.0.0.1:50416). Oct 9 00:54:37.244935 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:50410.service: Deactivated successfully. Oct 9 00:54:37.247505 systemd-logind[1569]: Session 2 logged out. Waiting for processes to exit. Oct 9 00:54:37.248302 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 00:54:37.249243 systemd-logind[1569]: Removed session 2. Oct 9 00:54:37.275734 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 50416 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:54:37.277214 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:54:37.280931 systemd-logind[1569]: New session 3 of user core. Oct 9 00:54:37.292518 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 00:54:37.342878 sshd[1717]: pam_unix(sshd:session): session closed for user core Oct 9 00:54:37.355496 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:50418.service - OpenSSH per-connection server daemon (10.0.0.1:50418). Oct 9 00:54:37.355936 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:50416.service: Deactivated successfully. Oct 9 00:54:37.358390 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Oct 9 00:54:37.359520 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 00:54:37.360575 systemd-logind[1569]: Removed session 3. Oct 9 00:54:37.385498 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 50418 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:54:37.386999 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:54:37.390780 systemd-logind[1569]: New session 4 of user core. Oct 9 00:54:37.400676 systemd[1]: Started session-4.scope - Session 4 of User core. 
Oct 9 00:54:37.455544 sshd[1725]: pam_unix(sshd:session): session closed for user core Oct 9 00:54:37.464508 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:50426.service - OpenSSH per-connection server daemon (10.0.0.1:50426). Oct 9 00:54:37.464967 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:50418.service: Deactivated successfully. Oct 9 00:54:37.467271 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Oct 9 00:54:37.468425 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 00:54:37.469595 systemd-logind[1569]: Removed session 4. Oct 9 00:54:37.497690 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 50426 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:54:37.499320 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:54:37.503287 systemd-logind[1569]: New session 5 of user core. Oct 9 00:54:37.513565 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 00:54:37.572132 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 00:54:37.572493 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:54:37.588528 sudo[1740]: pam_unix(sudo:session): session closed for user root Oct 9 00:54:37.590812 sshd[1733]: pam_unix(sshd:session): session closed for user core Oct 9 00:54:37.605521 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:50442.service - OpenSSH per-connection server daemon (10.0.0.1:50442). Oct 9 00:54:37.605978 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:50426.service: Deactivated successfully. Oct 9 00:54:37.608558 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Oct 9 00:54:37.609886 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 00:54:37.610724 systemd-logind[1569]: Removed session 5. Oct 9 00:54:37.636256 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 50442 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:54:37.637781 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:54:37.641841 systemd-logind[1569]: New session 6 of user core. Oct 9 00:54:37.652546 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 00:54:37.707049 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 00:54:37.707407 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:54:37.711242 sudo[1750]: pam_unix(sudo:session): session closed for user root Oct 9 00:54:37.717376 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 00:54:37.717716 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:54:37.735549 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 00:54:37.766120 augenrules[1772]: No rules Oct 9 00:54:37.767932 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 00:54:37.768314 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 00:54:37.769626 sudo[1749]: pam_unix(sudo:session): session closed for user root Oct 9 00:54:37.771435 sshd[1742]: pam_unix(sshd:session): session closed for user core Oct 9 00:54:37.781206 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:50450.service - OpenSSH per-connection server daemon (10.0.0.1:50450). 
Oct 9 00:54:37.781762 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:50442.service: Deactivated successfully. Oct 9 00:54:37.784522 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Oct 9 00:54:37.785941 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 00:54:37.787072 systemd-logind[1569]: Removed session 6. Oct 9 00:54:37.811933 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 50450 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:54:37.813713 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:54:37.817850 systemd-logind[1569]: New session 7 of user core. Oct 9 00:54:37.827596 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 00:54:37.880234 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 00:54:37.880593 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:54:38.147517 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 00:54:38.147776 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 00:54:38.390480 dockerd[1806]: time="2024-10-09T00:54:38.390413199Z" level=info msg="Starting up" Oct 9 00:54:39.024889 dockerd[1806]: time="2024-10-09T00:54:39.024838747Z" level=info msg="Loading containers: start." Oct 9 00:54:39.195317 kernel: Initializing XFRM netlink socket Oct 9 00:54:39.275655 systemd-networkd[1256]: docker0: Link UP Oct 9 00:54:39.315807 dockerd[1806]: time="2024-10-09T00:54:39.315759162Z" level=info msg="Loading containers: done." Oct 9 00:54:39.330068 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck620478734-merged.mount: Deactivated successfully. Oct 9 00:54:39.332994 dockerd[1806]: time="2024-10-09T00:54:39.332956928Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 00:54:39.333059 dockerd[1806]: time="2024-10-09T00:54:39.333041958Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 00:54:39.333191 dockerd[1806]: time="2024-10-09T00:54:39.333170178Z" level=info msg="Daemon has completed initialization" Oct 9 00:54:39.371009 dockerd[1806]: time="2024-10-09T00:54:39.370947380Z" level=info msg="API listen on /run/docker.sock" Oct 9 00:54:39.371232 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 00:54:40.245917 containerd[1588]: time="2024-10-09T00:54:40.245873857Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 00:54:40.848559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759682944.mount: Deactivated successfully. 
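dockerd finishes its start-up above with "API listen on /run/docker.sock". As a quick illustration of what that means (not part of the captured boot), the daemon's REST API can be reached with a plain HTTP request over that UNIX socket; the /version endpoint should echo the same version string dockerd logged (27.2.1).

import json
import socket

# Send one raw HTTP/1.0 request to the Docker engine over its UNIX socket and
# print the reported version. Requires read access to /run/docker.sock.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    chunks = []
    while (data := s.recv(4096)):
        chunks.append(data)

raw = b"".join(chunks).decode()
_headers, _, body = raw.partition("\r\n\r\n")
print(json.loads(body)["Version"])   # expected to match the logged 27.2.1
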
Oct 9 00:54:42.521399 containerd[1588]: time="2024-10-09T00:54:42.521345766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:42.522184 containerd[1588]: time="2024-10-09T00:54:42.522139064Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 9 00:54:42.523362 containerd[1588]: time="2024-10-09T00:54:42.523335006Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:42.526256 containerd[1588]: time="2024-10-09T00:54:42.526222090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:42.527140 containerd[1588]: time="2024-10-09T00:54:42.527116527Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.281199428s" Oct 9 00:54:42.527192 containerd[1588]: time="2024-10-09T00:54:42.527150841Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 00:54:42.547852 containerd[1588]: time="2024-10-09T00:54:42.547816419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 00:54:44.074474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 00:54:44.084451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:54:44.231444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:54:44.236874 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:54:44.293221 kubelet[2087]: E1009 00:54:44.293067 2087 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:54:44.300698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:54:44.301043 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
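The kubelet exits here (and again below) because /var/lib/kubelet/config.yaml does not exist yet; on a node like this one that file is normally generated during cluster bootstrap (for example by kubeadm), so the unit simply keeps restarting until that happens. For orientation only, the general shape of such a KubeletConfiguration can be sketched as follows; every value is a placeholder, not the configuration this host actually receives.

import pathlib

# Illustrative sketch of the file the kubelet is looking for. The real file is
# written by the bootstrap tooling; all values below are placeholders.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs          # matches the CgroupDriver reported later in this log
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                  # placeholder cluster DNS service IP
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print(f"wrote {path} ({path.stat().st_size} bytes)")
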
Oct 9 00:54:44.847596 containerd[1588]: time="2024-10-09T00:54:44.847529781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:44.848602 containerd[1588]: time="2024-10-09T00:54:44.848544664Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 9 00:54:44.849791 containerd[1588]: time="2024-10-09T00:54:44.849760173Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:44.852590 containerd[1588]: time="2024-10-09T00:54:44.852558179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:44.853635 containerd[1588]: time="2024-10-09T00:54:44.853580726Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.30573357s" Oct 9 00:54:44.853635 containerd[1588]: time="2024-10-09T00:54:44.853632764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 00:54:44.875255 containerd[1588]: time="2024-10-09T00:54:44.875214430Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 00:54:46.397389 containerd[1588]: time="2024-10-09T00:54:46.397320568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:46.398502 containerd[1588]: time="2024-10-09T00:54:46.398072588Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 9 00:54:46.399756 containerd[1588]: time="2024-10-09T00:54:46.399716871Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:46.403134 containerd[1588]: time="2024-10-09T00:54:46.403076951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:46.404440 containerd[1588]: time="2024-10-09T00:54:46.404387218Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.529129498s" Oct 9 00:54:46.404440 containerd[1588]: time="2024-10-09T00:54:46.404434587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 00:54:46.428006 containerd[1588]: 
time="2024-10-09T00:54:46.427960628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 00:54:47.502325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626139717.mount: Deactivated successfully. Oct 9 00:54:48.151590 containerd[1588]: time="2024-10-09T00:54:48.151507438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:48.152707 containerd[1588]: time="2024-10-09T00:54:48.152668736Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 9 00:54:48.154013 containerd[1588]: time="2024-10-09T00:54:48.153975195Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:48.156102 containerd[1588]: time="2024-10-09T00:54:48.156045998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:48.156633 containerd[1588]: time="2024-10-09T00:54:48.156589607Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.72858713s" Oct 9 00:54:48.156681 containerd[1588]: time="2024-10-09T00:54:48.156633119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 9 00:54:48.177835 containerd[1588]: time="2024-10-09T00:54:48.177791290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 00:54:48.782421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237537721.mount: Deactivated successfully. 
Oct 9 00:54:49.485680 containerd[1588]: time="2024-10-09T00:54:49.485634957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:49.486659 containerd[1588]: time="2024-10-09T00:54:49.486630854Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 00:54:49.488156 containerd[1588]: time="2024-10-09T00:54:49.488110999Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:49.493686 containerd[1588]: time="2024-10-09T00:54:49.493654614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:49.494766 containerd[1588]: time="2024-10-09T00:54:49.494718298Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.316882815s" Oct 9 00:54:49.494766 containerd[1588]: time="2024-10-09T00:54:49.494750328Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 00:54:49.516333 containerd[1588]: time="2024-10-09T00:54:49.516256893Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 00:54:50.037130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764775966.mount: Deactivated successfully. 
Oct 9 00:54:50.044513 containerd[1588]: time="2024-10-09T00:54:50.044455285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:50.045335 containerd[1588]: time="2024-10-09T00:54:50.045285742Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 00:54:50.046393 containerd[1588]: time="2024-10-09T00:54:50.046358042Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:50.048613 containerd[1588]: time="2024-10-09T00:54:50.048568547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:50.049225 containerd[1588]: time="2024-10-09T00:54:50.049198418Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 532.725511ms" Oct 9 00:54:50.049275 containerd[1588]: time="2024-10-09T00:54:50.049224247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 00:54:50.071220 containerd[1588]: time="2024-10-09T00:54:50.071157702Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 00:54:50.741448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2947145677.mount: Deactivated successfully. Oct 9 00:54:52.790229 containerd[1588]: time="2024-10-09T00:54:52.790167605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:52.791123 containerd[1588]: time="2024-10-09T00:54:52.791083622Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Oct 9 00:54:52.792429 containerd[1588]: time="2024-10-09T00:54:52.792384572Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:52.795163 containerd[1588]: time="2024-10-09T00:54:52.795131983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:54:52.796459 containerd[1588]: time="2024-10-09T00:54:52.796418946Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.725223504s" Oct 9 00:54:52.796501 containerd[1588]: time="2024-10-09T00:54:52.796456356Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 9 00:54:54.551175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 9 00:54:54.561429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:54:54.704440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:54:54.707100 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:54:54.749153 kubelet[2320]: E1009 00:54:54.749054 2320 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:54:54.753996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:54:54.754265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 00:54:55.652990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:54:55.670495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:54:55.690786 systemd[1]: Reloading requested from client PID 2338 ('systemctl') (unit session-7.scope)... Oct 9 00:54:55.690802 systemd[1]: Reloading... Oct 9 00:54:55.778366 zram_generator::config[2383]: No configuration found. Oct 9 00:54:56.354915 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:54:56.427059 systemd[1]: Reloading finished in 735 ms. Oct 9 00:54:56.469173 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 00:54:56.469283 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 00:54:56.469785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:54:56.471678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:54:56.611120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:54:56.615705 (kubelet)[2437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 00:54:56.657396 kubelet[2437]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:54:56.657396 kubelet[2437]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 00:54:56.657396 kubelet[2437]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 00:54:56.657766 kubelet[2437]: I1009 00:54:56.657435 2437 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 00:54:57.080135 kubelet[2437]: I1009 00:54:57.080023 2437 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 00:54:57.080135 kubelet[2437]: I1009 00:54:57.080056 2437 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 00:54:57.080307 kubelet[2437]: I1009 00:54:57.080276 2437 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 00:54:57.094476 kubelet[2437]: E1009 00:54:57.094442 2437 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:57.095182 kubelet[2437]: I1009 00:54:57.095159 2437 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:54:57.109759 kubelet[2437]: I1009 00:54:57.109728 2437 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 00:54:57.110188 kubelet[2437]: I1009 00:54:57.110165 2437 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 00:54:57.110385 kubelet[2437]: I1009 00:54:57.110360 2437 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 00:54:57.110471 kubelet[2437]: I1009 00:54:57.110392 2437 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 00:54:57.110471 kubelet[2437]: I1009 00:54:57.110402 2437 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 00:54:57.110538 kubelet[2437]: I1009 00:54:57.110524 2437 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:54:57.110658 kubelet[2437]: I1009 00:54:57.110639 2437 kubelet.go:396] "Attempting to sync node with API server" Oct 9 00:54:57.110658 kubelet[2437]: I1009 
00:54:57.110657 2437 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 00:54:57.110708 kubelet[2437]: I1009 00:54:57.110688 2437 kubelet.go:312] "Adding apiserver pod source" Oct 9 00:54:57.110708 kubelet[2437]: I1009 00:54:57.110700 2437 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 00:54:57.111803 kubelet[2437]: I1009 00:54:57.111784 2437 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 00:54:57.112245 kubelet[2437]: W1009 00:54:57.112146 2437 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:57.112245 kubelet[2437]: E1009 00:54:57.112202 2437 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:57.112720 kubelet[2437]: W1009 00:54:57.112677 2437 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:57.112720 kubelet[2437]: E1009 00:54:57.112716 2437 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:57.114311 kubelet[2437]: I1009 00:54:57.114273 2437 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 00:54:57.115407 kubelet[2437]: W1009 00:54:57.115383 2437 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
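The HardEvictionThresholds block in the node config dump above is the kubelet's default hard-eviction policy; expressed in KubeletConfiguration form (a fragment of the same config file sketched earlier), the same thresholds read as follows, with the fractions 0.1/0.05/0.15 rendered as percentages:

    # evictionHard fragment equivalent to the logged HardEvictionThresholds
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"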
Oct 9 00:54:57.116188 kubelet[2437]: I1009 00:54:57.115996 2437 server.go:1256] "Started kubelet" Oct 9 00:54:57.116188 kubelet[2437]: I1009 00:54:57.116088 2437 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 00:54:57.117001 kubelet[2437]: I1009 00:54:57.116976 2437 server.go:461] "Adding debug handlers to kubelet server" Oct 9 00:54:57.118259 kubelet[2437]: I1009 00:54:57.117439 2437 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 00:54:57.118259 kubelet[2437]: I1009 00:54:57.117968 2437 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 00:54:57.118259 kubelet[2437]: I1009 00:54:57.118166 2437 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 00:54:57.119488 kubelet[2437]: E1009 00:54:57.119078 2437 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:54:57.119488 kubelet[2437]: I1009 00:54:57.119108 2437 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 00:54:57.119488 kubelet[2437]: I1009 00:54:57.119187 2437 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 00:54:57.119488 kubelet[2437]: I1009 00:54:57.119229 2437 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 00:54:57.119593 kubelet[2437]: W1009 00:54:57.119557 2437 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:57.119631 kubelet[2437]: E1009 00:54:57.119593 2437 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:57.119769 kubelet[2437]: E1009 00:54:57.119745 2437 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 00:54:57.119959 kubelet[2437]: E1009 00:54:57.119936 2437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Oct 9 00:54:57.120920 kubelet[2437]: I1009 00:54:57.120473 2437 factory.go:221] Registration of the systemd container factory successfully Oct 9 00:54:57.120920 kubelet[2437]: I1009 00:54:57.120533 2437 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 00:54:57.121451 kubelet[2437]: I1009 00:54:57.121426 2437 factory.go:221] Registration of the containerd container factory successfully Oct 9 00:54:57.121864 kubelet[2437]: E1009 00:54:57.121842 2437 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca2bebdd81282 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 00:54:57.11596813 +0000 UTC m=+0.496131137,LastTimestamp:2024-10-09 00:54:57.11596813 +0000 UTC m=+0.496131137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 00:54:57.136153 kubelet[2437]: I1009 00:54:57.136106 2437 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 00:54:57.137549 kubelet[2437]: I1009 00:54:57.137526 2437 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 00:54:57.137587 kubelet[2437]: I1009 00:54:57.137557 2437 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 00:54:57.137587 kubelet[2437]: I1009 00:54:57.137579 2437 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 00:54:57.137664 kubelet[2437]: E1009 00:54:57.137638 2437 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 00:54:57.138347 kubelet[2437]: W1009 00:54:57.138307 2437 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:57.138391 kubelet[2437]: E1009 00:54:57.138367 2437 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:57.145921 kubelet[2437]: I1009 00:54:57.145896 2437 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 00:54:57.145921 kubelet[2437]: I1009 00:54:57.145916 2437 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 00:54:57.146033 kubelet[2437]: I1009 00:54:57.145931 2437 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:54:57.221424 kubelet[2437]: I1009 00:54:57.221395 2437 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:54:57.221845 kubelet[2437]: E1009 00:54:57.221812 2437 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Oct 9 00:54:57.237855 kubelet[2437]: E1009 00:54:57.237830 2437 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 00:54:57.320569 kubelet[2437]: E1009 00:54:57.320536 2437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Oct 9 00:54:57.423079 kubelet[2437]: I1009 00:54:57.422963 2437 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:54:57.423354 kubelet[2437]: E1009 00:54:57.423335 2437 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Oct 9 00:54:57.433606 kubelet[2437]: I1009 00:54:57.433567 2437 policy_none.go:49] "None policy: Start" Oct 9 00:54:57.434173 kubelet[2437]: I1009 00:54:57.434140 2437 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 00:54:57.434173 kubelet[2437]: I1009 00:54:57.434166 2437 state_mem.go:35] "Initializing new in-memory state store" Oct 9 00:54:57.437992 kubelet[2437]: E1009 00:54:57.437971 2437 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 00:54:57.440800 kubelet[2437]: I1009 00:54:57.440775 2437 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 00:54:57.441066 kubelet[2437]: I1009 
00:54:57.441039 2437 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 00:54:57.442535 kubelet[2437]: E1009 00:54:57.442518 2437 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 00:54:57.721351 kubelet[2437]: E1009 00:54:57.721222 2437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Oct 9 00:54:57.824673 kubelet[2437]: I1009 00:54:57.824636 2437 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:54:57.824889 kubelet[2437]: E1009 00:54:57.824864 2437 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Oct 9 00:54:57.839077 kubelet[2437]: I1009 00:54:57.839033 2437 topology_manager.go:215] "Topology Admit Handler" podUID="1680974ad8ea60a6d2ae47bd95476758" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 00:54:57.840210 kubelet[2437]: I1009 00:54:57.840174 2437 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 00:54:57.840848 kubelet[2437]: I1009 00:54:57.840831 2437 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 00:54:57.923407 kubelet[2437]: I1009 00:54:57.923366 2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1680974ad8ea60a6d2ae47bd95476758-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1680974ad8ea60a6d2ae47bd95476758\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:54:57.923407 kubelet[2437]: I1009 00:54:57.923409 2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:54:57.923542 kubelet[2437]: I1009 00:54:57.923429 2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:54:57.923542 kubelet[2437]: I1009 00:54:57.923459 2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:54:57.923542 kubelet[2437]: I1009 00:54:57.923489 2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1680974ad8ea60a6d2ae47bd95476758-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"1680974ad8ea60a6d2ae47bd95476758\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:54:57.923542 kubelet[2437]: I1009 00:54:57.923508 2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1680974ad8ea60a6d2ae47bd95476758-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1680974ad8ea60a6d2ae47bd95476758\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:54:57.923542 kubelet[2437]: I1009 00:54:57.923526 2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:54:57.923662 kubelet[2437]: I1009 00:54:57.923543 2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:54:57.923662 kubelet[2437]: I1009 00:54:57.923582 2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 00:54:58.057368 kubelet[2437]: W1009 00:54:58.057203 2437 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:58.057368 kubelet[2437]: E1009 00:54:58.057276 2437 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:58.144308 kubelet[2437]: E1009 00:54:58.144258 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:54:58.144585 kubelet[2437]: E1009 00:54:58.144545 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:54:58.144834 containerd[1588]: time="2024-10-09T00:54:58.144792886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1680974ad8ea60a6d2ae47bd95476758,Namespace:kube-system,Attempt:0,}" Oct 9 00:54:58.145179 containerd[1588]: time="2024-10-09T00:54:58.144814646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 9 00:54:58.146023 kubelet[2437]: E1009 00:54:58.145999 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:54:58.146347 containerd[1588]: 
time="2024-10-09T00:54:58.146230100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 9 00:54:58.429240 kubelet[2437]: W1009 00:54:58.429053 2437 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:58.429240 kubelet[2437]: E1009 00:54:58.429112 2437 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:58.521857 kubelet[2437]: E1009 00:54:58.521829 2437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Oct 9 00:54:58.614720 kubelet[2437]: W1009 00:54:58.614674 2437 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:58.614772 kubelet[2437]: E1009 00:54:58.614726 2437 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:58.626867 kubelet[2437]: I1009 00:54:58.626841 2437 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:54:58.627133 kubelet[2437]: E1009 00:54:58.627109 2437 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Oct 9 00:54:58.697527 kubelet[2437]: W1009 00:54:58.697423 2437 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:58.697527 kubelet[2437]: E1009 00:54:58.697450 2437 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:59.186977 kubelet[2437]: E1009 00:54:59.186874 2437 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.13:6443: connect: connection refused Oct 9 00:54:59.246834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236562249.mount: Deactivated successfully. 
Oct 9 00:54:59.255071 containerd[1588]: time="2024-10-09T00:54:59.255028357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:54:59.255967 containerd[1588]: time="2024-10-09T00:54:59.255926331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 00:54:59.256920 containerd[1588]: time="2024-10-09T00:54:59.256883636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:54:59.257979 containerd[1588]: time="2024-10-09T00:54:59.257927663Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:54:59.258821 containerd[1588]: time="2024-10-09T00:54:59.258781274Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:54:59.259923 containerd[1588]: time="2024-10-09T00:54:59.259884943Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:54:59.260780 containerd[1588]: time="2024-10-09T00:54:59.260737802Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:54:59.262528 containerd[1588]: time="2024-10-09T00:54:59.262498804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:54:59.264142 containerd[1588]: time="2024-10-09T00:54:59.264112339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.119090254s" Oct 9 00:54:59.264774 containerd[1588]: time="2024-10-09T00:54:59.264750867Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.118468298s" Oct 9 00:54:59.267070 containerd[1588]: time="2024-10-09T00:54:59.267035410Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.122151744s" Oct 9 00:54:59.401169 containerd[1588]: time="2024-10-09T00:54:59.399403260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:54:59.401169 containerd[1588]: time="2024-10-09T00:54:59.401003460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:54:59.401169 containerd[1588]: time="2024-10-09T00:54:59.401020372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:54:59.401169 containerd[1588]: time="2024-10-09T00:54:59.401124177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:54:59.401600 containerd[1588]: time="2024-10-09T00:54:59.401340692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:54:59.401600 containerd[1588]: time="2024-10-09T00:54:59.401396988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:54:59.401600 containerd[1588]: time="2024-10-09T00:54:59.401412237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:54:59.401600 containerd[1588]: time="2024-10-09T00:54:59.401511643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:54:59.404043 containerd[1588]: time="2024-10-09T00:54:59.403937852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:54:59.404043 containerd[1588]: time="2024-10-09T00:54:59.404009587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:54:59.404043 containerd[1588]: time="2024-10-09T00:54:59.404025817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:54:59.404250 containerd[1588]: time="2024-10-09T00:54:59.404114834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:54:59.459224 containerd[1588]: time="2024-10-09T00:54:59.458468527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0ea580e7348651af620fb930697c7016359448b38f7558a1b4714b3e788dd2f\"" Oct 9 00:54:59.459882 kubelet[2437]: E1009 00:54:59.459598 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:54:59.462770 containerd[1588]: time="2024-10-09T00:54:59.462734576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1680974ad8ea60a6d2ae47bd95476758,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcec7e6ad89eeccb8af8c65dff370d45e30c9679b50dc3bf97e4d69d006c0620\"" Oct 9 00:54:59.463687 kubelet[2437]: E1009 00:54:59.463663 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:54:59.464651 containerd[1588]: time="2024-10-09T00:54:59.464633146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b659aa09e0c67d441ee0c0ec5040cc7e60886e51be69269217d338e3c12fd40\"" Oct 9 00:54:59.464841 containerd[1588]: time="2024-10-09T00:54:59.464801411Z" level=info msg="CreateContainer within sandbox \"e0ea580e7348651af620fb930697c7016359448b38f7558a1b4714b3e788dd2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 00:54:59.466318 kubelet[2437]: E1009 00:54:59.466278 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:54:59.468197 containerd[1588]: time="2024-10-09T00:54:59.468086932Z" level=info msg="CreateContainer within sandbox \"fcec7e6ad89eeccb8af8c65dff370d45e30c9679b50dc3bf97e4d69d006c0620\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 00:54:59.469436 containerd[1588]: time="2024-10-09T00:54:59.469406907Z" level=info msg="CreateContainer within sandbox \"8b659aa09e0c67d441ee0c0ec5040cc7e60886e51be69269217d338e3c12fd40\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 00:54:59.490519 containerd[1588]: time="2024-10-09T00:54:59.490497301Z" level=info msg="CreateContainer within sandbox \"e0ea580e7348651af620fb930697c7016359448b38f7558a1b4714b3e788dd2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be649a466bb56d197ecbd59ba49b3b163db601b69b6c8f91bb90cf70e4cbddb7\"" Oct 9 00:54:59.491192 containerd[1588]: time="2024-10-09T00:54:59.491146879Z" level=info msg="StartContainer for \"be649a466bb56d197ecbd59ba49b3b163db601b69b6c8f91bb90cf70e4cbddb7\"" Oct 9 00:54:59.497014 containerd[1588]: time="2024-10-09T00:54:59.496981950Z" level=info msg="CreateContainer within sandbox \"fcec7e6ad89eeccb8af8c65dff370d45e30c9679b50dc3bf97e4d69d006c0620\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b4e71fdf8202f5d96b2ec97bc0ae29ed5eadeba09c75111660db18e066dfa95\"" Oct 9 00:54:59.497448 containerd[1588]: time="2024-10-09T00:54:59.497426473Z" level=info msg="StartContainer for 
\"9b4e71fdf8202f5d96b2ec97bc0ae29ed5eadeba09c75111660db18e066dfa95\"" Oct 9 00:54:59.497898 containerd[1588]: time="2024-10-09T00:54:59.497870666Z" level=info msg="CreateContainer within sandbox \"8b659aa09e0c67d441ee0c0ec5040cc7e60886e51be69269217d338e3c12fd40\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1c939d1deff1de4ff17db882aa8f4c8bf0fc9fabc622219b0ccfef6706eb8f1d\"" Oct 9 00:54:59.498838 containerd[1588]: time="2024-10-09T00:54:59.498816600Z" level=info msg="StartContainer for \"1c939d1deff1de4ff17db882aa8f4c8bf0fc9fabc622219b0ccfef6706eb8f1d\"" Oct 9 00:54:59.568236 containerd[1588]: time="2024-10-09T00:54:59.568203430Z" level=info msg="StartContainer for \"be649a466bb56d197ecbd59ba49b3b163db601b69b6c8f91bb90cf70e4cbddb7\" returns successfully" Oct 9 00:54:59.572488 containerd[1588]: time="2024-10-09T00:54:59.572395691Z" level=info msg="StartContainer for \"9b4e71fdf8202f5d96b2ec97bc0ae29ed5eadeba09c75111660db18e066dfa95\" returns successfully" Oct 9 00:54:59.572488 containerd[1588]: time="2024-10-09T00:54:59.572453319Z" level=info msg="StartContainer for \"1c939d1deff1de4ff17db882aa8f4c8bf0fc9fabc622219b0ccfef6706eb8f1d\" returns successfully" Oct 9 00:55:00.146399 kubelet[2437]: E1009 00:55:00.146366 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:00.147822 kubelet[2437]: E1009 00:55:00.147790 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:00.148938 kubelet[2437]: E1009 00:55:00.148913 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:00.232610 kubelet[2437]: I1009 00:55:00.232574 2437 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:55:00.500588 kubelet[2437]: E1009 00:55:00.500109 2437 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 00:55:00.588686 kubelet[2437]: I1009 00:55:00.588651 2437 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 00:55:00.594397 kubelet[2437]: E1009 00:55:00.594352 2437 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:55:00.694950 kubelet[2437]: E1009 00:55:00.694908 2437 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:55:00.795554 kubelet[2437]: E1009 00:55:00.795433 2437 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:55:00.896358 kubelet[2437]: E1009 00:55:00.896322 2437 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:55:00.996915 kubelet[2437]: E1009 00:55:00.996890 2437 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:55:01.097611 kubelet[2437]: E1009 00:55:01.097552 2437 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:55:01.150624 kubelet[2437]: E1009 00:55:01.150586 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:01.150624 kubelet[2437]: E1009 00:55:01.150614 2437 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:01.198492 kubelet[2437]: E1009 00:55:01.198443 2437 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:55:02.114530 kubelet[2437]: I1009 00:55:02.114441 2437 apiserver.go:52] "Watching apiserver" Oct 9 00:55:02.119865 kubelet[2437]: I1009 00:55:02.119822 2437 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 00:55:03.206996 systemd[1]: Reloading requested from client PID 2716 ('systemctl') (unit session-7.scope)... Oct 9 00:55:03.207016 systemd[1]: Reloading... Oct 9 00:55:03.288328 zram_generator::config[2758]: No configuration found. Oct 9 00:55:03.407319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:55:03.483952 systemd[1]: Reloading finished in 276 ms. Oct 9 00:55:03.517103 kubelet[2437]: I1009 00:55:03.517061 2437 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:55:03.517105 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:55:03.534763 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 00:55:03.535254 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:55:03.545540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:55:03.701642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:55:03.706943 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 00:55:03.760071 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:55:03.760071 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 00:55:03.760071 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
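The pause-image traffic above ("Pulled image registry.k8s.io/pause:3.8 ...") is containerd's CRI plugin fetching the sandbox image it uses to back every RunPodSandbox call. That image is set in containerd's own configuration; a sketch of the relevant stanza for containerd 1.7, with the conventional file path assumed rather than confirmed by this log:

    # /etc/containerd/config.toml (excerpt, illustrative)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"   # matches the image pulled in the log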
Oct 9 00:55:03.760463 kubelet[2810]: I1009 00:55:03.760073 2810 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 00:55:03.765676 kubelet[2810]: I1009 00:55:03.765622 2810 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 00:55:03.765676 kubelet[2810]: I1009 00:55:03.765645 2810 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 00:55:03.765904 kubelet[2810]: I1009 00:55:03.765884 2810 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 00:55:03.767541 kubelet[2810]: I1009 00:55:03.767523 2810 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 00:55:03.771272 kubelet[2810]: I1009 00:55:03.771250 2810 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:55:03.777196 sudo[2825]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 9 00:55:03.777582 sudo[2825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 9 00:55:03.779057 kubelet[2810]: I1009 00:55:03.779037 2810 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 00:55:03.779749 kubelet[2810]: I1009 00:55:03.779719 2810 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 00:55:03.779934 kubelet[2810]: I1009 00:55:03.779915 2810 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 00:55:03.780013 kubelet[2810]: I1009 00:55:03.779948 2810 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 00:55:03.780013 kubelet[2810]: I1009 00:55:03.779960 2810 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 00:55:03.780013 kubelet[2810]: I1009 00:55:03.779997 2810 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:55:03.780130 kubelet[2810]: I1009 00:55:03.780102 2810 kubelet.go:396] "Attempting to sync node with API server" Oct 9 00:55:03.780130 
kubelet[2810]: I1009 00:55:03.780122 2810 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 00:55:03.780176 kubelet[2810]: I1009 00:55:03.780150 2810 kubelet.go:312] "Adding apiserver pod source" Oct 9 00:55:03.780176 kubelet[2810]: I1009 00:55:03.780168 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 00:55:03.781557 kubelet[2810]: I1009 00:55:03.780912 2810 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 00:55:03.781557 kubelet[2810]: I1009 00:55:03.781190 2810 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 00:55:03.784456 kubelet[2810]: I1009 00:55:03.783700 2810 server.go:1256] "Started kubelet" Oct 9 00:55:03.784456 kubelet[2810]: I1009 00:55:03.784324 2810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 00:55:03.784800 kubelet[2810]: I1009 00:55:03.784775 2810 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 00:55:03.784842 kubelet[2810]: I1009 00:55:03.784817 2810 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 00:55:03.786027 kubelet[2810]: I1009 00:55:03.785746 2810 server.go:461] "Adding debug handlers to kubelet server" Oct 9 00:55:03.786609 kubelet[2810]: I1009 00:55:03.786583 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 00:55:03.793976 kubelet[2810]: I1009 00:55:03.793942 2810 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 00:55:03.794383 kubelet[2810]: I1009 00:55:03.794111 2810 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 00:55:03.794383 kubelet[2810]: I1009 00:55:03.794255 2810 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 00:55:03.807576 kubelet[2810]: I1009 00:55:03.807542 2810 factory.go:221] Registration of the systemd container factory successfully Oct 9 00:55:03.807695 kubelet[2810]: I1009 00:55:03.807653 2810 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 00:55:03.811319 kubelet[2810]: I1009 00:55:03.810874 2810 factory.go:221] Registration of the containerd container factory successfully Oct 9 00:55:03.813072 kubelet[2810]: E1009 00:55:03.813046 2810 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 00:55:03.815845 kubelet[2810]: I1009 00:55:03.815668 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 00:55:03.817304 kubelet[2810]: I1009 00:55:03.816860 2810 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 00:55:03.817304 kubelet[2810]: I1009 00:55:03.816901 2810 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 00:55:03.817304 kubelet[2810]: I1009 00:55:03.816923 2810 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 00:55:03.817304 kubelet[2810]: E1009 00:55:03.816978 2810 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 00:55:03.863923 kubelet[2810]: I1009 00:55:03.863888 2810 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 00:55:03.863923 kubelet[2810]: I1009 00:55:03.863916 2810 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 00:55:03.863923 kubelet[2810]: I1009 00:55:03.863934 2810 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:55:03.864592 kubelet[2810]: I1009 00:55:03.864089 2810 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 00:55:03.864592 kubelet[2810]: I1009 00:55:03.864124 2810 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 00:55:03.864592 kubelet[2810]: I1009 00:55:03.864147 2810 policy_none.go:49] "None policy: Start" Oct 9 00:55:03.864763 kubelet[2810]: I1009 00:55:03.864723 2810 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 00:55:03.864806 kubelet[2810]: I1009 00:55:03.864771 2810 state_mem.go:35] "Initializing new in-memory state store" Oct 9 00:55:03.864920 kubelet[2810]: I1009 00:55:03.864905 2810 state_mem.go:75] "Updated machine memory state" Oct 9 00:55:03.866365 kubelet[2810]: I1009 00:55:03.866350 2810 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 00:55:03.866643 kubelet[2810]: I1009 00:55:03.866582 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 00:55:03.898164 kubelet[2810]: I1009 00:55:03.898132 2810 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:55:03.905150 kubelet[2810]: I1009 00:55:03.905101 2810 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 00:55:03.905216 kubelet[2810]: I1009 00:55:03.905160 2810 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 00:55:03.917525 kubelet[2810]: I1009 00:55:03.917505 2810 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 00:55:03.917611 kubelet[2810]: I1009 00:55:03.917575 2810 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 00:55:03.917611 kubelet[2810]: I1009 00:55:03.917601 2810 topology_manager.go:215] "Topology Admit Handler" podUID="1680974ad8ea60a6d2ae47bd95476758" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 00:55:03.995692 kubelet[2810]: I1009 00:55:03.995407 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1680974ad8ea60a6d2ae47bd95476758-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1680974ad8ea60a6d2ae47bd95476758\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:55:03.995692 kubelet[2810]: I1009 00:55:03.995459 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:55:03.995692 kubelet[2810]: I1009 00:55:03.995484 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:55:03.995692 kubelet[2810]: I1009 00:55:03.995507 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1680974ad8ea60a6d2ae47bd95476758-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1680974ad8ea60a6d2ae47bd95476758\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:55:03.995692 kubelet[2810]: I1009 00:55:03.995533 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 00:55:03.995915 kubelet[2810]: I1009 00:55:03.995573 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1680974ad8ea60a6d2ae47bd95476758-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1680974ad8ea60a6d2ae47bd95476758\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:55:03.995915 kubelet[2810]: I1009 00:55:03.995605 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:55:03.995915 kubelet[2810]: I1009 00:55:03.995631 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:55:03.995915 kubelet[2810]: I1009 00:55:03.995661 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:55:04.226431 kubelet[2810]: E1009 00:55:04.226224 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:04.226431 kubelet[2810]: E1009 00:55:04.226225 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:04.226739 kubelet[2810]: E1009 00:55:04.226697 2810 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:04.299998 sudo[2825]: pam_unix(sudo:session): session closed for user root Oct 9 00:55:04.782049 kubelet[2810]: I1009 00:55:04.782001 2810 apiserver.go:52] "Watching apiserver" Oct 9 00:55:04.794311 kubelet[2810]: I1009 00:55:04.794251 2810 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 00:55:04.918918 kubelet[2810]: E1009 00:55:04.918875 2810 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 00:55:04.919862 kubelet[2810]: I1009 00:55:04.919123 2810 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9190852980000002 podStartE2EDuration="1.919085298s" podCreationTimestamp="2024-10-09 00:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:55:04.918934409 +0000 UTC m=+1.207246337" watchObservedRunningTime="2024-10-09 00:55:04.919085298 +0000 UTC m=+1.207397206" Oct 9 00:55:04.919862 kubelet[2810]: E1009 00:55:04.919380 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:04.921160 kubelet[2810]: E1009 00:55:04.921133 2810 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 9 00:55:04.921559 kubelet[2810]: E1009 00:55:04.921537 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:04.924347 kubelet[2810]: E1009 00:55:04.924311 2810 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 9 00:55:04.924679 kubelet[2810]: E1009 00:55:04.924549 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:04.926060 kubelet[2810]: I1009 00:55:04.926018 2810 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.925990892 podStartE2EDuration="1.925990892s" podCreationTimestamp="2024-10-09 00:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:55:04.92584957 +0000 UTC m=+1.214161488" watchObservedRunningTime="2024-10-09 00:55:04.925990892 +0000 UTC m=+1.214302800" Oct 9 00:55:05.620329 sudo[1785]: pam_unix(sudo:session): session closed for user root Oct 9 00:55:05.622003 sshd[1778]: pam_unix(sshd:session): session closed for user core Oct 9 00:55:05.625747 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:50450.service: Deactivated successfully. Oct 9 00:55:05.627791 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Oct 9 00:55:05.627928 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 00:55:05.628984 systemd-logind[1569]: Removed session 7. 
Oct 9 00:55:05.835287 kubelet[2810]: E1009 00:55:05.835254 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:05.835287 kubelet[2810]: E1009 00:55:05.835261 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:05.835922 kubelet[2810]: E1009 00:55:05.835359 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:06.836859 kubelet[2810]: E1009 00:55:06.836821 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:11.422417 kubelet[2810]: E1009 00:55:11.422381 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:11.433033 kubelet[2810]: I1009 00:55:11.432993 2810 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.432956285 podStartE2EDuration="8.432956285s" podCreationTimestamp="2024-10-09 00:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:55:04.931954222 +0000 UTC m=+1.220266131" watchObservedRunningTime="2024-10-09 00:55:11.432956285 +0000 UTC m=+7.721268193" Oct 9 00:55:11.844505 kubelet[2810]: E1009 00:55:11.844184 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:13.308071 kubelet[2810]: E1009 00:55:13.308042 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:13.847224 kubelet[2810]: E1009 00:55:13.847189 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:15.087865 kubelet[2810]: E1009 00:55:15.087833 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:15.653906 kubelet[2810]: I1009 00:55:15.653872 2810 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 00:55:15.654319 containerd[1588]: time="2024-10-09T00:55:15.654262691Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 00:55:15.654719 kubelet[2810]: I1009 00:55:15.654482 2810 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 00:55:16.685715 update_engine[1574]: I20241009 00:55:16.685644 1574 update_attempter.cc:509] Updating boot flags... 
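containerd's "No cni config template is specified, wait for other system components to drop the config" message and the Pod CIDR update above mean pod networking stays unconfigured until a CNI plugin writes a config under /etc/cni/net.d; on this node that plugin is Cilium, whose agent pod is created in the following lines. A minimal sketch of the kind of file the Cilium agent drops there once it is running; the file name and exact contents vary by Cilium release, so treat this as illustrative only:

    # /etc/cni/net.d/05-cilium.conflist (illustrative; name and fields depend on the Cilium version)
    {
      "cniVersion": "0.3.1",
      "name": "cilium",
      "plugins": [
        { "type": "cilium-cni" }
      ]
    }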
Oct 9 00:55:16.732609 kubelet[2810]: I1009 00:55:16.732169 2810 topology_manager.go:215] "Topology Admit Handler" podUID="8d5a6d5b-4ba6-46d8-8af4-29a0993170e2" podNamespace="kube-system" podName="kube-proxy-j2skp" Oct 9 00:55:16.740383 kubelet[2810]: I1009 00:55:16.738918 2810 topology_manager.go:215] "Topology Admit Handler" podUID="93f8e8c4-e661-48f0-9abb-505c45725ad5" podNamespace="kube-system" podName="cilium-4hjfm" Oct 9 00:55:16.752380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2895) Oct 9 00:55:16.777476 kubelet[2810]: I1009 00:55:16.777424 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-lib-modules\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.777614 kubelet[2810]: I1009 00:55:16.777486 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-bpf-maps\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.777614 kubelet[2810]: I1009 00:55:16.777511 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-hostproc\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.777614 kubelet[2810]: I1009 00:55:16.777538 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-cgroup\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.777614 kubelet[2810]: I1009 00:55:16.777567 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nmtf\" (UniqueName: \"kubernetes.io/projected/8d5a6d5b-4ba6-46d8-8af4-29a0993170e2-kube-api-access-2nmtf\") pod \"kube-proxy-j2skp\" (UID: \"8d5a6d5b-4ba6-46d8-8af4-29a0993170e2\") " pod="kube-system/kube-proxy-j2skp" Oct 9 00:55:16.777614 kubelet[2810]: I1009 00:55:16.777593 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93f8e8c4-e661-48f0-9abb-505c45725ad5-clustermesh-secrets\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.777614 kubelet[2810]: I1009 00:55:16.777615 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-xtables-lock\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.777813 kubelet[2810]: I1009 00:55:16.777638 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93f8e8c4-e661-48f0-9abb-505c45725ad5-hubble-tls\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.777813 
kubelet[2810]: I1009 00:55:16.777662 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-host-proc-sys-kernel\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.777813 kubelet[2810]: I1009 00:55:16.777690 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d5a6d5b-4ba6-46d8-8af4-29a0993170e2-kube-proxy\") pod \"kube-proxy-j2skp\" (UID: \"8d5a6d5b-4ba6-46d8-8af4-29a0993170e2\") " pod="kube-system/kube-proxy-j2skp" Oct 9 00:55:16.777813 kubelet[2810]: I1009 00:55:16.777717 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d5a6d5b-4ba6-46d8-8af4-29a0993170e2-xtables-lock\") pod \"kube-proxy-j2skp\" (UID: \"8d5a6d5b-4ba6-46d8-8af4-29a0993170e2\") " pod="kube-system/kube-proxy-j2skp" Oct 9 00:55:16.777813 kubelet[2810]: I1009 00:55:16.777740 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-etc-cni-netd\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.777813 kubelet[2810]: I1009 00:55:16.777782 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-config-path\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.778007 kubelet[2810]: I1009 00:55:16.777807 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-run\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.778007 kubelet[2810]: I1009 00:55:16.777832 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cni-path\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.778007 kubelet[2810]: I1009 00:55:16.777866 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d5a6d5b-4ba6-46d8-8af4-29a0993170e2-lib-modules\") pod \"kube-proxy-j2skp\" (UID: \"8d5a6d5b-4ba6-46d8-8af4-29a0993170e2\") " pod="kube-system/kube-proxy-j2skp" Oct 9 00:55:16.778007 kubelet[2810]: I1009 00:55:16.777912 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-host-proc-sys-net\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.778007 kubelet[2810]: I1009 00:55:16.777952 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx6w9\" (UniqueName: 
\"kubernetes.io/projected/93f8e8c4-e661-48f0-9abb-505c45725ad5-kube-api-access-xx6w9\") pod \"cilium-4hjfm\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " pod="kube-system/cilium-4hjfm" Oct 9 00:55:16.783309 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2898) Oct 9 00:55:17.047618 kubelet[2810]: E1009 00:55:17.047506 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:17.048323 containerd[1588]: time="2024-10-09T00:55:17.048101994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hjfm,Uid:93f8e8c4-e661-48f0-9abb-505c45725ad5,Namespace:kube-system,Attempt:0,}" Oct 9 00:55:17.103317 kubelet[2810]: I1009 00:55:17.101817 2810 topology_manager.go:215] "Topology Admit Handler" podUID="b25ccd21-9459-4491-b294-b8daba6a4ca4" podNamespace="kube-system" podName="cilium-operator-5cc964979-47z9s" Oct 9 00:55:17.139838 containerd[1588]: time="2024-10-09T00:55:17.139757130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:55:17.141057 containerd[1588]: time="2024-10-09T00:55:17.140445977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:55:17.141057 containerd[1588]: time="2024-10-09T00:55:17.140517201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:17.141057 containerd[1588]: time="2024-10-09T00:55:17.140682535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:17.182118 kubelet[2810]: I1009 00:55:17.182081 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b25ccd21-9459-4491-b294-b8daba6a4ca4-cilium-config-path\") pod \"cilium-operator-5cc964979-47z9s\" (UID: \"b25ccd21-9459-4491-b294-b8daba6a4ca4\") " pod="kube-system/cilium-operator-5cc964979-47z9s" Oct 9 00:55:17.182247 kubelet[2810]: I1009 00:55:17.182166 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnmws\" (UniqueName: \"kubernetes.io/projected/b25ccd21-9459-4491-b294-b8daba6a4ca4-kube-api-access-hnmws\") pod \"cilium-operator-5cc964979-47z9s\" (UID: \"b25ccd21-9459-4491-b294-b8daba6a4ca4\") " pod="kube-system/cilium-operator-5cc964979-47z9s" Oct 9 00:55:17.193376 containerd[1588]: time="2024-10-09T00:55:17.193325926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hjfm,Uid:93f8e8c4-e661-48f0-9abb-505c45725ad5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\"" Oct 9 00:55:17.194179 kubelet[2810]: E1009 00:55:17.194151 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:17.195682 containerd[1588]: time="2024-10-09T00:55:17.195651275Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 9 00:55:17.340540 kubelet[2810]: E1009 00:55:17.340217 2810 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:17.340687 containerd[1588]: time="2024-10-09T00:55:17.340647795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2skp,Uid:8d5a6d5b-4ba6-46d8-8af4-29a0993170e2,Namespace:kube-system,Attempt:0,}" Oct 9 00:55:17.363796 containerd[1588]: time="2024-10-09T00:55:17.363716003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:55:17.363796 containerd[1588]: time="2024-10-09T00:55:17.363782569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:55:17.363796 containerd[1588]: time="2024-10-09T00:55:17.363795483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:17.363939 containerd[1588]: time="2024-10-09T00:55:17.363897497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:17.403659 containerd[1588]: time="2024-10-09T00:55:17.403617214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2skp,Uid:8d5a6d5b-4ba6-46d8-8af4-29a0993170e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"83c03d7901df5c739c43820fcfdbfbf98353a693c7ccb6126fbd24b6b58d8126\"" Oct 9 00:55:17.404288 kubelet[2810]: E1009 00:55:17.404262 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:17.406427 containerd[1588]: time="2024-10-09T00:55:17.406390133Z" level=info msg="CreateContainer within sandbox \"83c03d7901df5c739c43820fcfdbfbf98353a693c7ccb6126fbd24b6b58d8126\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 00:55:17.423074 containerd[1588]: time="2024-10-09T00:55:17.423037325Z" level=info msg="CreateContainer within sandbox \"83c03d7901df5c739c43820fcfdbfbf98353a693c7ccb6126fbd24b6b58d8126\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f1b512b8ddd4c91b20ebd013babb4161de06d47cfee223e569f801e5504f495\"" Oct 9 00:55:17.423768 containerd[1588]: time="2024-10-09T00:55:17.423732473Z" level=info msg="StartContainer for \"3f1b512b8ddd4c91b20ebd013babb4161de06d47cfee223e569f801e5504f495\"" Oct 9 00:55:17.424365 kubelet[2810]: E1009 00:55:17.423923 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:17.424595 containerd[1588]: time="2024-10-09T00:55:17.424537128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-47z9s,Uid:b25ccd21-9459-4491-b294-b8daba6a4ca4,Namespace:kube-system,Attempt:0,}" Oct 9 00:55:17.454371 containerd[1588]: time="2024-10-09T00:55:17.453717715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:55:17.454371 containerd[1588]: time="2024-10-09T00:55:17.453792226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:55:17.454371 containerd[1588]: time="2024-10-09T00:55:17.453807024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:17.454680 containerd[1588]: time="2024-10-09T00:55:17.454537920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:17.490802 containerd[1588]: time="2024-10-09T00:55:17.490763323Z" level=info msg="StartContainer for \"3f1b512b8ddd4c91b20ebd013babb4161de06d47cfee223e569f801e5504f495\" returns successfully" Oct 9 00:55:17.513421 containerd[1588]: time="2024-10-09T00:55:17.513344506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-47z9s,Uid:b25ccd21-9459-4491-b294-b8daba6a4ca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5\"" Oct 9 00:55:17.513999 kubelet[2810]: E1009 00:55:17.513975 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:17.857229 kubelet[2810]: E1009 00:55:17.857197 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:17.864889 kubelet[2810]: I1009 00:55:17.864784 2810 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j2skp" podStartSLOduration=1.864745356 podStartE2EDuration="1.864745356s" podCreationTimestamp="2024-10-09 00:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:55:17.864300453 +0000 UTC m=+14.152612361" watchObservedRunningTime="2024-10-09 00:55:17.864745356 +0000 UTC m=+14.153057264" Oct 9 00:55:21.086433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831806851.mount: Deactivated successfully. 
Oct 9 00:55:23.343025 containerd[1588]: time="2024-10-09T00:55:23.342972915Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:55:23.343824 containerd[1588]: time="2024-10-09T00:55:23.343791240Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735343" Oct 9 00:55:23.345067 containerd[1588]: time="2024-10-09T00:55:23.345014332Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:55:23.346403 containerd[1588]: time="2024-10-09T00:55:23.346376144Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.150623858s" Oct 9 00:55:23.346442 containerd[1588]: time="2024-10-09T00:55:23.346407434Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 9 00:55:23.346881 containerd[1588]: time="2024-10-09T00:55:23.346855670Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 9 00:55:23.349005 containerd[1588]: time="2024-10-09T00:55:23.348985173Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 9 00:55:23.360773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581195125.mount: Deactivated successfully. 
Oct 9 00:55:23.363336 containerd[1588]: time="2024-10-09T00:55:23.363275260Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\"" Oct 9 00:55:23.363737 containerd[1588]: time="2024-10-09T00:55:23.363712888Z" level=info msg="StartContainer for \"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\"" Oct 9 00:55:23.411008 containerd[1588]: time="2024-10-09T00:55:23.410960099Z" level=info msg="StartContainer for \"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\" returns successfully" Oct 9 00:55:23.868138 kubelet[2810]: E1009 00:55:23.868106 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:23.909466 containerd[1588]: time="2024-10-09T00:55:23.909425293Z" level=error msg="collecting metrics for 15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d" error="cgroups: cgroup deleted: unknown" Oct 9 00:55:24.036969 containerd[1588]: time="2024-10-09T00:55:24.036893690Z" level=info msg="shim disconnected" id=15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d namespace=k8s.io Oct 9 00:55:24.036969 containerd[1588]: time="2024-10-09T00:55:24.036953092Z" level=warning msg="cleaning up after shim disconnected" id=15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d namespace=k8s.io Oct 9 00:55:24.036969 containerd[1588]: time="2024-10-09T00:55:24.036961318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:55:24.050173 containerd[1588]: time="2024-10-09T00:55:24.050110698Z" level=warning msg="cleanup warnings time=\"2024-10-09T00:55:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 00:55:24.358988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d-rootfs.mount: Deactivated successfully. 
Oct 9 00:55:24.871471 kubelet[2810]: E1009 00:55:24.871434 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:24.873628 containerd[1588]: time="2024-10-09T00:55:24.873593127Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 9 00:55:24.891882 containerd[1588]: time="2024-10-09T00:55:24.891840175Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\"" Oct 9 00:55:24.892185 containerd[1588]: time="2024-10-09T00:55:24.892166100Z" level=info msg="StartContainer for \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\"" Oct 9 00:55:24.950683 containerd[1588]: time="2024-10-09T00:55:24.950535161Z" level=info msg="StartContainer for \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\" returns successfully" Oct 9 00:55:24.962465 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 00:55:24.962814 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:55:24.962880 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:55:24.971829 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:55:24.990757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:55:25.067628 containerd[1588]: time="2024-10-09T00:55:25.067564549Z" level=info msg="shim disconnected" id=5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052 namespace=k8s.io Oct 9 00:55:25.067628 containerd[1588]: time="2024-10-09T00:55:25.067619272Z" level=warning msg="cleaning up after shim disconnected" id=5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052 namespace=k8s.io Oct 9 00:55:25.067628 containerd[1588]: time="2024-10-09T00:55:25.067628229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:55:25.361557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052-rootfs.mount: Deactivated successfully. 
Oct 9 00:55:25.457042 containerd[1588]: time="2024-10-09T00:55:25.456980279Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:55:25.458311 containerd[1588]: time="2024-10-09T00:55:25.458258612Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907245" Oct 9 00:55:25.459866 containerd[1588]: time="2024-10-09T00:55:25.459815761Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:55:25.461275 containerd[1588]: time="2024-10-09T00:55:25.461244819Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.114359132s" Oct 9 00:55:25.461382 containerd[1588]: time="2024-10-09T00:55:25.461280516Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 9 00:55:25.464973 containerd[1588]: time="2024-10-09T00:55:25.464934212Z" level=info msg="CreateContainer within sandbox \"bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 9 00:55:25.476209 containerd[1588]: time="2024-10-09T00:55:25.476161584Z" level=info msg="CreateContainer within sandbox \"bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\"" Oct 9 00:55:25.476734 containerd[1588]: time="2024-10-09T00:55:25.476700221Z" level=info msg="StartContainer for \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\"" Oct 9 00:55:25.530859 containerd[1588]: time="2024-10-09T00:55:25.530809003Z" level=info msg="StartContainer for \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\" returns successfully" Oct 9 00:55:25.873967 kubelet[2810]: E1009 00:55:25.873937 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:25.876264 kubelet[2810]: E1009 00:55:25.876244 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:25.877857 containerd[1588]: time="2024-10-09T00:55:25.877824542Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 9 00:55:26.200449 containerd[1588]: time="2024-10-09T00:55:26.200318119Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\"" Oct 9 00:55:26.201224 containerd[1588]: time="2024-10-09T00:55:26.201039781Z" level=info msg="StartContainer for \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\"" Oct 9 00:55:26.221468 kubelet[2810]: I1009 00:55:26.220862 2810 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-47z9s" podStartSLOduration=1.273925458 podStartE2EDuration="9.220817266s" podCreationTimestamp="2024-10-09 00:55:17 +0000 UTC" firstStartedPulling="2024-10-09 00:55:17.51466503 +0000 UTC m=+13.802976928" lastFinishedPulling="2024-10-09 00:55:25.461556828 +0000 UTC m=+21.749868736" observedRunningTime="2024-10-09 00:55:26.077859091 +0000 UTC m=+22.366170999" watchObservedRunningTime="2024-10-09 00:55:26.220817266 +0000 UTC m=+22.509129174" Oct 9 00:55:26.305859 containerd[1588]: time="2024-10-09T00:55:26.305807099Z" level=info msg="StartContainer for \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\" returns successfully" Oct 9 00:55:26.586972 containerd[1588]: time="2024-10-09T00:55:26.586821260Z" level=info msg="shim disconnected" id=756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4 namespace=k8s.io Oct 9 00:55:26.586972 containerd[1588]: time="2024-10-09T00:55:26.586885922Z" level=warning msg="cleaning up after shim disconnected" id=756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4 namespace=k8s.io Oct 9 00:55:26.586972 containerd[1588]: time="2024-10-09T00:55:26.586895259Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:55:26.886914 kubelet[2810]: E1009 00:55:26.886827 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:26.886914 kubelet[2810]: E1009 00:55:26.886855 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:26.888754 containerd[1588]: time="2024-10-09T00:55:26.888700978Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 9 00:55:26.907692 containerd[1588]: time="2024-10-09T00:55:26.907653778Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\"" Oct 9 00:55:26.908148 containerd[1588]: time="2024-10-09T00:55:26.908095230Z" level=info msg="StartContainer for \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\"" Oct 9 00:55:26.962112 containerd[1588]: time="2024-10-09T00:55:26.962071800Z" level=info msg="StartContainer for \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\" returns successfully" Oct 9 00:55:26.984477 containerd[1588]: time="2024-10-09T00:55:26.984407481Z" level=info msg="shim disconnected" id=56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf namespace=k8s.io Oct 9 00:55:26.984477 containerd[1588]: time="2024-10-09T00:55:26.984464689Z" level=warning msg="cleaning up after shim disconnected" 
id=56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf namespace=k8s.io Oct 9 00:55:26.984477 containerd[1588]: time="2024-10-09T00:55:26.984475009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:55:27.358554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf-rootfs.mount: Deactivated successfully. Oct 9 00:55:27.896476 kubelet[2810]: E1009 00:55:27.896444 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:27.899179 containerd[1588]: time="2024-10-09T00:55:27.899124885Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 9 00:55:27.919700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787707944.mount: Deactivated successfully. Oct 9 00:55:27.924450 containerd[1588]: time="2024-10-09T00:55:27.924405745Z" level=info msg="CreateContainer within sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\"" Oct 9 00:55:27.924923 containerd[1588]: time="2024-10-09T00:55:27.924891220Z" level=info msg="StartContainer for \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\"" Oct 9 00:55:27.977916 containerd[1588]: time="2024-10-09T00:55:27.977858938Z" level=info msg="StartContainer for \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\" returns successfully" Oct 9 00:55:28.114167 kubelet[2810]: I1009 00:55:28.114129 2810 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 00:55:28.131477 kubelet[2810]: I1009 00:55:28.131446 2810 topology_manager.go:215] "Topology Admit Handler" podUID="166a9ef0-3320-47e4-9b54-803e22af78bf" podNamespace="kube-system" podName="coredns-76f75df574-gggwf" Oct 9 00:55:28.132640 kubelet[2810]: I1009 00:55:28.132531 2810 topology_manager.go:215] "Topology Admit Handler" podUID="54606357-c810-4e27-a220-ffb71a63912b" podNamespace="kube-system" podName="coredns-76f75df574-kw65r" Oct 9 00:55:28.155368 kubelet[2810]: I1009 00:55:28.155218 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6q4\" (UniqueName: \"kubernetes.io/projected/54606357-c810-4e27-a220-ffb71a63912b-kube-api-access-7k6q4\") pod \"coredns-76f75df574-kw65r\" (UID: \"54606357-c810-4e27-a220-ffb71a63912b\") " pod="kube-system/coredns-76f75df574-kw65r" Oct 9 00:55:28.155368 kubelet[2810]: I1009 00:55:28.155256 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9qk\" (UniqueName: \"kubernetes.io/projected/166a9ef0-3320-47e4-9b54-803e22af78bf-kube-api-access-bn9qk\") pod \"coredns-76f75df574-gggwf\" (UID: \"166a9ef0-3320-47e4-9b54-803e22af78bf\") " pod="kube-system/coredns-76f75df574-gggwf" Oct 9 00:55:28.155368 kubelet[2810]: I1009 00:55:28.155361 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/166a9ef0-3320-47e4-9b54-803e22af78bf-config-volume\") pod \"coredns-76f75df574-gggwf\" (UID: \"166a9ef0-3320-47e4-9b54-803e22af78bf\") " 
pod="kube-system/coredns-76f75df574-gggwf" Oct 9 00:55:28.155539 kubelet[2810]: I1009 00:55:28.155399 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54606357-c810-4e27-a220-ffb71a63912b-config-volume\") pod \"coredns-76f75df574-kw65r\" (UID: \"54606357-c810-4e27-a220-ffb71a63912b\") " pod="kube-system/coredns-76f75df574-kw65r" Oct 9 00:55:28.438248 kubelet[2810]: E1009 00:55:28.438134 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:28.438833 containerd[1588]: time="2024-10-09T00:55:28.438791958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gggwf,Uid:166a9ef0-3320-47e4-9b54-803e22af78bf,Namespace:kube-system,Attempt:0,}" Oct 9 00:55:28.439731 kubelet[2810]: E1009 00:55:28.439590 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:28.440081 containerd[1588]: time="2024-10-09T00:55:28.440044049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kw65r,Uid:54606357-c810-4e27-a220-ffb71a63912b,Namespace:kube-system,Attempt:0,}" Oct 9 00:55:28.901122 kubelet[2810]: E1009 00:55:28.901081 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:29.030190 kubelet[2810]: I1009 00:55:29.030141 2810 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4hjfm" podStartSLOduration=6.878640492 podStartE2EDuration="13.030091376s" podCreationTimestamp="2024-10-09 00:55:16 +0000 UTC" firstStartedPulling="2024-10-09 00:55:17.195213014 +0000 UTC m=+13.483524922" lastFinishedPulling="2024-10-09 00:55:23.346663898 +0000 UTC m=+19.634975806" observedRunningTime="2024-10-09 00:55:29.029693787 +0000 UTC m=+25.318005715" watchObservedRunningTime="2024-10-09 00:55:29.030091376 +0000 UTC m=+25.318403284" Oct 9 00:55:29.469582 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:59414.service - OpenSSH per-connection server daemon (10.0.0.1:59414). Oct 9 00:55:29.503213 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 59414 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:55:29.504838 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:55:29.509033 systemd-logind[1569]: New session 8 of user core. Oct 9 00:55:29.516602 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 00:55:29.641440 sshd[3646]: pam_unix(sshd:session): session closed for user core Oct 9 00:55:29.645838 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:59414.service: Deactivated successfully. Oct 9 00:55:29.647901 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Oct 9 00:55:29.648097 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 00:55:29.649428 systemd-logind[1569]: Removed session 8. 
Oct 9 00:55:29.903204 kubelet[2810]: E1009 00:55:29.903174 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:30.226508 systemd-networkd[1256]: cilium_host: Link UP Oct 9 00:55:30.226684 systemd-networkd[1256]: cilium_net: Link UP Oct 9 00:55:30.226688 systemd-networkd[1256]: cilium_net: Gained carrier Oct 9 00:55:30.226872 systemd-networkd[1256]: cilium_host: Gained carrier Oct 9 00:55:30.227203 systemd-networkd[1256]: cilium_host: Gained IPv6LL Oct 9 00:55:30.231673 systemd-networkd[1256]: cilium_net: Gained IPv6LL Oct 9 00:55:30.324142 systemd-networkd[1256]: cilium_vxlan: Link UP Oct 9 00:55:30.324153 systemd-networkd[1256]: cilium_vxlan: Gained carrier Oct 9 00:55:30.522337 kernel: NET: Registered PF_ALG protocol family Oct 9 00:55:30.904771 kubelet[2810]: E1009 00:55:30.904733 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:31.161746 systemd-networkd[1256]: lxc_health: Link UP Oct 9 00:55:31.171100 systemd-networkd[1256]: lxc_health: Gained carrier Oct 9 00:55:31.480566 systemd-networkd[1256]: lxc6134e9e6b6ba: Link UP Oct 9 00:55:31.487323 kernel: eth0: renamed from tmp62f8c Oct 9 00:55:31.492617 systemd-networkd[1256]: lxc6134e9e6b6ba: Gained carrier Oct 9 00:55:31.503056 systemd-networkd[1256]: lxcc639d544723e: Link UP Oct 9 00:55:31.511347 kernel: eth0: renamed from tmp2cd1b Oct 9 00:55:31.516486 systemd-networkd[1256]: lxcc639d544723e: Gained carrier Oct 9 00:55:31.821437 systemd-networkd[1256]: cilium_vxlan: Gained IPv6LL Oct 9 00:55:32.650437 systemd-networkd[1256]: lxcc639d544723e: Gained IPv6LL Oct 9 00:55:32.778489 systemd-networkd[1256]: lxc_health: Gained IPv6LL Oct 9 00:55:33.051785 kubelet[2810]: E1009 00:55:33.051573 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:33.098490 systemd-networkd[1256]: lxc6134e9e6b6ba: Gained IPv6LL Oct 9 00:55:34.655524 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:59428.service - OpenSSH per-connection server daemon (10.0.0.1:59428). Oct 9 00:55:34.687390 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 59428 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:55:34.688806 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:55:34.692758 systemd-logind[1569]: New session 9 of user core. Oct 9 00:55:34.703524 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 00:55:34.818488 sshd[4039]: pam_unix(sshd:session): session closed for user core Oct 9 00:55:34.822255 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:59428.service: Deactivated successfully. Oct 9 00:55:34.825070 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. Oct 9 00:55:34.825327 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 00:55:34.826591 systemd-logind[1569]: Removed session 9. Oct 9 00:55:35.136840 containerd[1588]: time="2024-10-09T00:55:35.136756271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:55:35.136840 containerd[1588]: time="2024-10-09T00:55:35.136807858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:55:35.136840 containerd[1588]: time="2024-10-09T00:55:35.136818448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:35.137362 containerd[1588]: time="2024-10-09T00:55:35.136895854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:35.160256 systemd-resolved[1482]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:55:35.185180 containerd[1588]: time="2024-10-09T00:55:35.184895077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:55:35.185180 containerd[1588]: time="2024-10-09T00:55:35.184946725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:55:35.185180 containerd[1588]: time="2024-10-09T00:55:35.184956413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:35.185180 containerd[1588]: time="2024-10-09T00:55:35.185028097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:55:35.186305 containerd[1588]: time="2024-10-09T00:55:35.186264023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kw65r,Uid:54606357-c810-4e27-a220-ffb71a63912b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cd1b5fdc2c1e2562015c478239277dbfd5f75829b0d6496903f3e99fc0391e9\"" Oct 9 00:55:35.189721 kubelet[2810]: E1009 00:55:35.189689 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:35.192389 containerd[1588]: time="2024-10-09T00:55:35.191986220Z" level=info msg="CreateContainer within sandbox \"2cd1b5fdc2c1e2562015c478239277dbfd5f75829b0d6496903f3e99fc0391e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:55:35.213341 containerd[1588]: time="2024-10-09T00:55:35.213285355Z" level=info msg="CreateContainer within sandbox \"2cd1b5fdc2c1e2562015c478239277dbfd5f75829b0d6496903f3e99fc0391e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"499c75bfd79e69891a9c6bd0ca2060ab28bfc971d9a119f42f43cd617cbad230\"" Oct 9 00:55:35.214381 containerd[1588]: time="2024-10-09T00:55:35.213616168Z" level=info msg="StartContainer for \"499c75bfd79e69891a9c6bd0ca2060ab28bfc971d9a119f42f43cd617cbad230\"" Oct 9 00:55:35.214986 systemd-resolved[1482]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:55:35.251953 containerd[1588]: time="2024-10-09T00:55:35.251911733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gggwf,Uid:166a9ef0-3320-47e4-9b54-803e22af78bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"62f8cf413c31419418d8444bce4cf1c19845cb0f20ad56b7441a50c77b4339d0\"" Oct 9 00:55:35.252489 kubelet[2810]: E1009 00:55:35.252465 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:35.257055 containerd[1588]: time="2024-10-09T00:55:35.257011169Z" 
level=info msg="CreateContainer within sandbox \"62f8cf413c31419418d8444bce4cf1c19845cb0f20ad56b7441a50c77b4339d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:55:35.269976 containerd[1588]: time="2024-10-09T00:55:35.269918976Z" level=info msg="StartContainer for \"499c75bfd79e69891a9c6bd0ca2060ab28bfc971d9a119f42f43cd617cbad230\" returns successfully" Oct 9 00:55:35.273813 containerd[1588]: time="2024-10-09T00:55:35.273775582Z" level=info msg="CreateContainer within sandbox \"62f8cf413c31419418d8444bce4cf1c19845cb0f20ad56b7441a50c77b4339d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ffa0ab130cb90eb88263464ec5b2466b109e58738060d6b222f5e1033802a211\"" Oct 9 00:55:35.280717 containerd[1588]: time="2024-10-09T00:55:35.280684742Z" level=info msg="StartContainer for \"ffa0ab130cb90eb88263464ec5b2466b109e58738060d6b222f5e1033802a211\"" Oct 9 00:55:35.342400 containerd[1588]: time="2024-10-09T00:55:35.342356050Z" level=info msg="StartContainer for \"ffa0ab130cb90eb88263464ec5b2466b109e58738060d6b222f5e1033802a211\" returns successfully" Oct 9 00:55:35.916825 kubelet[2810]: E1009 00:55:35.916307 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:35.918659 kubelet[2810]: E1009 00:55:35.918588 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:35.950180 kubelet[2810]: I1009 00:55:35.950042 2810 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kw65r" podStartSLOduration=18.950000605 podStartE2EDuration="18.950000605s" podCreationTimestamp="2024-10-09 00:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:55:35.926515065 +0000 UTC m=+32.214826983" watchObservedRunningTime="2024-10-09 00:55:35.950000605 +0000 UTC m=+32.238312513" Oct 9 00:55:35.960942 kubelet[2810]: I1009 00:55:35.960838 2810 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gggwf" podStartSLOduration=18.960783524 podStartE2EDuration="18.960783524s" podCreationTimestamp="2024-10-09 00:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:55:35.949991428 +0000 UTC m=+32.238303336" watchObservedRunningTime="2024-10-09 00:55:35.960783524 +0000 UTC m=+32.249095432" Oct 9 00:55:36.832862 kubelet[2810]: I1009 00:55:36.832816 2810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 00:55:36.835347 kubelet[2810]: E1009 00:55:36.834505 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:36.920432 kubelet[2810]: E1009 00:55:36.920407 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:36.920600 kubelet[2810]: E1009 00:55:36.920590 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 
00:55:36.920833 kubelet[2810]: E1009 00:55:36.920810 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:37.922060 kubelet[2810]: E1009 00:55:37.922026 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:37.922060 kubelet[2810]: E1009 00:55:37.922061 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:55:39.837757 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:50070.service - OpenSSH per-connection server daemon (10.0.0.1:50070). Oct 9 00:55:39.873054 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 50070 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:55:39.874956 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:55:39.879338 systemd-logind[1569]: New session 10 of user core. Oct 9 00:55:39.889816 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 00:55:40.016380 sshd[4228]: pam_unix(sshd:session): session closed for user core Oct 9 00:55:40.019938 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:50070.service: Deactivated successfully. Oct 9 00:55:40.022425 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Oct 9 00:55:40.022486 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 00:55:40.023375 systemd-logind[1569]: Removed session 10. Oct 9 00:55:45.032504 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:50076.service - OpenSSH per-connection server daemon (10.0.0.1:50076). Oct 9 00:55:45.062357 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 50076 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:55:45.063993 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:55:45.067847 systemd-logind[1569]: New session 11 of user core. Oct 9 00:55:45.074605 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 00:55:45.185323 sshd[4244]: pam_unix(sshd:session): session closed for user core Oct 9 00:55:45.188834 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:50076.service: Deactivated successfully. Oct 9 00:55:45.191067 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Oct 9 00:55:45.191152 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 00:55:45.192352 systemd-logind[1569]: Removed session 11. Oct 9 00:55:50.195781 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:54436.service - OpenSSH per-connection server daemon (10.0.0.1:54436). Oct 9 00:55:50.225323 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 54436 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:55:50.226820 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:55:50.231025 systemd-logind[1569]: New session 12 of user core. Oct 9 00:55:50.246642 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 00:55:50.364603 sshd[4262]: pam_unix(sshd:session): session closed for user core Oct 9 00:55:50.374595 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:54440.service - OpenSSH per-connection server daemon (10.0.0.1:54440). 
Oct 9 00:55:50.375222 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:54436.service: Deactivated successfully. Oct 9 00:55:50.377180 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 00:55:50.378933 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Oct 9 00:55:50.380038 systemd-logind[1569]: Removed session 12. Oct 9 00:55:50.406390 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 54440 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:55:50.408063 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:55:50.412323 systemd-logind[1569]: New session 13 of user core. Oct 9 00:55:50.424709 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 00:55:50.589061 sshd[4275]: pam_unix(sshd:session): session closed for user core Oct 9 00:55:50.595088 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:54446.service - OpenSSH per-connection server daemon (10.0.0.1:54446). Oct 9 00:55:50.595756 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:54440.service: Deactivated successfully. Oct 9 00:55:50.604561 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 00:55:50.606704 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. Oct 9 00:55:50.609133 systemd-logind[1569]: Removed session 13. Oct 9 00:55:50.641272 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 54446 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:55:50.643235 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:55:50.647970 systemd-logind[1569]: New session 14 of user core. Oct 9 00:55:50.655603 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 00:55:50.782124 sshd[4289]: pam_unix(sshd:session): session closed for user core Oct 9 00:55:50.786077 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:54446.service: Deactivated successfully. Oct 9 00:55:50.788400 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit. Oct 9 00:55:50.788485 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 00:55:50.789939 systemd-logind[1569]: Removed session 14. Oct 9 00:55:55.796516 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:54454.service - OpenSSH per-connection server daemon (10.0.0.1:54454). Oct 9 00:55:55.827104 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 54454 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:55:55.829025 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:55:55.833199 systemd-logind[1569]: New session 15 of user core. Oct 9 00:55:55.843671 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 00:55:55.952528 sshd[4307]: pam_unix(sshd:session): session closed for user core Oct 9 00:55:55.956203 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:54454.service: Deactivated successfully. Oct 9 00:55:55.958818 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit. Oct 9 00:55:55.959008 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 00:55:55.960145 systemd-logind[1569]: Removed session 15. Oct 9 00:56:00.960493 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:45170.service - OpenSSH per-connection server daemon (10.0.0.1:45170). 
Oct 9 00:56:00.991048 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 45170 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:00.992366 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:00.995854 systemd-logind[1569]: New session 16 of user core. Oct 9 00:56:01.002580 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 00:56:01.104104 sshd[4322]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:01.111496 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:45178.service - OpenSSH per-connection server daemon (10.0.0.1:45178). Oct 9 00:56:01.111964 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:45170.service: Deactivated successfully. Oct 9 00:56:01.114843 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit. Oct 9 00:56:01.115496 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 00:56:01.116698 systemd-logind[1569]: Removed session 16. Oct 9 00:56:01.140950 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 45178 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:01.142380 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:01.145996 systemd-logind[1569]: New session 17 of user core. Oct 9 00:56:01.152566 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 00:56:01.322216 sshd[4334]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:01.330637 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:45186.service - OpenSSH per-connection server daemon (10.0.0.1:45186). Oct 9 00:56:01.331253 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:45178.service: Deactivated successfully. Oct 9 00:56:01.333585 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 00:56:01.335217 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Oct 9 00:56:01.336396 systemd-logind[1569]: Removed session 17. Oct 9 00:56:01.364124 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 45186 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:01.365580 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:01.369376 systemd-logind[1569]: New session 18 of user core. Oct 9 00:56:01.379554 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 00:56:02.785839 sshd[4349]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:02.794126 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:45192.service - OpenSSH per-connection server daemon (10.0.0.1:45192). Oct 9 00:56:02.794702 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:45186.service: Deactivated successfully. Oct 9 00:56:02.801586 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 00:56:02.802773 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Oct 9 00:56:02.803869 systemd-logind[1569]: Removed session 18. Oct 9 00:56:02.827047 sshd[4370]: Accepted publickey for core from 10.0.0.1 port 45192 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:02.828742 sshd[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:02.832898 systemd-logind[1569]: New session 19 of user core. Oct 9 00:56:02.844559 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 9 00:56:03.088382 sshd[4370]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:03.099657 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:45196.service - OpenSSH per-connection server daemon (10.0.0.1:45196). Oct 9 00:56:03.100147 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:45192.service: Deactivated successfully. Oct 9 00:56:03.102957 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Oct 9 00:56:03.104181 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 00:56:03.105809 systemd-logind[1569]: Removed session 19. Oct 9 00:56:03.135160 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 45196 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:03.137035 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:03.141567 systemd-logind[1569]: New session 20 of user core. Oct 9 00:56:03.151636 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 00:56:03.259765 sshd[4383]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:03.264162 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Oct 9 00:56:03.264511 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:45196.service: Deactivated successfully. Oct 9 00:56:03.267382 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 00:56:03.268477 systemd-logind[1569]: Removed session 20. Oct 9 00:56:08.279585 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:60124.service - OpenSSH per-connection server daemon (10.0.0.1:60124). Oct 9 00:56:08.309479 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 60124 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:08.310982 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:08.314763 systemd-logind[1569]: New session 21 of user core. Oct 9 00:56:08.326551 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 00:56:08.432704 sshd[4403]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:08.436759 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:60124.service: Deactivated successfully. Oct 9 00:56:08.439418 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit. Oct 9 00:56:08.439428 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 00:56:08.441017 systemd-logind[1569]: Removed session 21. Oct 9 00:56:13.444708 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:60130.service - OpenSSH per-connection server daemon (10.0.0.1:60130). Oct 9 00:56:13.475751 sshd[4421]: Accepted publickey for core from 10.0.0.1 port 60130 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:13.477534 sshd[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:13.481558 systemd-logind[1569]: New session 22 of user core. Oct 9 00:56:13.486544 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 00:56:13.592132 sshd[4421]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:13.596234 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:60130.service: Deactivated successfully. Oct 9 00:56:13.598722 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit. Oct 9 00:56:13.598773 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 00:56:13.599816 systemd-logind[1569]: Removed session 22. 
Oct 9 00:56:16.817900 kubelet[2810]: E1009 00:56:16.817866 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:56:18.612655 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:51766.service - OpenSSH per-connection server daemon (10.0.0.1:51766). Oct 9 00:56:18.644346 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 51766 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:18.646387 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:18.650749 systemd-logind[1569]: New session 23 of user core. Oct 9 00:56:18.660114 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 00:56:18.771534 sshd[4439]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:18.775148 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:51766.service: Deactivated successfully. Oct 9 00:56:18.777438 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit. Oct 9 00:56:18.777557 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 00:56:18.778697 systemd-logind[1569]: Removed session 23. Oct 9 00:56:23.782530 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:51768.service - OpenSSH per-connection server daemon (10.0.0.1:51768). Oct 9 00:56:23.813350 sshd[4454]: Accepted publickey for core from 10.0.0.1 port 51768 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:23.814965 sshd[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:23.819432 systemd-logind[1569]: New session 24 of user core. Oct 9 00:56:23.831556 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 00:56:23.939432 sshd[4454]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:23.946650 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:51778.service - OpenSSH per-connection server daemon (10.0.0.1:51778). Oct 9 00:56:23.947329 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:51768.service: Deactivated successfully. Oct 9 00:56:23.952672 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 00:56:23.953745 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit. Oct 9 00:56:23.954861 systemd-logind[1569]: Removed session 24. Oct 9 00:56:23.979223 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 51778 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:23.980950 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:23.985277 systemd-logind[1569]: New session 25 of user core. Oct 9 00:56:23.999684 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 9 00:56:25.326357 containerd[1588]: time="2024-10-09T00:56:25.323202124Z" level=info msg="StopContainer for \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\" with timeout 30 (s)" Oct 9 00:56:25.329367 containerd[1588]: time="2024-10-09T00:56:25.329332023Z" level=info msg="Stop container \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\" with signal terminated" Oct 9 00:56:25.357766 containerd[1588]: time="2024-10-09T00:56:25.357686003Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:56:25.363333 containerd[1588]: time="2024-10-09T00:56:25.361477018Z" level=info msg="StopContainer for \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\" with timeout 2 (s)" Oct 9 00:56:25.363333 containerd[1588]: time="2024-10-09T00:56:25.361719593Z" level=info msg="Stop container \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\" with signal terminated" Oct 9 00:56:25.362633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43-rootfs.mount: Deactivated successfully. Oct 9 00:56:25.366351 containerd[1588]: time="2024-10-09T00:56:25.366280795Z" level=info msg="shim disconnected" id=c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43 namespace=k8s.io Oct 9 00:56:25.366351 containerd[1588]: time="2024-10-09T00:56:25.366343475Z" level=warning msg="cleaning up after shim disconnected" id=c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43 namespace=k8s.io Oct 9 00:56:25.366421 containerd[1588]: time="2024-10-09T00:56:25.366355217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:56:25.368907 systemd-networkd[1256]: lxc_health: Link DOWN Oct 9 00:56:25.368918 systemd-networkd[1256]: lxc_health: Lost carrier Oct 9 00:56:25.384536 containerd[1588]: time="2024-10-09T00:56:25.384489642Z" level=info msg="StopContainer for \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\" returns successfully" Oct 9 00:56:25.385059 containerd[1588]: time="2024-10-09T00:56:25.385031050Z" level=info msg="StopPodSandbox for \"bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5\"" Oct 9 00:56:25.385159 containerd[1588]: time="2024-10-09T00:56:25.385061298Z" level=info msg="Container to stop \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:56:25.387795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5-shm.mount: Deactivated successfully. Oct 9 00:56:25.418312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5-rootfs.mount: Deactivated successfully. Oct 9 00:56:25.421117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0-rootfs.mount: Deactivated successfully. 
Oct 9 00:56:25.423861 containerd[1588]: time="2024-10-09T00:56:25.423695329Z" level=info msg="shim disconnected" id=bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5 namespace=k8s.io Oct 9 00:56:25.423989 containerd[1588]: time="2024-10-09T00:56:25.423889171Z" level=warning msg="cleaning up after shim disconnected" id=bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5 namespace=k8s.io Oct 9 00:56:25.423989 containerd[1588]: time="2024-10-09T00:56:25.423901515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:56:25.424049 containerd[1588]: time="2024-10-09T00:56:25.423885564Z" level=info msg="shim disconnected" id=92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0 namespace=k8s.io Oct 9 00:56:25.424049 containerd[1588]: time="2024-10-09T00:56:25.424003951Z" level=warning msg="cleaning up after shim disconnected" id=92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0 namespace=k8s.io Oct 9 00:56:25.424049 containerd[1588]: time="2024-10-09T00:56:25.424011776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:56:25.438279 containerd[1588]: time="2024-10-09T00:56:25.438238251Z" level=info msg="TearDown network for sandbox \"bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5\" successfully" Oct 9 00:56:25.438279 containerd[1588]: time="2024-10-09T00:56:25.438269882Z" level=info msg="StopPodSandbox for \"bf6286b0889a6e85f252056df88a4738015eefddda29af3dc614940e1d317ec5\" returns successfully" Oct 9 00:56:25.442341 containerd[1588]: time="2024-10-09T00:56:25.442212197Z" level=info msg="StopContainer for \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\" returns successfully" Oct 9 00:56:25.442666 containerd[1588]: time="2024-10-09T00:56:25.442649505Z" level=info msg="StopPodSandbox for \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\"" Oct 9 00:56:25.442723 containerd[1588]: time="2024-10-09T00:56:25.442683160Z" level=info msg="Container to stop \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:56:25.442723 containerd[1588]: time="2024-10-09T00:56:25.442711735Z" level=info msg="Container to stop \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:56:25.442781 containerd[1588]: time="2024-10-09T00:56:25.442720311Z" level=info msg="Container to stop \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:56:25.442781 containerd[1588]: time="2024-10-09T00:56:25.442731713Z" level=info msg="Container to stop \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:56:25.442781 containerd[1588]: time="2024-10-09T00:56:25.442750078Z" level=info msg="Container to stop \"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:56:25.471354 containerd[1588]: time="2024-10-09T00:56:25.471275607Z" level=info msg="shim disconnected" id=d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10 namespace=k8s.io Oct 9 00:56:25.471354 containerd[1588]: time="2024-10-09T00:56:25.471334210Z" level=warning msg="cleaning up after shim disconnected" 
id=d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10 namespace=k8s.io Oct 9 00:56:25.471354 containerd[1588]: time="2024-10-09T00:56:25.471341583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:56:25.485000 containerd[1588]: time="2024-10-09T00:56:25.484936077Z" level=info msg="TearDown network for sandbox \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" successfully" Oct 9 00:56:25.485000 containerd[1588]: time="2024-10-09T00:56:25.484975403Z" level=info msg="StopPodSandbox for \"d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10\" returns successfully" Oct 9 00:56:25.542727 kubelet[2810]: I1009 00:56:25.542679 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b25ccd21-9459-4491-b294-b8daba6a4ca4-cilium-config-path\") pod \"b25ccd21-9459-4491-b294-b8daba6a4ca4\" (UID: \"b25ccd21-9459-4491-b294-b8daba6a4ca4\") " Oct 9 00:56:25.542727 kubelet[2810]: I1009 00:56:25.542724 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnmws\" (UniqueName: \"kubernetes.io/projected/b25ccd21-9459-4491-b294-b8daba6a4ca4-kube-api-access-hnmws\") pod \"b25ccd21-9459-4491-b294-b8daba6a4ca4\" (UID: \"b25ccd21-9459-4491-b294-b8daba6a4ca4\") " Oct 9 00:56:25.546570 kubelet[2810]: I1009 00:56:25.546511 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25ccd21-9459-4491-b294-b8daba6a4ca4-kube-api-access-hnmws" (OuterVolumeSpecName: "kube-api-access-hnmws") pod "b25ccd21-9459-4491-b294-b8daba6a4ca4" (UID: "b25ccd21-9459-4491-b294-b8daba6a4ca4"). InnerVolumeSpecName "kube-api-access-hnmws". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:56:25.546764 kubelet[2810]: I1009 00:56:25.546728 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b25ccd21-9459-4491-b294-b8daba6a4ca4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b25ccd21-9459-4491-b294-b8daba6a4ca4" (UID: "b25ccd21-9459-4491-b294-b8daba6a4ca4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 00:56:25.643408 kubelet[2810]: I1009 00:56:25.643381 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-xtables-lock\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643496 kubelet[2810]: I1009 00:56:25.643416 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-host-proc-sys-kernel\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643496 kubelet[2810]: I1009 00:56:25.643439 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx6w9\" (UniqueName: \"kubernetes.io/projected/93f8e8c4-e661-48f0-9abb-505c45725ad5-kube-api-access-xx6w9\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643496 kubelet[2810]: I1009 00:56:25.643455 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-hostproc\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643496 kubelet[2810]: I1009 00:56:25.643476 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-cgroup\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643496 kubelet[2810]: I1009 00:56:25.643497 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-lib-modules\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643610 kubelet[2810]: I1009 00:56:25.643517 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-etc-cni-netd\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643610 kubelet[2810]: I1009 00:56:25.643543 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-config-path\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643610 kubelet[2810]: I1009 00:56:25.643535 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.643610 kubelet[2810]: I1009 00:56:25.643566 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-run\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643610 kubelet[2810]: I1009 00:56:25.643587 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cni-path\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643723 kubelet[2810]: I1009 00:56:25.643550 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-hostproc" (OuterVolumeSpecName: "hostproc") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.643723 kubelet[2810]: I1009 00:56:25.643612 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93f8e8c4-e661-48f0-9abb-505c45725ad5-clustermesh-secrets\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643723 kubelet[2810]: I1009 00:56:25.643616 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.643723 kubelet[2810]: I1009 00:56:25.643635 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93f8e8c4-e661-48f0-9abb-505c45725ad5-hubble-tls\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643723 kubelet[2810]: I1009 00:56:25.643652 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.643849 kubelet[2810]: I1009 00:56:25.643658 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-bpf-maps\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643849 kubelet[2810]: I1009 00:56:25.643674 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.643849 kubelet[2810]: I1009 00:56:25.643712 2810 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-host-proc-sys-net\") pod \"93f8e8c4-e661-48f0-9abb-505c45725ad5\" (UID: \"93f8e8c4-e661-48f0-9abb-505c45725ad5\") " Oct 9 00:56:25.643849 kubelet[2810]: I1009 00:56:25.643765 2810 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.643849 kubelet[2810]: I1009 00:56:25.643782 2810 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.643849 kubelet[2810]: I1009 00:56:25.643796 2810 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.643849 kubelet[2810]: I1009 00:56:25.643812 2810 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b25ccd21-9459-4491-b294-b8daba6a4ca4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.644044 kubelet[2810]: I1009 00:56:25.643826 2810 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hnmws\" (UniqueName: \"kubernetes.io/projected/b25ccd21-9459-4491-b294-b8daba6a4ca4-kube-api-access-hnmws\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.644044 kubelet[2810]: I1009 00:56:25.643837 2810 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.644044 kubelet[2810]: I1009 00:56:25.643849 2810 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.644044 kubelet[2810]: I1009 00:56:25.643869 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.644044 kubelet[2810]: I1009 00:56:25.643893 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.644044 kubelet[2810]: I1009 00:56:25.643918 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cni-path" (OuterVolumeSpecName: "cni-path") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.644184 kubelet[2810]: I1009 00:56:25.643929 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.644184 kubelet[2810]: I1009 00:56:25.643940 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:56:25.646676 kubelet[2810]: I1009 00:56:25.646658 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f8e8c4-e661-48f0-9abb-505c45725ad5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 00:56:25.646888 kubelet[2810]: I1009 00:56:25.646858 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f8e8c4-e661-48f0-9abb-505c45725ad5-kube-api-access-xx6w9" (OuterVolumeSpecName: "kube-api-access-xx6w9") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "kube-api-access-xx6w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:56:25.647518 kubelet[2810]: I1009 00:56:25.647494 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f8e8c4-e661-48f0-9abb-505c45725ad5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:56:25.647606 kubelet[2810]: I1009 00:56:25.647584 2810 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "93f8e8c4-e661-48f0-9abb-505c45725ad5" (UID: "93f8e8c4-e661-48f0-9abb-505c45725ad5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 00:56:25.744959 kubelet[2810]: I1009 00:56:25.744924 2810 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.744959 kubelet[2810]: I1009 00:56:25.744952 2810 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.744959 kubelet[2810]: I1009 00:56:25.744964 2810 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xx6w9\" (UniqueName: \"kubernetes.io/projected/93f8e8c4-e661-48f0-9abb-505c45725ad5-kube-api-access-xx6w9\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.745145 kubelet[2810]: I1009 00:56:25.744976 2810 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.745145 kubelet[2810]: I1009 00:56:25.744985 2810 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.745145 kubelet[2810]: I1009 00:56:25.744994 2810 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.745145 kubelet[2810]: I1009 00:56:25.745003 2810 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/93f8e8c4-e661-48f0-9abb-505c45725ad5-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.745145 kubelet[2810]: I1009 00:56:25.745012 2810 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/93f8e8c4-e661-48f0-9abb-505c45725ad5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:25.745145 kubelet[2810]: I1009 00:56:25.745020 2810 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/93f8e8c4-e661-48f0-9abb-505c45725ad5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 9 00:56:26.006537 kubelet[2810]: I1009 00:56:26.006414 2810 scope.go:117] "RemoveContainer" containerID="c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43" Oct 9 00:56:26.014303 containerd[1588]: time="2024-10-09T00:56:26.014231125Z" level=info msg="RemoveContainer for \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\"" Oct 9 00:56:26.021429 containerd[1588]: time="2024-10-09T00:56:26.021384938Z" level=info msg="RemoveContainer for \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\" returns successfully" Oct 9 00:56:26.021677 kubelet[2810]: I1009 00:56:26.021644 2810 scope.go:117] "RemoveContainer" containerID="c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43" Oct 9 00:56:26.021909 containerd[1588]: time="2024-10-09T00:56:26.021865539Z" level=error msg="ContainerStatus for \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\": not found" Oct 9 00:56:26.029133 kubelet[2810]: E1009 00:56:26.029093 2810 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\": not found" containerID="c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43" Oct 9 00:56:26.029206 kubelet[2810]: I1009 00:56:26.029177 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43"} err="failed to get container status \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\": rpc error: code = NotFound desc = an error occurred when try to find container \"c16cd2d6d8ff32a2deac28dd0dc471dafbadcc8fff9ca71663e99f87ce71db43\": not found" Oct 9 00:56:26.029206 kubelet[2810]: I1009 00:56:26.029189 2810 scope.go:117] "RemoveContainer" containerID="92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0" Oct 9 00:56:26.030103 containerd[1588]: time="2024-10-09T00:56:26.030075454Z" level=info msg="RemoveContainer for \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\"" Oct 9 00:56:26.053943 containerd[1588]: time="2024-10-09T00:56:26.053905839Z" level=info msg="RemoveContainer for \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\" returns successfully" Oct 9 00:56:26.054144 kubelet[2810]: I1009 00:56:26.054119 2810 scope.go:117] "RemoveContainer" containerID="56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf" Oct 9 00:56:26.054989 containerd[1588]: time="2024-10-09T00:56:26.054961171Z" level=info msg="RemoveContainer for \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\"" Oct 9 00:56:26.062331 containerd[1588]: time="2024-10-09T00:56:26.062285230Z" level=info msg="RemoveContainer for \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\" returns successfully" Oct 9 00:56:26.062491 kubelet[2810]: I1009 00:56:26.062455 2810 scope.go:117] "RemoveContainer" containerID="756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4" Oct 9 00:56:26.063308 containerd[1588]: time="2024-10-09T00:56:26.063250639Z" level=info msg="RemoveContainer for \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\"" Oct 9 00:56:26.066333 containerd[1588]: time="2024-10-09T00:56:26.066309017Z" level=info msg="RemoveContainer for \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\" returns successfully" Oct 9 00:56:26.066437 kubelet[2810]: I1009 00:56:26.066423 2810 scope.go:117] "RemoveContainer" containerID="5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052" Oct 9 00:56:26.067061 containerd[1588]: time="2024-10-09T00:56:26.067040107Z" level=info msg="RemoveContainer for \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\"" Oct 9 00:56:26.069996 containerd[1588]: time="2024-10-09T00:56:26.069969899Z" level=info msg="RemoveContainer for \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\" returns successfully" Oct 9 00:56:26.070097 kubelet[2810]: I1009 00:56:26.070081 2810 scope.go:117] "RemoveContainer" containerID="15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d" Oct 9 00:56:26.070702 containerd[1588]: time="2024-10-09T00:56:26.070686171Z" level=info msg="RemoveContainer for 
\"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\"" Oct 9 00:56:26.073962 containerd[1588]: time="2024-10-09T00:56:26.073931016Z" level=info msg="RemoveContainer for \"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\" returns successfully" Oct 9 00:56:26.074059 kubelet[2810]: I1009 00:56:26.074042 2810 scope.go:117] "RemoveContainer" containerID="92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0" Oct 9 00:56:26.074178 containerd[1588]: time="2024-10-09T00:56:26.074150767Z" level=error msg="ContainerStatus for \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\": not found" Oct 9 00:56:26.074317 kubelet[2810]: E1009 00:56:26.074272 2810 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\": not found" containerID="92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0" Oct 9 00:56:26.074373 kubelet[2810]: I1009 00:56:26.074327 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0"} err="failed to get container status \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"92d040abdd716549b6a78263d9c6836f6510a914324a18877b38dc765dd3e6e0\": not found" Oct 9 00:56:26.074373 kubelet[2810]: I1009 00:56:26.074340 2810 scope.go:117] "RemoveContainer" containerID="56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf" Oct 9 00:56:26.074507 containerd[1588]: time="2024-10-09T00:56:26.074480078Z" level=error msg="ContainerStatus for \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\": not found" Oct 9 00:56:26.074601 kubelet[2810]: E1009 00:56:26.074587 2810 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\": not found" containerID="56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf" Oct 9 00:56:26.074639 kubelet[2810]: I1009 00:56:26.074615 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf"} err="failed to get container status \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"56d01a712d4489d6e3b346cba83951174a24d1a103ef7c8da66f71a943ce16cf\": not found" Oct 9 00:56:26.074639 kubelet[2810]: I1009 00:56:26.074626 2810 scope.go:117] "RemoveContainer" containerID="756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4" Oct 9 00:56:26.074796 containerd[1588]: time="2024-10-09T00:56:26.074771876Z" level=error msg="ContainerStatus for \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\": not found" Oct 9 00:56:26.074899 kubelet[2810]: E1009 00:56:26.074883 2810 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\": not found" containerID="756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4" Oct 9 00:56:26.074936 kubelet[2810]: I1009 00:56:26.074912 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4"} err="failed to get container status \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"756f6c744a5d5e631c004db27a36c667139683282fd9ba46531d667b406331a4\": not found" Oct 9 00:56:26.074936 kubelet[2810]: I1009 00:56:26.074927 2810 scope.go:117] "RemoveContainer" containerID="5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052" Oct 9 00:56:26.075072 containerd[1588]: time="2024-10-09T00:56:26.075047615Z" level=error msg="ContainerStatus for \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\": not found" Oct 9 00:56:26.075155 kubelet[2810]: E1009 00:56:26.075141 2810 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\": not found" containerID="5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052" Oct 9 00:56:26.075186 kubelet[2810]: I1009 00:56:26.075167 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052"} err="failed to get container status \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a6e7fc7396f3b94fcc51e7c09a157555abc8ba6ece1b8d71c8f2caa48a9a052\": not found" Oct 9 00:56:26.075186 kubelet[2810]: I1009 00:56:26.075177 2810 scope.go:117] "RemoveContainer" containerID="15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d" Oct 9 00:56:26.075324 containerd[1588]: time="2024-10-09T00:56:26.075285079Z" level=error msg="ContainerStatus for \"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\": not found" Oct 9 00:56:26.075459 kubelet[2810]: E1009 00:56:26.075444 2810 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\": not found" containerID="15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d" Oct 9 00:56:26.075504 kubelet[2810]: I1009 00:56:26.075470 2810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d"} err="failed to get container status 
\"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"15c8c1fc173af298aa4d9281efd8db52daced8712d7eec28571e4f5bcc813d0d\": not found" Oct 9 00:56:26.335198 systemd[1]: var-lib-kubelet-pods-b25ccd21\x2d9459\x2d4491\x2db294\x2db8daba6a4ca4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhnmws.mount: Deactivated successfully. Oct 9 00:56:26.335442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10-rootfs.mount: Deactivated successfully. Oct 9 00:56:26.335637 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7aa3f7bb9587887245a2f4a8c11f1f088c405189ddfab2b64e80dd658c94e10-shm.mount: Deactivated successfully. Oct 9 00:56:26.335882 systemd[1]: var-lib-kubelet-pods-93f8e8c4\x2de661\x2d48f0\x2d9abb\x2d505c45725ad5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxx6w9.mount: Deactivated successfully. Oct 9 00:56:26.336091 systemd[1]: var-lib-kubelet-pods-93f8e8c4\x2de661\x2d48f0\x2d9abb\x2d505c45725ad5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 9 00:56:26.336470 systemd[1]: var-lib-kubelet-pods-93f8e8c4\x2de661\x2d48f0\x2d9abb\x2d505c45725ad5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 9 00:56:26.818535 kubelet[2810]: E1009 00:56:26.818481 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:56:27.325500 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:46648.service - OpenSSH per-connection server daemon (10.0.0.1:46648). Oct 9 00:56:27.327477 sshd[4467]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:27.331933 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:51778.service: Deactivated successfully. Oct 9 00:56:27.334724 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 00:56:27.335055 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit. Oct 9 00:56:27.336588 systemd-logind[1569]: Removed session 25. Oct 9 00:56:27.360392 sshd[4637]: Accepted publickey for core from 10.0.0.1 port 46648 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:27.361848 sshd[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:27.365969 systemd-logind[1569]: New session 26 of user core. Oct 9 00:56:27.373542 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 9 00:56:27.820210 kubelet[2810]: I1009 00:56:27.820165 2810 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="93f8e8c4-e661-48f0-9abb-505c45725ad5" path="/var/lib/kubelet/pods/93f8e8c4-e661-48f0-9abb-505c45725ad5/volumes" Oct 9 00:56:27.821233 kubelet[2810]: I1009 00:56:27.821208 2810 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b25ccd21-9459-4491-b294-b8daba6a4ca4" path="/var/lib/kubelet/pods/b25ccd21-9459-4491-b294-b8daba6a4ca4/volumes" Oct 9 00:56:27.970160 sshd[4637]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:27.978669 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:46662.service - OpenSSH per-connection server daemon (10.0.0.1:46662). Oct 9 00:56:27.979248 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:46648.service: Deactivated successfully. 
Oct 9 00:56:27.984605 kubelet[2810]: I1009 00:56:27.984562 2810 topology_manager.go:215] "Topology Admit Handler" podUID="e9c40d91-9ea7-48f8-894b-026dc0bbe909" podNamespace="kube-system" podName="cilium-rn9bt" Oct 9 00:56:27.984735 kubelet[2810]: E1009 00:56:27.984634 2810 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93f8e8c4-e661-48f0-9abb-505c45725ad5" containerName="mount-bpf-fs" Oct 9 00:56:27.984735 kubelet[2810]: E1009 00:56:27.984646 2810 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93f8e8c4-e661-48f0-9abb-505c45725ad5" containerName="clean-cilium-state" Oct 9 00:56:27.984735 kubelet[2810]: E1009 00:56:27.984656 2810 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93f8e8c4-e661-48f0-9abb-505c45725ad5" containerName="cilium-agent" Oct 9 00:56:27.984735 kubelet[2810]: E1009 00:56:27.984664 2810 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93f8e8c4-e661-48f0-9abb-505c45725ad5" containerName="apply-sysctl-overwrites" Oct 9 00:56:27.984735 kubelet[2810]: E1009 00:56:27.984673 2810 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b25ccd21-9459-4491-b294-b8daba6a4ca4" containerName="cilium-operator" Oct 9 00:56:27.984735 kubelet[2810]: E1009 00:56:27.984681 2810 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="93f8e8c4-e661-48f0-9abb-505c45725ad5" containerName="mount-cgroup" Oct 9 00:56:27.984735 kubelet[2810]: I1009 00:56:27.984706 2810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b25ccd21-9459-4491-b294-b8daba6a4ca4" containerName="cilium-operator" Oct 9 00:56:27.984735 kubelet[2810]: I1009 00:56:27.984716 2810 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8e8c4-e661-48f0-9abb-505c45725ad5" containerName="cilium-agent" Oct 9 00:56:27.993112 systemd[1]: session-26.scope: Deactivated successfully. Oct 9 00:56:27.993873 systemd-logind[1569]: Session 26 logged out. Waiting for processes to exit. Oct 9 00:56:27.997924 systemd-logind[1569]: Removed session 26. Oct 9 00:56:28.020892 sshd[4651]: Accepted publickey for core from 10.0.0.1 port 46662 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:28.022633 sshd[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:28.026562 systemd-logind[1569]: New session 27 of user core. Oct 9 00:56:28.037699 systemd[1]: Started session-27.scope - Session 27 of User core. 
Oct 9 00:56:28.056803 kubelet[2810]: I1009 00:56:28.056750 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-cilium-cgroup\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.056803 kubelet[2810]: I1009 00:56:28.056798 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-hostproc\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.056893 kubelet[2810]: I1009 00:56:28.056819 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-cni-path\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.056893 kubelet[2810]: I1009 00:56:28.056852 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-xtables-lock\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.056893 kubelet[2810]: I1009 00:56:28.056871 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9c40d91-9ea7-48f8-894b-026dc0bbe909-clustermesh-secrets\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.056893 kubelet[2810]: I1009 00:56:28.056891 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-lib-modules\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.056989 kubelet[2810]: I1009 00:56:28.056909 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-cilium-run\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.056989 kubelet[2810]: I1009 00:56:28.056928 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-bpf-maps\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.056989 kubelet[2810]: I1009 00:56:28.056954 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-host-proc-sys-net\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.056989 kubelet[2810]: I1009 00:56:28.056979 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc2jd\" (UniqueName: 
\"kubernetes.io/projected/e9c40d91-9ea7-48f8-894b-026dc0bbe909-kube-api-access-dc2jd\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.057076 kubelet[2810]: I1009 00:56:28.056998 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-host-proc-sys-kernel\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.057076 kubelet[2810]: I1009 00:56:28.057017 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9c40d91-9ea7-48f8-894b-026dc0bbe909-hubble-tls\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.057076 kubelet[2810]: I1009 00:56:28.057034 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9c40d91-9ea7-48f8-894b-026dc0bbe909-etc-cni-netd\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.057076 kubelet[2810]: I1009 00:56:28.057052 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9c40d91-9ea7-48f8-894b-026dc0bbe909-cilium-config-path\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.057076 kubelet[2810]: I1009 00:56:28.057070 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e9c40d91-9ea7-48f8-894b-026dc0bbe909-cilium-ipsec-secrets\") pod \"cilium-rn9bt\" (UID: \"e9c40d91-9ea7-48f8-894b-026dc0bbe909\") " pod="kube-system/cilium-rn9bt" Oct 9 00:56:28.088721 sshd[4651]: pam_unix(sshd:session): session closed for user core Oct 9 00:56:28.096506 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:46678.service - OpenSSH per-connection server daemon (10.0.0.1:46678). Oct 9 00:56:28.097030 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:46662.service: Deactivated successfully. Oct 9 00:56:28.099075 systemd[1]: session-27.scope: Deactivated successfully. Oct 9 00:56:28.100850 systemd-logind[1569]: Session 27 logged out. Waiting for processes to exit. Oct 9 00:56:28.101863 systemd-logind[1569]: Removed session 27. Oct 9 00:56:28.126534 sshd[4660]: Accepted publickey for core from 10.0.0.1 port 46678 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:56:28.128054 sshd[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:56:28.132360 systemd-logind[1569]: New session 28 of user core. Oct 9 00:56:28.141578 systemd[1]: Started session-28.scope - Session 28 of User core. 
Oct 9 00:56:28.292672 kubelet[2810]: E1009 00:56:28.292609 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:28.293217 containerd[1588]: time="2024-10-09T00:56:28.293171926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rn9bt,Uid:e9c40d91-9ea7-48f8-894b-026dc0bbe909,Namespace:kube-system,Attempt:0,}"
Oct 9 00:56:28.316137 containerd[1588]: time="2024-10-09T00:56:28.315316678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:56:28.316137 containerd[1588]: time="2024-10-09T00:56:28.316067794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:56:28.316137 containerd[1588]: time="2024-10-09T00:56:28.316088244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:56:28.316340 containerd[1588]: time="2024-10-09T00:56:28.316221619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:56:28.357004 containerd[1588]: time="2024-10-09T00:56:28.356959086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rn9bt,Uid:e9c40d91-9ea7-48f8-894b-026dc0bbe909,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\""
Oct 9 00:56:28.357644 kubelet[2810]: E1009 00:56:28.357623 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:28.360107 containerd[1588]: time="2024-10-09T00:56:28.360071491Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 9 00:56:28.377810 containerd[1588]: time="2024-10-09T00:56:28.377744547Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"27bd5298342cbedbb14242b3ab3aa26316443e62793b1a93b4ec4f3d20f93e27\""
Oct 9 00:56:28.378380 containerd[1588]: time="2024-10-09T00:56:28.378330006Z" level=info msg="StartContainer for \"27bd5298342cbedbb14242b3ab3aa26316443e62793b1a93b4ec4f3d20f93e27\""
Oct 9 00:56:28.431063 containerd[1588]: time="2024-10-09T00:56:28.431025644Z" level=info msg="StartContainer for \"27bd5298342cbedbb14242b3ab3aa26316443e62793b1a93b4ec4f3d20f93e27\" returns successfully"
Oct 9 00:56:28.471640 containerd[1588]: time="2024-10-09T00:56:28.471558701Z" level=info msg="shim disconnected" id=27bd5298342cbedbb14242b3ab3aa26316443e62793b1a93b4ec4f3d20f93e27 namespace=k8s.io
Oct 9 00:56:28.471640 containerd[1588]: time="2024-10-09T00:56:28.471625899Z" level=warning msg="cleaning up after shim disconnected" id=27bd5298342cbedbb14242b3ab3aa26316443e62793b1a93b4ec4f3d20f93e27 namespace=k8s.io
Oct 9 00:56:28.471640 containerd[1588]: time="2024-10-09T00:56:28.471636630Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:56:28.886113 kubelet[2810]: E1009 00:56:28.886079 2810 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 9 00:56:29.015658 kubelet[2810]: E1009 00:56:29.015627 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:29.017501 containerd[1588]: time="2024-10-09T00:56:29.017456405Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 9 00:56:29.156941 containerd[1588]: time="2024-10-09T00:56:29.156805870Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4b78074398632db9e24aed838871570212b22d771f296a4d28684d67e6acdece\""
Oct 9 00:56:29.157424 containerd[1588]: time="2024-10-09T00:56:29.157362905Z" level=info msg="StartContainer for \"4b78074398632db9e24aed838871570212b22d771f296a4d28684d67e6acdece\""
Oct 9 00:56:29.214068 containerd[1588]: time="2024-10-09T00:56:29.214020628Z" level=info msg="StartContainer for \"4b78074398632db9e24aed838871570212b22d771f296a4d28684d67e6acdece\" returns successfully"
Oct 9 00:56:29.239681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b78074398632db9e24aed838871570212b22d771f296a4d28684d67e6acdece-rootfs.mount: Deactivated successfully.
Oct 9 00:56:29.246379 containerd[1588]: time="2024-10-09T00:56:29.246316963Z" level=info msg="shim disconnected" id=4b78074398632db9e24aed838871570212b22d771f296a4d28684d67e6acdece namespace=k8s.io
Oct 9 00:56:29.246379 containerd[1588]: time="2024-10-09T00:56:29.246371186Z" level=warning msg="cleaning up after shim disconnected" id=4b78074398632db9e24aed838871570212b22d771f296a4d28684d67e6acdece namespace=k8s.io
Oct 9 00:56:29.246379 containerd[1588]: time="2024-10-09T00:56:29.246381356Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:56:30.018958 kubelet[2810]: E1009 00:56:30.018917 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:30.020982 containerd[1588]: time="2024-10-09T00:56:30.020945561Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 9 00:56:30.270702 containerd[1588]: time="2024-10-09T00:56:30.270565665Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"710078d88d79777a515b89d352f15554111d523c2f7d36223580b5c8566db6a0\""
Oct 9 00:56:30.271231 containerd[1588]: time="2024-10-09T00:56:30.271189097Z" level=info msg="StartContainer for \"710078d88d79777a515b89d352f15554111d523c2f7d36223580b5c8566db6a0\""
Oct 9 00:56:30.332287 containerd[1588]: time="2024-10-09T00:56:30.332238942Z" level=info msg="StartContainer for \"710078d88d79777a515b89d352f15554111d523c2f7d36223580b5c8566db6a0\" returns successfully"
Oct 9 00:56:30.356090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-710078d88d79777a515b89d352f15554111d523c2f7d36223580b5c8566db6a0-rootfs.mount: Deactivated successfully.
Oct 9 00:56:30.359896 containerd[1588]: time="2024-10-09T00:56:30.359777222Z" level=info msg="shim disconnected" id=710078d88d79777a515b89d352f15554111d523c2f7d36223580b5c8566db6a0 namespace=k8s.io
Oct 9 00:56:30.360014 containerd[1588]: time="2024-10-09T00:56:30.359934583Z" level=warning msg="cleaning up after shim disconnected" id=710078d88d79777a515b89d352f15554111d523c2f7d36223580b5c8566db6a0 namespace=k8s.io
Oct 9 00:56:30.360014 containerd[1588]: time="2024-10-09T00:56:30.359948689Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:56:31.022806 kubelet[2810]: E1009 00:56:31.022765 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:31.024757 containerd[1588]: time="2024-10-09T00:56:31.024667097Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 9 00:56:31.603648 containerd[1588]: time="2024-10-09T00:56:31.603584421Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e139ac267d442c3d3586cea4070a41f6156b2ab5c1d8800e2d0922cc3887a343\""
Oct 9 00:56:31.604420 containerd[1588]: time="2024-10-09T00:56:31.604330897Z" level=info msg="StartContainer for \"e139ac267d442c3d3586cea4070a41f6156b2ab5c1d8800e2d0922cc3887a343\""
Oct 9 00:56:31.786014 containerd[1588]: time="2024-10-09T00:56:31.785955153Z" level=info msg="StartContainer for \"e139ac267d442c3d3586cea4070a41f6156b2ab5c1d8800e2d0922cc3887a343\" returns successfully"
Oct 9 00:56:31.803519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e139ac267d442c3d3586cea4070a41f6156b2ab5c1d8800e2d0922cc3887a343-rootfs.mount: Deactivated successfully.
Oct 9 00:56:32.025894 kubelet[2810]: E1009 00:56:32.025780 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:32.026885 containerd[1588]: time="2024-10-09T00:56:32.026823558Z" level=info msg="shim disconnected" id=e139ac267d442c3d3586cea4070a41f6156b2ab5c1d8800e2d0922cc3887a343 namespace=k8s.io
Oct 9 00:56:32.026885 containerd[1588]: time="2024-10-09T00:56:32.026870366Z" level=warning msg="cleaning up after shim disconnected" id=e139ac267d442c3d3586cea4070a41f6156b2ab5c1d8800e2d0922cc3887a343 namespace=k8s.io
Oct 9 00:56:32.026885 containerd[1588]: time="2024-10-09T00:56:32.026878513Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:56:32.818674 kubelet[2810]: E1009 00:56:32.818200 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:33.029926 kubelet[2810]: E1009 00:56:33.029897 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:33.031708 containerd[1588]: time="2024-10-09T00:56:33.031670215Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 9 00:56:33.044992 containerd[1588]: time="2024-10-09T00:56:33.044949113Z" level=info msg="CreateContainer within sandbox \"f3f0d1c5338e837620a101a273a0d857050debf7eb5f22ec8c3f3207841d0949\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9b5f1773c32819ace74d1e37da039c95f636914520da43b2d09f35eb78ee406\""
Oct 9 00:56:33.045565 containerd[1588]: time="2024-10-09T00:56:33.045526605Z" level=info msg="StartContainer for \"f9b5f1773c32819ace74d1e37da039c95f636914520da43b2d09f35eb78ee406\""
Oct 9 00:56:33.100514 containerd[1588]: time="2024-10-09T00:56:33.100472208Z" level=info msg="StartContainer for \"f9b5f1773c32819ace74d1e37da039c95f636914520da43b2d09f35eb78ee406\" returns successfully"
Oct 9 00:56:33.501322 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Oct 9 00:56:34.034200 kubelet[2810]: E1009 00:56:34.034174 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:34.263006 kubelet[2810]: I1009 00:56:34.262958 2810 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rn9bt" podStartSLOduration=7.262922185 podStartE2EDuration="7.262922185s" podCreationTimestamp="2024-10-09 00:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:56:34.26287202 +0000 UTC m=+90.551183928" watchObservedRunningTime="2024-10-09 00:56:34.262922185 +0000 UTC m=+90.551234093"
Oct 9 00:56:35.036103 kubelet[2810]: E1009 00:56:35.036072 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:36.536627 systemd-networkd[1256]: lxc_health: Link UP
Oct 9 00:56:36.542651 systemd-networkd[1256]: lxc_health: Gained carrier
Oct 9 00:56:37.016197 kubelet[2810]: E1009 00:56:37.016141 2810 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57166->127.0.0.1:42257: write tcp 127.0.0.1:57166->127.0.0.1:42257: write: broken pipe
Oct 9 00:56:38.295359 kubelet[2810]: E1009 00:56:38.295205 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:38.570520 systemd-networkd[1256]: lxc_health: Gained IPv6LL
Oct 9 00:56:39.043919 kubelet[2810]: E1009 00:56:39.043892 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:39.818502 kubelet[2810]: E1009 00:56:39.818447 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:40.045242 kubelet[2810]: E1009 00:56:40.045214 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:41.818986 kubelet[2810]: E1009 00:56:41.818886 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:56:43.429952 sshd[4660]: pam_unix(sshd:session): session closed for user core
Oct 9 00:56:43.433933 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:46678.service: Deactivated successfully.
Oct 9 00:56:43.436223 systemd-logind[1569]: Session 28 logged out. Waiting for processes to exit.
Oct 9 00:56:43.436403 systemd[1]: session-28.scope: Deactivated successfully.
Oct 9 00:56:43.437336 systemd-logind[1569]: Removed session 28.