Jan 29 16:20:38.904666 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025 Jan 29 16:20:38.904697 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:20:38.904712 kernel: BIOS-provided physical RAM map: Jan 29 16:20:38.904720 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 29 16:20:38.904728 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 29 16:20:38.904737 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 29 16:20:38.904747 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 29 16:20:38.904756 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 29 16:20:38.904765 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 29 16:20:38.904774 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 29 16:20:38.904783 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 29 16:20:38.904796 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 29 16:20:38.904805 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 29 16:20:38.904814 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 29 16:20:38.904825 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 29 16:20:38.904835 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 29 16:20:38.904849 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Jan 29 16:20:38.904858 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Jan 29 16:20:38.904867 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Jan 29 16:20:38.904876 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Jan 29 16:20:38.904885 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 29 16:20:38.904895 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 29 16:20:38.904904 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 29 16:20:38.904913 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 16:20:38.904922 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 29 16:20:38.904931 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 16:20:38.904940 kernel: NX (Execute Disable) protection: active Jan 29 16:20:38.904954 kernel: APIC: Static calls initialized Jan 29 16:20:38.904963 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Jan 29 16:20:38.904973 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Jan 29 16:20:38.904982 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Jan 29 16:20:38.904991 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Jan 29 16:20:38.904999 kernel: extended physical RAM map: Jan 29 16:20:38.905008 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 29 16:20:38.905018 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Jan 29 16:20:38.905027 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 29 16:20:38.905036 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 29 16:20:38.905045 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 29 16:20:38.905054 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 29 16:20:38.905066 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 29 16:20:38.905079 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Jan 29 16:20:38.905088 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Jan 29 16:20:38.905095 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Jan 29 16:20:38.905102 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Jan 29 16:20:38.905109 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Jan 29 16:20:38.905118 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 29 16:20:38.905125 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 29 16:20:38.905133 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 29 16:20:38.905140 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 29 16:20:38.905147 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 29 16:20:38.905154 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Jan 29 16:20:38.905161 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Jan 29 16:20:38.905168 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Jan 29 16:20:38.905175 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Jan 29 16:20:38.905184 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 29 16:20:38.905192 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 29 16:20:38.905199 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 29 16:20:38.905206 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 16:20:38.905213 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 29 16:20:38.905220 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 16:20:38.905227 kernel: efi: EFI v2.7 by EDK II Jan 29 16:20:38.905234 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Jan 29 16:20:38.905243 kernel: random: crng init done Jan 29 16:20:38.905252 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 29 16:20:38.905262 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 29 16:20:38.905271 kernel: secureboot: Secure boot disabled Jan 29 16:20:38.905291 kernel: SMBIOS 2.8 present. 
Jan 29 16:20:38.905301 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 29 16:20:38.905311 kernel: Hypervisor detected: KVM Jan 29 16:20:38.905320 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 16:20:38.905329 kernel: kvm-clock: using sched offset of 2960786333 cycles Jan 29 16:20:38.905336 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 16:20:38.905344 kernel: tsc: Detected 2794.750 MHz processor Jan 29 16:20:38.905351 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 16:20:38.905359 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 16:20:38.905366 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 29 16:20:38.905376 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 29 16:20:38.905384 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 16:20:38.905391 kernel: Using GB pages for direct mapping Jan 29 16:20:38.905400 kernel: ACPI: Early table checksum verification disabled Jan 29 16:20:38.905410 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 29 16:20:38.905419 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 29 16:20:38.905429 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:38.905439 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:38.905448 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 29 16:20:38.905459 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:38.905466 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:38.905473 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:38.905481 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:38.905488 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 29 16:20:38.905496 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 29 16:20:38.905503 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 29 16:20:38.905514 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 29 16:20:38.905523 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 29 16:20:38.905537 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 29 16:20:38.905546 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 29 16:20:38.905556 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 29 16:20:38.905566 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 29 16:20:38.905576 kernel: No NUMA configuration found Jan 29 16:20:38.905585 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 29 16:20:38.905595 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Jan 29 16:20:38.905605 kernel: Zone ranges: Jan 29 16:20:38.905613 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 16:20:38.905623 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 29 16:20:38.905630 kernel: Normal empty Jan 29 16:20:38.905637 kernel: Movable zone start for each node Jan 29 16:20:38.905664 kernel: Early memory node ranges Jan 29 16:20:38.905671 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Jan 29 16:20:38.905679 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 29 16:20:38.905686 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 29 16:20:38.905693 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 29 16:20:38.905700 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 29 16:20:38.905708 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 29 16:20:38.905718 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Jan 29 16:20:38.905725 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Jan 29 16:20:38.905732 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 29 16:20:38.905740 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 16:20:38.905747 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 29 16:20:38.905762 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 29 16:20:38.905772 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 16:20:38.905780 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 29 16:20:38.905787 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 29 16:20:38.905795 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 29 16:20:38.905802 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 29 16:20:38.905810 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 29 16:20:38.905820 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 16:20:38.905827 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 16:20:38.905835 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 16:20:38.905843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 16:20:38.905853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 16:20:38.905860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 16:20:38.905868 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 16:20:38.905875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 16:20:38.905883 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 16:20:38.905891 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 16:20:38.905900 kernel: TSC deadline timer available Jan 29 16:20:38.905911 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 16:20:38.905921 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 16:20:38.905930 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 16:20:38.905940 kernel: kvm-guest: setup PV sched yield Jan 29 16:20:38.905948 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 29 16:20:38.905955 kernel: Booting paravirtualized kernel on KVM Jan 29 16:20:38.905963 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 16:20:38.905971 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 16:20:38.905978 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 16:20:38.905986 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 16:20:38.905993 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 16:20:38.906001 kernel: kvm-guest: PV spinlocks enabled Jan 29 16:20:38.906011 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 16:20:38.906020 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:20:38.906028 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 16:20:38.906035 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 16:20:38.906043 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 16:20:38.906051 kernel: Fallback order for Node 0: 0 Jan 29 16:20:38.906059 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Jan 29 16:20:38.906067 kernel: Policy zone: DMA32 Jan 29 16:20:38.906079 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 16:20:38.906090 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 177824K reserved, 0K cma-reserved) Jan 29 16:20:38.906101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 16:20:38.906111 kernel: ftrace: allocating 37893 entries in 149 pages Jan 29 16:20:38.906121 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 16:20:38.906129 kernel: Dynamic Preempt: voluntary Jan 29 16:20:38.906137 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 16:20:38.906145 kernel: rcu: RCU event tracing is enabled. Jan 29 16:20:38.906153 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 16:20:38.906164 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 16:20:38.906171 kernel: Rude variant of Tasks RCU enabled. Jan 29 16:20:38.906179 kernel: Tracing variant of Tasks RCU enabled. Jan 29 16:20:38.906187 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 16:20:38.906194 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 16:20:38.906202 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 16:20:38.906209 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 16:20:38.906219 kernel: Console: colour dummy device 80x25 Jan 29 16:20:38.906229 kernel: printk: console [ttyS0] enabled Jan 29 16:20:38.906242 kernel: ACPI: Core revision 20230628 Jan 29 16:20:38.906253 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 16:20:38.906263 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 16:20:38.906273 kernel: x2apic enabled Jan 29 16:20:38.906293 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 16:20:38.906304 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 16:20:38.906315 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 16:20:38.906325 kernel: kvm-guest: setup PV IPIs Jan 29 16:20:38.906336 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 16:20:38.906351 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 16:20:38.906362 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 29 16:20:38.906373 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 16:20:38.906383 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 16:20:38.906394 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 16:20:38.906405 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 16:20:38.906416 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 16:20:38.906426 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 16:20:38.906436 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 16:20:38.906451 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 16:20:38.906461 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 16:20:38.906471 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 16:20:38.906479 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 16:20:38.906487 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 16:20:38.906495 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 16:20:38.906503 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 16:20:38.906511 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 16:20:38.906521 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 16:20:38.906529 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 16:20:38.906536 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 16:20:38.906544 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 16:20:38.906554 kernel: Freeing SMP alternatives memory: 32K Jan 29 16:20:38.906564 kernel: pid_max: default: 32768 minimum: 301 Jan 29 16:20:38.906575 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 16:20:38.906585 kernel: landlock: Up and running. Jan 29 16:20:38.906595 kernel: SELinux: Initializing. Jan 29 16:20:38.906608 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 16:20:38.906618 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 16:20:38.906626 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 16:20:38.906634 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 16:20:38.906642 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 16:20:38.906663 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 16:20:38.906671 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 16:20:38.906679 kernel: ... version: 0 Jan 29 16:20:38.906686 kernel: ... bit width: 48 Jan 29 16:20:38.906696 kernel: ... generic registers: 6 Jan 29 16:20:38.906704 kernel: ... value mask: 0000ffffffffffff Jan 29 16:20:38.906711 kernel: ... max period: 00007fffffffffff Jan 29 16:20:38.906719 kernel: ... fixed-purpose events: 0 Jan 29 16:20:38.906726 kernel: ... 
event mask: 000000000000003f Jan 29 16:20:38.906734 kernel: signal: max sigframe size: 1776 Jan 29 16:20:38.906741 kernel: rcu: Hierarchical SRCU implementation. Jan 29 16:20:38.906749 kernel: rcu: Max phase no-delay instances is 400. Jan 29 16:20:38.906757 kernel: smp: Bringing up secondary CPUs ... Jan 29 16:20:38.906767 kernel: smpboot: x86: Booting SMP configuration: Jan 29 16:20:38.906774 kernel: .... node #0, CPUs: #1 #2 #3 Jan 29 16:20:38.906782 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 16:20:38.906790 kernel: smpboot: Max logical packages: 1 Jan 29 16:20:38.906797 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 29 16:20:38.906805 kernel: devtmpfs: initialized Jan 29 16:20:38.906812 kernel: x86/mm: Memory block size: 128MB Jan 29 16:20:38.906820 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 29 16:20:38.906828 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 29 16:20:38.906835 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 29 16:20:38.906845 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 29 16:20:38.906853 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Jan 29 16:20:38.906861 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 29 16:20:38.906869 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 16:20:38.906876 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 16:20:38.906884 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 16:20:38.906892 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 16:20:38.906899 kernel: audit: initializing netlink subsys (disabled) Jan 29 16:20:38.906909 kernel: audit: type=2000 audit(1738167638.376:1): state=initialized audit_enabled=0 res=1 Jan 29 16:20:38.906917 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 16:20:38.906924 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 16:20:38.906932 kernel: cpuidle: using governor menu Jan 29 16:20:38.906940 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 16:20:38.906947 kernel: dca service started, version 1.12.1 Jan 29 16:20:38.906955 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Jan 29 16:20:38.906963 kernel: PCI: Using configuration type 1 for base access Jan 29 16:20:38.906970 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 16:20:38.906980 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 16:20:38.906988 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 16:20:38.906996 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 16:20:38.907003 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 16:20:38.907011 kernel: ACPI: Added _OSI(Module Device) Jan 29 16:20:38.907018 kernel: ACPI: Added _OSI(Processor Device) Jan 29 16:20:38.907026 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 16:20:38.907033 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 16:20:38.907041 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 16:20:38.907051 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 16:20:38.907058 kernel: ACPI: Interpreter enabled Jan 29 16:20:38.907066 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 16:20:38.907073 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 16:20:38.907081 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 16:20:38.907089 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 16:20:38.907097 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 16:20:38.907104 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 16:20:38.907313 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 16:20:38.907454 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 16:20:38.907588 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 16:20:38.907600 kernel: PCI host bridge to bus 0000:00 Jan 29 16:20:38.907746 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 16:20:38.907861 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 16:20:38.907975 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 16:20:38.908093 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 29 16:20:38.908206 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 29 16:20:38.908341 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 29 16:20:38.908458 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 16:20:38.908612 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 16:20:38.908773 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 16:20:38.908900 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 29 16:20:38.909029 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 29 16:20:38.909155 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 29 16:20:38.909278 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 29 16:20:38.909414 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 16:20:38.909556 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 16:20:38.909710 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 29 16:20:38.909842 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 29 16:20:38.909968 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Jan 29 16:20:38.910101 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 16:20:38.910227 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 
29 16:20:38.910361 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 29 16:20:38.910487 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Jan 29 16:20:38.910627 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 16:20:38.910776 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 29 16:20:38.910904 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 29 16:20:38.911030 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 29 16:20:38.911169 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 29 16:20:38.911320 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 16:20:38.911459 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 16:20:38.911606 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 16:20:38.911756 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 29 16:20:38.911881 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 29 16:20:38.912013 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 16:20:38.912180 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 29 16:20:38.912193 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 16:20:38.912201 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 16:20:38.912209 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 16:20:38.912221 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 16:20:38.912229 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 16:20:38.912236 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 16:20:38.912244 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 16:20:38.912251 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 16:20:38.912259 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 16:20:38.912266 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 16:20:38.912274 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 16:20:38.912281 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 16:20:38.912300 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 16:20:38.912308 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 16:20:38.912315 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 16:20:38.912323 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 16:20:38.912330 kernel: iommu: Default domain type: Translated Jan 29 16:20:38.912338 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 16:20:38.912345 kernel: efivars: Registered efivars operations Jan 29 16:20:38.912353 kernel: PCI: Using ACPI for IRQ routing Jan 29 16:20:38.912369 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 16:20:38.912381 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 29 16:20:38.912390 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 29 16:20:38.912400 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Jan 29 16:20:38.912411 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Jan 29 16:20:38.912422 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 29 16:20:38.912432 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 29 16:20:38.912439 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Jan 29 16:20:38.912447 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 29 16:20:38.912588 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 16:20:38.912767 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 16:20:38.912891 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 16:20:38.912902 kernel: vgaarb: loaded Jan 29 16:20:38.912910 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 16:20:38.912918 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 16:20:38.912926 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 16:20:38.912933 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 16:20:38.912941 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 16:20:38.912949 kernel: pnp: PnP ACPI init Jan 29 16:20:38.913091 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 29 16:20:38.913103 kernel: pnp: PnP ACPI: found 6 devices Jan 29 16:20:38.913111 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 16:20:38.913119 kernel: NET: Registered PF_INET protocol family Jan 29 16:20:38.913148 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 16:20:38.913159 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 16:20:38.913167 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 16:20:38.913175 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 16:20:38.913186 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 16:20:38.913194 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 16:20:38.913202 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 16:20:38.913210 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 16:20:38.913218 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 16:20:38.913226 kernel: NET: Registered PF_XDP protocol family Jan 29 16:20:38.913400 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 29 16:20:38.913537 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 29 16:20:38.913682 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 16:20:38.913800 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 16:20:38.913919 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 16:20:38.914031 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 29 16:20:38.914145 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 29 16:20:38.914265 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 29 16:20:38.914277 kernel: PCI: CLS 0 bytes, default 64 Jan 29 16:20:38.914293 kernel: Initialise system trusted keyrings Jan 29 16:20:38.914306 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 16:20:38.914314 kernel: Key type asymmetric registered Jan 29 16:20:38.914321 kernel: Asymmetric key parser 'x509' registered Jan 29 16:20:38.914329 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 16:20:38.914337 kernel: io scheduler mq-deadline registered Jan 29 16:20:38.914345 kernel: io scheduler kyber registered Jan 29 16:20:38.914353 kernel: io scheduler bfq registered Jan 29 
16:20:38.914361 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 16:20:38.914369 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 16:20:38.914380 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 16:20:38.914390 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 16:20:38.914398 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:20:38.914406 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 16:20:38.914415 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 16:20:38.914422 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 16:20:38.914433 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 16:20:38.914580 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 16:20:38.914594 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 16:20:38.914728 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 16:20:38.914939 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T16:20:38 UTC (1738167638) Jan 29 16:20:38.915081 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 29 16:20:38.915093 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 16:20:38.915101 kernel: efifb: probing for efifb Jan 29 16:20:38.915113 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 29 16:20:38.915121 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 29 16:20:38.915129 kernel: efifb: scrolling: redraw Jan 29 16:20:38.915137 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 29 16:20:38.915145 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 16:20:38.915153 kernel: fb0: EFI VGA frame buffer device Jan 29 16:20:38.915161 kernel: pstore: Using crash dump compression: deflate Jan 29 16:20:38.915169 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 16:20:38.915177 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:20:38.915187 kernel: Segment Routing with IPv6 Jan 29 16:20:38.915194 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:20:38.915213 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:20:38.915224 kernel: Key type dns_resolver registered Jan 29 16:20:38.915240 kernel: IPI shorthand broadcast: enabled Jan 29 16:20:38.915248 kernel: sched_clock: Marking stable (583002715, 151743259)->(786105157, -51359183) Jan 29 16:20:38.915256 kernel: registered taskstats version 1 Jan 29 16:20:38.915264 kernel: Loading compiled-in X.509 certificates Jan 29 16:20:38.915272 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340' Jan 29 16:20:38.915291 kernel: Key type .fscrypt registered Jan 29 16:20:38.915299 kernel: Key type fscrypt-provisioning registered Jan 29 16:20:38.915307 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 16:20:38.915315 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:20:38.915323 kernel: ima: No architecture policies found Jan 29 16:20:38.915331 kernel: clk: Disabling unused clocks Jan 29 16:20:38.915338 kernel: Freeing unused kernel image (initmem) memory: 43472K Jan 29 16:20:38.915346 kernel: Write protecting the kernel read-only data: 38912k Jan 29 16:20:38.915355 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Jan 29 16:20:38.915365 kernel: Run /init as init process Jan 29 16:20:38.915373 kernel: with arguments: Jan 29 16:20:38.915381 kernel: /init Jan 29 16:20:38.915389 kernel: with environment: Jan 29 16:20:38.915397 kernel: HOME=/ Jan 29 16:20:38.915405 kernel: TERM=linux Jan 29 16:20:38.915412 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:20:38.915421 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:20:38.915435 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:20:38.915444 systemd[1]: Detected virtualization kvm. Jan 29 16:20:38.915452 systemd[1]: Detected architecture x86-64. Jan 29 16:20:38.915461 systemd[1]: Running in initrd. Jan 29 16:20:38.915469 systemd[1]: No hostname configured, using default hostname. Jan 29 16:20:38.915477 systemd[1]: Hostname set to . Jan 29 16:20:38.915485 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:20:38.915494 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:20:38.915506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:20:38.915515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:20:38.915524 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:20:38.915533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:20:38.915541 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:20:38.915553 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:20:38.915567 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:20:38.915583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 16:20:38.915595 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:20:38.915605 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:20:38.915613 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:20:38.915622 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:20:38.915630 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:20:38.915638 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:20:38.915660 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:20:38.915672 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:20:38.915681 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 29 16:20:38.915689 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:20:38.915698 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:20:38.915706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:20:38.915715 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:20:38.915723 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:20:38.915731 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:20:38.915740 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:20:38.915751 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:20:38.915759 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:20:38.915767 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:20:38.915776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:20:38.915784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:20:38.915792 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:20:38.915801 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:20:38.915840 systemd-journald[192]: Collecting audit messages is disabled. Jan 29 16:20:38.915863 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:20:38.915871 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:20:38.915880 systemd-journald[192]: Journal started Jan 29 16:20:38.915898 systemd-journald[192]: Runtime Journal (/run/log/journal/b6226dfcd2714afd9525ba5a1033873f) is 6M, max 48.2M, 42.2M free. Jan 29 16:20:38.915264 systemd-modules-load[195]: Inserted module 'overlay' Jan 29 16:20:38.917930 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:20:38.919736 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:20:38.922728 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:20:38.928879 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:20:38.933406 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:20:38.940918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:20:38.942500 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:20:38.951673 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 16:20:38.953514 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 29 16:20:38.954590 kernel: Bridge firewalling registered Jan 29 16:20:38.964886 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:20:38.965703 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:20:38.967462 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:20:38.978634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:20:38.987788 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 16:20:38.989058 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:20:38.990310 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:20:39.006406 dracut-cmdline[233]: dracut-dracut-053 Jan 29 16:20:39.009510 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:20:39.032240 systemd-resolved[230]: Positive Trust Anchors: Jan 29 16:20:39.032256 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:20:39.032296 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:20:39.043011 systemd-resolved[230]: Defaulting to hostname 'linux'. Jan 29 16:20:39.045166 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:20:39.045479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:20:39.095679 kernel: SCSI subsystem initialized Jan 29 16:20:39.105673 kernel: Loading iSCSI transport class v2.0-870. Jan 29 16:20:39.115684 kernel: iscsi: registered transport (tcp) Jan 29 16:20:39.136673 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:20:39.136689 kernel: QLogic iSCSI HBA Driver Jan 29 16:20:39.188684 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 16:20:39.199785 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:20:39.226111 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 16:20:39.226145 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:20:39.227143 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:20:39.269678 kernel: raid6: avx2x4 gen() 30291 MB/s Jan 29 16:20:39.286669 kernel: raid6: avx2x2 gen() 30884 MB/s Jan 29 16:20:39.303751 kernel: raid6: avx2x1 gen() 26034 MB/s Jan 29 16:20:39.303774 kernel: raid6: using algorithm avx2x2 gen() 30884 MB/s Jan 29 16:20:39.321758 kernel: raid6: .... xor() 20005 MB/s, rmw enabled Jan 29 16:20:39.321789 kernel: raid6: using avx2x2 recovery algorithm Jan 29 16:20:39.341675 kernel: xor: automatically using best checksumming function avx Jan 29 16:20:39.496680 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:20:39.509925 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:20:39.517895 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:20:39.532896 systemd-udevd[416]: Using default interface naming scheme 'v255'. 
Jan 29 16:20:39.538199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:20:39.549817 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:20:39.564371 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Jan 29 16:20:39.596692 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:20:39.614812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:20:39.683680 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:20:39.693819 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:20:39.711530 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:20:39.713378 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:20:39.716135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:20:39.719110 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:20:39.726672 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 16:20:39.751065 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 16:20:39.752638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:20:39.752674 kernel: libata version 3.00 loaded. Jan 29 16:20:39.752687 kernel: GPT:9289727 != 19775487 Jan 29 16:20:39.752698 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 16:20:39.752715 kernel: GPT:9289727 != 19775487 Jan 29 16:20:39.752725 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:20:39.752735 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:20:39.752745 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 16:20:39.732744 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:20:39.745788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:20:39.757703 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 16:20:39.789379 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 16:20:39.789400 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 16:20:39.789559 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 16:20:39.789727 kernel: scsi host0: ahci Jan 29 16:20:39.789924 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 29 16:20:39.789947 kernel: AES CTR mode by8 optimization enabled Jan 29 16:20:39.789959 kernel: scsi host1: ahci Jan 29 16:20:39.790118 kernel: scsi host2: ahci Jan 29 16:20:39.790296 kernel: scsi host3: ahci Jan 29 16:20:39.790460 kernel: scsi host4: ahci Jan 29 16:20:39.790610 kernel: scsi host5: ahci Jan 29 16:20:39.790774 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 29 16:20:39.790791 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 29 16:20:39.790802 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 29 16:20:39.790812 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 29 16:20:39.790823 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 29 16:20:39.790835 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 29 16:20:39.790846 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460) Jan 29 16:20:39.759092 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:20:39.759257 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:20:39.761031 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:20:39.762407 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:20:39.762633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:20:39.769832 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:20:39.776904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:20:39.803450 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (463) Jan 29 16:20:39.806641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:20:39.830594 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:20:39.841286 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 16:20:39.852119 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 16:20:39.861173 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 16:20:39.864457 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 16:20:39.879762 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:20:39.882781 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:20:39.890293 disk-uuid[557]: Primary Header is updated. Jan 29 16:20:39.890293 disk-uuid[557]: Secondary Entries is updated. Jan 29 16:20:39.890293 disk-uuid[557]: Secondary Header is updated. Jan 29 16:20:39.894679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:20:39.899684 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:20:39.905216 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 16:20:40.096681 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 16:20:40.096756 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 16:20:40.097675 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 16:20:40.098679 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 16:20:40.099671 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 16:20:40.099693 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 16:20:40.100802 kernel: ata3.00: applying bridge limits Jan 29 16:20:40.101677 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 16:20:40.101690 kernel: ata3.00: configured for UDMA/100 Jan 29 16:20:40.102693 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 16:20:40.150669 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 16:20:40.164459 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 16:20:40.164486 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 16:20:40.900673 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 16:20:40.901051 disk-uuid[559]: The operation has completed successfully. Jan 29 16:20:40.932361 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:20:40.932482 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:20:40.986767 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:20:40.989691 sh[594]: Success Jan 29 16:20:41.003687 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 16:20:41.040328 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:20:41.055553 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:20:41.062140 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 16:20:41.072556 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:20:41.072608 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:20:41.072622 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:20:41.073589 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:20:41.074490 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:20:41.080178 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:20:41.082923 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:20:41.095995 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:20:41.099904 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:20:41.109984 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:20:41.110015 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:20:41.110026 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:20:41.113685 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:20:41.122790 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:20:41.124483 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:20:41.205444 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 29 16:20:41.228880 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:20:41.241147 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:20:41.249782 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:20:41.255202 systemd-networkd[773]: lo: Link UP Jan 29 16:20:41.255212 systemd-networkd[773]: lo: Gained carrier Jan 29 16:20:41.257031 systemd-networkd[773]: Enumeration completed Jan 29 16:20:41.257118 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:20:41.257405 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:20:41.257410 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:20:41.259380 systemd[1]: Reached target network.target - Network. Jan 29 16:20:41.259623 systemd-networkd[773]: eth0: Link UP Jan 29 16:20:41.259628 systemd-networkd[773]: eth0: Gained carrier Jan 29 16:20:41.259636 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:20:41.274713 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:20:41.300565 ignition[777]: Ignition 2.20.0 Jan 29 16:20:41.300578 ignition[777]: Stage: fetch-offline Jan 29 16:20:41.300621 ignition[777]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:41.300631 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:20:41.300758 ignition[777]: parsed url from cmdline: "" Jan 29 16:20:41.300762 ignition[777]: no config URL provided Jan 29 16:20:41.300767 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:20:41.300777 ignition[777]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:20:41.300803 ignition[777]: op(1): [started] loading QEMU firmware config module Jan 29 16:20:41.300809 ignition[777]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 16:20:41.310922 ignition[777]: op(1): [finished] loading QEMU firmware config module Jan 29 16:20:41.349632 ignition[777]: parsing config with SHA512: af246e4bd3e1c33a4b432857f2da60f863f89377e09623471f9d21a1868ececba81e69877b349efb3cb594a269dc59aa428db18a9d9e576f8eeb6e15a2427629 Jan 29 16:20:41.353553 unknown[777]: fetched base config from "system" Jan 29 16:20:41.353569 unknown[777]: fetched user config from "qemu" Jan 29 16:20:41.355568 ignition[777]: fetch-offline: fetch-offline passed Jan 29 16:20:41.356450 ignition[777]: Ignition finished successfully Jan 29 16:20:41.359581 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:20:41.362222 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 16:20:41.378803 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 16:20:41.394359 ignition[789]: Ignition 2.20.0 Jan 29 16:20:41.394371 ignition[789]: Stage: kargs Jan 29 16:20:41.394546 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:41.394559 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:20:41.395587 ignition[789]: kargs: kargs passed Jan 29 16:20:41.395632 ignition[789]: Ignition finished successfully Jan 29 16:20:41.403461 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 29 16:20:41.413941 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:20:41.426121 ignition[797]: Ignition 2.20.0 Jan 29 16:20:41.426134 ignition[797]: Stage: disks Jan 29 16:20:41.426313 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:41.426329 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:20:41.427267 ignition[797]: disks: disks passed Jan 29 16:20:41.427319 ignition[797]: Ignition finished successfully Jan 29 16:20:41.434101 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:20:41.434820 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 16:20:41.435510 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:20:41.439504 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:20:41.444516 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:20:41.445029 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:20:41.454811 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:20:41.468026 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 16:20:41.474683 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:20:42.079732 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:20:42.172704 kernel: EXT4-fs (vda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none. Jan 29 16:20:42.173684 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:20:42.174834 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:20:42.187730 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:20:42.190011 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:20:42.190916 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 16:20:42.197687 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815) Jan 29 16:20:42.197721 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:20:42.190958 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:20:42.204102 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:20:42.204130 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:20:42.204143 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:20:42.190981 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:20:42.205614 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:20:42.220049 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 16:20:42.222492 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
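systemd-fsck-root above locates the root filesystem through the /dev/disk/by-label/ROOT symlink maintained by udev. A minimal sketch that resolves those by-label links on a running system (assuming udev has populated the directory) might be:

import os

BY_LABEL = "/dev/disk/by-label"

def labeled_devices():
    # Map each filesystem label to the real block device its symlink points at.
    mapping = {}
    if os.path.isdir(BY_LABEL):
        for name in sorted(os.listdir(BY_LABEL)):
            link = os.path.join(BY_LABEL, name)
            mapping[name] = os.path.realpath(link)
    return mapping

if __name__ == "__main__":
    for label, device in labeled_devices().items():
        print(f"{label:12s} -> {device}")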
Jan 29 16:20:42.259154 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:20:42.265372 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:20:42.269378 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:20:42.273332 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:20:42.370253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:20:42.382732 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:20:42.385815 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:20:42.390667 kernel: BTRFS info (device vda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:20:42.412386 ignition[928]: INFO : Ignition 2.20.0 Jan 29 16:20:42.412386 ignition[928]: INFO : Stage: mount Jan 29 16:20:42.414906 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:42.414906 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:20:42.414906 ignition[928]: INFO : mount: mount passed Jan 29 16:20:42.414906 ignition[928]: INFO : Ignition finished successfully Jan 29 16:20:42.415060 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 16:20:42.417985 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:20:42.431748 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:20:42.617871 systemd-networkd[773]: eth0: Gained IPv6LL Jan 29 16:20:43.071702 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:20:43.088792 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:20:43.095670 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942) Jan 29 16:20:43.099499 kernel: BTRFS info (device vda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:20:43.099592 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:20:43.099619 kernel: BTRFS info (device vda6): using free space tree Jan 29 16:20:43.102694 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 16:20:43.104053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:20:43.129735 ignition[959]: INFO : Ignition 2.20.0 Jan 29 16:20:43.129735 ignition[959]: INFO : Stage: files Jan 29 16:20:43.131978 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:43.131978 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:20:43.131978 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:20:43.131978 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:20:43.131978 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:20:43.139417 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:20:43.139417 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:20:43.139417 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:20:43.139417 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:20:43.139417 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 16:20:43.135284 unknown[959]: wrote ssh authorized keys file for user: core Jan 29 16:20:43.200963 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:20:43.456874 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 16:20:43.456874 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:20:43.461560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 16:20:43.831995 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:20:43.941874 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:20:43.943948 ignition[959]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:20:43.943948 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 16:20:44.390298 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:20:44.773819 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:20:44.773819 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:20:44.777412 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:20:44.779553 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:20:44.779553 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 16:20:44.782797 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 16:20:44.782797 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:20:44.786128 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 16:20:44.786128 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 16:20:44.789336 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 16:20:44.806686 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:20:44.812265 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 16:20:44.813898 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 16:20:44.813898 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:20:44.813898 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:20:44.813898 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:20:44.813898 ignition[959]: INFO : files: createResultFile: createFiles: op(13): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:20:44.813898 ignition[959]: INFO : files: files passed Jan 29 16:20:44.813898 ignition[959]: INFO : Ignition finished successfully Jan 29 16:20:44.815447 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:20:44.823779 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:20:44.825905 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:20:44.827843 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:20:44.827947 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:20:44.836232 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 16:20:44.839090 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:20:44.839090 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:20:44.843068 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:20:44.844102 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:20:44.845894 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:20:44.860873 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:20:44.885143 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:20:44.885287 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:20:44.886004 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:20:44.889000 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:20:44.889358 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:20:44.892430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:20:44.908674 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:20:44.918829 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:20:44.927532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:20:44.929854 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:20:44.932205 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:20:44.934030 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:20:44.935034 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:20:44.937529 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:20:44.939589 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:20:44.941419 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:20:44.943593 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:20:44.945900 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:20:44.948110 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:20:44.950171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
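The files stage ends by writing /sysroot/etc/.ignition-result.json, which becomes /etc/.ignition-result.json once the real root is in place. Its exact fields vary between Ignition versions, so the sketch below simply loads and pretty-prints whatever the file contains:

import json

RESULT = "/etc/.ignition-result.json"

def main():
    try:
        with open(RESULT) as f:
            data = json.load(f)
    except FileNotFoundError:
        # Not every host is provisioned by Ignition; degrade gracefully.
        print(f"{RESULT} not found (Ignition may not have run on this host)")
        return
    print(json.dumps(data, indent=2, sort_keys=True))

if __name__ == "__main__":
    main()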
Jan 29 16:20:44.952631 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:20:44.954721 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:20:44.956796 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:20:44.958395 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:20:44.959395 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:20:44.961633 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:20:44.963798 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:20:44.966174 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:20:44.967110 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:20:44.969671 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:20:44.970691 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:20:44.972898 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:20:44.973975 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:20:44.976345 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:20:44.978085 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:20:44.979210 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:20:44.981917 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:20:44.983739 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:20:44.985588 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:20:44.986494 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:20:44.988578 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:20:44.989457 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:20:44.991541 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:20:44.992719 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:20:44.995234 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:20:44.996221 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:20:45.008784 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:20:45.011298 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:20:45.013043 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:20:45.014139 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:20:45.016518 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:20:45.017421 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:20:45.021487 ignition[1013]: INFO : Ignition 2.20.0 Jan 29 16:20:45.022558 ignition[1013]: INFO : Stage: umount Jan 29 16:20:45.022558 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:45.022558 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 16:20:45.024958 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 29 16:20:45.027961 ignition[1013]: INFO : umount: umount passed Jan 29 16:20:45.027961 ignition[1013]: INFO : Ignition finished successfully Jan 29 16:20:45.025073 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:20:45.026553 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:20:45.026732 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:20:45.029290 systemd[1]: Stopped target network.target - Network. Jan 29 16:20:45.030045 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:20:45.030108 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:20:45.030431 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:20:45.030477 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:20:45.033881 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:20:45.033928 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:20:45.034201 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:20:45.034244 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:20:45.034669 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:20:45.039363 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:20:45.045300 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:20:45.049397 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:20:45.049574 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:20:45.053164 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:20:45.053390 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:20:45.053504 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:20:45.057431 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:20:45.058769 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:20:45.058818 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:20:45.070718 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:20:45.071696 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:20:45.071788 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:20:45.073842 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:20:45.073891 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:20:45.076013 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:20:45.076060 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:20:45.078297 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:20:45.078344 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:20:45.080557 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:20:45.084170 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:20:45.084247 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:20:45.092536 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 29 16:20:45.093611 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:20:45.105379 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:20:45.106444 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:20:45.109165 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:20:45.110194 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:20:45.112326 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:20:45.112371 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:20:45.114303 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:20:45.114350 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:20:45.115381 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:20:45.115426 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:20:45.116203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:20:45.116249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:20:45.129777 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:20:45.130196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:20:45.130248 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:20:45.134125 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 16:20:45.134183 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:20:45.136687 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:20:45.136735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:20:45.137249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:20:45.137290 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:20:45.142986 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:20:45.143049 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:20:45.143376 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:20:45.143478 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:20:45.214033 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:20:45.214188 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:20:45.214834 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:20:45.215128 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:20:45.215187 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:20:45.233765 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:20:45.242505 systemd[1]: Switching root. Jan 29 16:20:45.278489 systemd-journald[192]: Journal stopped Jan 29 16:20:46.433488 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). 
Jan 29 16:20:46.433556 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:20:46.433576 kernel: SELinux: policy capability open_perms=1 Jan 29 16:20:46.433599 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:20:46.433610 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:20:46.433622 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:20:46.433633 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:20:46.433656 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:20:46.433668 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:20:46.433685 kernel: audit: type=1403 audit(1738167645.637:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:20:46.433698 systemd[1]: Successfully loaded SELinux policy in 45.401ms. Jan 29 16:20:46.433719 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.393ms. Jan 29 16:20:46.433735 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:20:46.433747 systemd[1]: Detected virtualization kvm. Jan 29 16:20:46.433766 systemd[1]: Detected architecture x86-64. Jan 29 16:20:46.433778 systemd[1]: Detected first boot. Jan 29 16:20:46.433790 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:20:46.433802 zram_generator::config[1058]: No configuration found. Jan 29 16:20:46.433815 kernel: Guest personality initialized and is inactive Jan 29 16:20:46.433827 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:20:46.433839 kernel: Initialized host personality Jan 29 16:20:46.433853 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:20:46.433864 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:20:46.433877 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:20:46.433890 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:20:46.433902 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:20:46.433914 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:20:46.433926 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:20:46.433939 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:20:46.433953 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:20:46.433965 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:20:46.433977 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:20:46.433990 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:20:46.434002 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:20:46.434019 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:20:46.434032 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:20:46.434044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
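The lines above show the SELinux policy being loaded and systemd initializing the machine ID on first boot. A small sketch to confirm both on a running host; /sys/fs/selinux is only mounted when a policy is active, so the check degrades gracefully elsewhere:

def read_first_line(path):
    # Return the first line of a small pseudo-file, or None if it is absent.
    try:
        with open(path) as f:
            return f.readline().strip()
    except OSError:
        return None

if __name__ == "__main__":
    machine_id = read_first_line("/etc/machine-id")
    enforce = read_first_line("/sys/fs/selinux/enforce")
    print("machine-id:", machine_id or "<unavailable>")
    if enforce is None:
        print("selinux: not active (selinuxfs not mounted)")
    else:
        print("selinux enforcing:", enforce == "1")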
Jan 29 16:20:46.434057 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:20:46.434072 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:20:46.434085 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:20:46.434097 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:20:46.434118 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:20:46.434131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:20:46.434143 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:20:46.434155 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:20:46.434170 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:20:46.434182 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:20:46.434194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:20:46.434212 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:20:46.434224 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:20:46.434237 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:20:46.434249 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:20:46.434261 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:20:46.434273 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:20:46.434285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:20:46.434300 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:20:46.434313 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:20:46.434325 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:20:46.434339 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:20:46.434351 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:20:46.434363 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:20:46.434376 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:20:46.434389 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:20:46.434401 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:20:46.434416 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:20:46.434429 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:20:46.434441 systemd[1]: Reached target machines.target - Containers. Jan 29 16:20:46.434453 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:20:46.434465 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:20:46.434478 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 29 16:20:46.434490 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:20:46.434502 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:20:46.434517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:20:46.434529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:20:46.434541 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:20:46.434553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:20:46.434566 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:20:46.434578 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:20:46.434591 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:20:46.434602 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:20:46.434617 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:20:46.434630 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:20:46.434642 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:20:46.434671 kernel: loop: module loaded Jan 29 16:20:46.434682 kernel: fuse: init (API version 7.39) Jan 29 16:20:46.434694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:20:46.434706 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:20:46.434718 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:20:46.434730 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:20:46.434745 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:20:46.434758 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:20:46.434770 systemd[1]: Stopped verity-setup.service. Jan 29 16:20:46.434782 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:20:46.434794 kernel: ACPI: bus type drm_connector registered Jan 29 16:20:46.434809 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:20:46.434821 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:20:46.434833 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:20:46.434845 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:20:46.434857 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:20:46.434869 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:20:46.434902 systemd-journald[1129]: Collecting audit messages is disabled. Jan 29 16:20:46.434927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:20:46.434940 systemd-journald[1129]: Journal started Jan 29 16:20:46.434962 systemd-journald[1129]: Runtime Journal (/run/log/journal/b6226dfcd2714afd9525ba5a1033873f) is 6M, max 48.2M, 42.2M free. 
Jan 29 16:20:46.207271 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:20:46.219575 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 16:20:46.220320 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:20:46.437036 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:20:46.438197 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:20:46.439810 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:20:46.440031 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:20:46.441535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:20:46.441761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:20:46.443233 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:20:46.443445 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:20:46.444840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:20:46.445054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:20:46.446746 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:20:46.446956 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:20:46.448360 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:20:46.448573 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:20:46.450156 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:20:46.451612 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:20:46.453377 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:20:46.454960 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:20:46.470201 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:20:46.487746 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:20:46.489990 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:20:46.491216 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:20:46.491244 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:20:46.493231 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:20:46.495522 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:20:46.497729 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:20:46.498964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:20:46.501268 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:20:46.504463 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:20:46.506128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:20:46.508661 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 29 16:20:46.512792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:20:46.514506 systemd-journald[1129]: Time spent on flushing to /var/log/journal/b6226dfcd2714afd9525ba5a1033873f is 14.491ms for 1056 entries. Jan 29 16:20:46.514506 systemd-journald[1129]: System Journal (/var/log/journal/b6226dfcd2714afd9525ba5a1033873f) is 8M, max 195.6M, 187.6M free. Jan 29 16:20:46.541962 systemd-journald[1129]: Received client request to flush runtime journal. Jan 29 16:20:46.515003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:20:46.518326 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:20:46.525779 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:20:46.529064 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:20:46.534612 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:20:46.539146 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:20:46.542277 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:20:46.544193 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:20:46.546138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:20:46.554080 kernel: loop0: detected capacity change from 0 to 138176 Jan 29 16:20:46.554474 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:20:46.566879 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:20:46.570732 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:20:46.573681 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:20:46.575934 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jan 29 16:20:46.575947 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jan 29 16:20:46.582153 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:20:46.589689 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:20:46.594881 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:20:46.596662 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 16:20:46.601546 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:20:46.611899 kernel: loop1: detected capacity change from 0 to 205544 Jan 29 16:20:46.627263 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:20:46.634884 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:20:46.651794 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 29 16:20:46.651816 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 29 16:20:46.653753 kernel: loop2: detected capacity change from 0 to 147912 Jan 29 16:20:46.658640 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 16:20:46.699774 kernel: loop3: detected capacity change from 0 to 138176 Jan 29 16:20:46.713683 kernel: loop4: detected capacity change from 0 to 205544 Jan 29 16:20:46.722665 kernel: loop5: detected capacity change from 0 to 147912 Jan 29 16:20:46.733121 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 16:20:46.733715 (sd-merge)[1208]: Merged extensions into '/usr'. Jan 29 16:20:46.737815 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:20:46.737924 systemd[1]: Reloading... Jan 29 16:20:46.807712 zram_generator::config[1245]: No configuration found. Jan 29 16:20:46.881017 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:20:46.932544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:20:47.008969 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:20:47.009354 systemd[1]: Reloading finished in 270 ms. Jan 29 16:20:47.035767 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:20:47.037332 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:20:47.059210 systemd[1]: Starting ensure-sysext.service... Jan 29 16:20:47.061161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:20:47.082307 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:20:47.082583 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:20:47.083516 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:20:47.083801 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 29 16:20:47.083883 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 29 16:20:47.084022 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:20:47.084038 systemd[1]: Reloading... Jan 29 16:20:47.087685 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:20:47.087697 systemd-tmpfiles[1274]: Skipping /boot Jan 29 16:20:47.101005 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:20:47.101017 systemd-tmpfiles[1274]: Skipping /boot Jan 29 16:20:47.136687 zram_generator::config[1305]: No configuration found. Jan 29 16:20:47.253420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:20:47.319970 systemd[1]: Reloading finished in 235 ms. Jan 29 16:20:47.333815 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:20:47.353415 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:20:47.362989 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:20:47.365472 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
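systemd-sysext reports merging the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extensions into /usr. The sketch below lists candidate extension images in the directories systemd-sysext documents as its defaults; if a deployment uses other search paths, the list would need adjusting:

import os

SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def find_extensions():
    # Collect *.raw images and extension directories from the default search paths.
    found = []
    for base in SEARCH_PATHS:
        if not os.path.isdir(base):
            continue
        for name in sorted(os.listdir(base)):
            path = os.path.join(base, name)
            if name.endswith(".raw") or os.path.isdir(path):
                found.append(path)
    return found

if __name__ == "__main__":
    paths = find_extensions()
    if not paths:
        print("no system extension images found")
    for path in paths:
        print(path)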
Jan 29 16:20:47.368078 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:20:47.373331 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:20:47.376703 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:20:47.388943 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:20:47.393201 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:20:47.393380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:20:47.395812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:20:47.401757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:20:47.407698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:20:47.409103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:20:47.409258 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:20:47.411290 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:20:47.412751 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:20:47.415052 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:20:47.417947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:20:47.418220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:20:47.420386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:20:47.420638 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:20:47.423228 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:20:47.423548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:20:47.428502 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Jan 29 16:20:47.433488 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:20:47.433889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:20:47.437357 augenrules[1376]: No rules Jan 29 16:20:47.441974 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:20:47.444316 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:20:47.444826 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:20:47.451984 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:20:47.452243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:20:47.460148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 29 16:20:47.463423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:20:47.468670 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:20:47.470288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:20:47.470402 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:20:47.470498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:20:47.472155 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:20:47.473896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:20:47.482129 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:20:47.489660 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:20:47.492222 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:20:47.493939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:20:47.494206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:20:47.496028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:20:47.496250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:20:47.498308 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:20:47.498587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:20:47.521064 systemd[1]: Finished ensure-sysext.service. Jan 29 16:20:47.523630 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:20:47.529867 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:20:47.531936 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:20:47.535871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:20:47.541215 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:20:47.544431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:20:47.550918 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:20:47.552729 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:20:47.552774 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:20:47.556789 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:20:47.561773 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 29 16:20:47.562917 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:20:47.562944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:20:47.563791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:20:47.564253 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:20:47.565445 augenrules[1416]: /sbin/augenrules: No change Jan 29 16:20:47.566006 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:20:47.566268 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:20:47.568083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:20:47.568413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:20:47.574191 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:20:47.574525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:20:47.581255 augenrules[1442]: No rules Jan 29 16:20:47.582029 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:20:47.582784 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:20:47.583101 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:20:47.583229 systemd-resolved[1345]: Positive Trust Anchors: Jan 29 16:20:47.583254 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:20:47.583297 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:20:47.586073 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:20:47.586150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:20:47.591279 systemd-resolved[1345]: Defaulting to hostname 'linux'. Jan 29 16:20:47.593234 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:20:47.594607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:20:47.628694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1401) Jan 29 16:20:47.648801 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 29 16:20:47.649133 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 16:20:47.649305 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 16:20:47.649492 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 16:20:47.672071 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jan 29 16:20:47.673851 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:20:47.680707 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 29 16:20:47.685376 systemd-networkd[1428]: lo: Link UP Jan 29 16:20:47.687679 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 16:20:47.688414 systemd-networkd[1428]: lo: Gained carrier Jan 29 16:20:47.690716 systemd-networkd[1428]: Enumeration completed Jan 29 16:20:47.694099 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:20:47.694197 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:20:47.695191 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:20:47.695880 systemd[1]: Reached target network.target - Network. Jan 29 16:20:47.696535 systemd-networkd[1428]: eth0: Link UP Jan 29 16:20:47.696590 systemd-networkd[1428]: eth0: Gained carrier Jan 29 16:20:47.696665 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:20:47.702686 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:20:47.742207 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:20:47.747925 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:20:47.751681 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:20:47.756786 systemd-networkd[1428]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:20:47.762508 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. Jan 29 16:20:48.283768 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 16:20:48.283836 systemd-timesyncd[1433]: Initial clock synchronization to Wed 2025-01-29 16:20:48.283555 UTC. Jan 29 16:20:48.283884 systemd-resolved[1345]: Clock change detected. Flushing caches. Jan 29 16:20:48.285856 kernel: kvm_amd: TSC scaling supported Jan 29 16:20:48.285891 kernel: kvm_amd: Nested Virtualization enabled Jan 29 16:20:48.285920 kernel: kvm_amd: Nested Paging enabled Jan 29 16:20:48.286980 kernel: kvm_amd: LBR virtualization supported Jan 29 16:20:48.286997 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 16:20:48.287641 kernel: kvm_amd: Virtual GIF supported Jan 29 16:20:48.310615 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:20:48.317759 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:20:48.325037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 16:20:48.340884 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:20:48.344448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:20:48.348688 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:20:48.353385 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:20:48.358126 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
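The systemd-networkd entries above show eth0 being matched by the catch-all /usr/lib/systemd/network/zz-default.network unit and then obtaining 10.0.0.106/16 and a gateway over DHCP from 10.0.0.1. For orientation, a catch-all DHCP .network file of that kind looks roughly like the sketch below; this is a minimal illustration, and the file actually shipped on this image may set additional options:

    [Match]
    # matches every interface name, which is why networkd warns about a
    # "potentially unpredictable interface name"
    Name=*

    [Network]
    DHCP=yes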
Jan 29 16:20:48.367897 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:20:48.403910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:20:48.407541 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:20:48.409244 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:20:48.410421 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:20:48.411640 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:20:48.412940 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:20:48.414410 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:20:48.415642 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:20:48.416964 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:20:48.418334 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:20:48.418376 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:20:48.419394 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:20:48.421294 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:20:48.424451 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:20:48.428885 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:20:48.430425 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:20:48.431752 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:20:48.438527 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:20:48.440134 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:20:48.442533 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:20:48.444167 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:20:48.445325 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:20:48.446290 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:20:48.447251 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:20:48.447275 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:20:48.448287 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:20:48.450395 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:20:48.452111 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:20:48.454767 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:20:48.458386 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:20:48.459873 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 29 16:20:48.461140 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:20:48.464124 jq[1484]: false Jan 29 16:20:48.465690 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:20:48.468578 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:20:48.473713 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:20:48.484643 dbus-daemon[1483]: [system] SELinux support is enabled Jan 29 16:20:48.486795 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:20:48.488843 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:20:48.489522 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:20:48.490320 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:20:48.492405 extend-filesystems[1485]: Found loop3 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found loop4 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found loop5 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found sr0 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found vda Jan 29 16:20:48.492405 extend-filesystems[1485]: Found vda1 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found vda2 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found vda3 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found usr Jan 29 16:20:48.492405 extend-filesystems[1485]: Found vda4 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found vda6 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found vda7 Jan 29 16:20:48.492405 extend-filesystems[1485]: Found vda9 Jan 29 16:20:48.492405 extend-filesystems[1485]: Checking size of /dev/vda9 Jan 29 16:20:48.518873 extend-filesystems[1485]: Resized partition /dev/vda9 Jan 29 16:20:48.522064 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1402) Jan 29 16:20:48.492414 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:20:48.497089 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:20:48.522529 update_engine[1497]: I20250129 16:20:48.510838 1497 main.cc:92] Flatcar Update Engine starting Jan 29 16:20:48.522529 update_engine[1497]: I20250129 16:20:48.520642 1497 update_check_scheduler.cc:74] Next update check in 10m39s Jan 29 16:20:48.512151 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:20:48.527019 jq[1499]: true Jan 29 16:20:48.527303 extend-filesystems[1506]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:20:48.527106 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:20:48.527424 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:20:48.529047 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:20:48.529333 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:20:48.531130 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 16:20:48.533131 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:20:48.533390 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 16:20:48.554065 jq[1509]: true Jan 29 16:20:48.562602 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 16:20:48.589393 tar[1508]: linux-amd64/helm Jan 29 16:20:48.564196 (ntainerd)[1510]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:20:48.586966 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:20:48.588599 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:20:48.588626 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:20:48.590350 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:20:48.590375 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:20:48.590460 systemd-logind[1493]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 16:20:48.590483 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:20:48.591669 systemd-logind[1493]: New seat seat0. Jan 29 16:20:48.593517 extend-filesystems[1506]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 16:20:48.593517 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 16:20:48.593517 extend-filesystems[1506]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 16:20:48.601246 extend-filesystems[1485]: Resized filesystem in /dev/vda9 Jan 29 16:20:48.600799 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:20:48.605702 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:20:48.605793 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:20:48.608321 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:20:48.608753 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:20:48.613414 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:20:48.615415 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 16:20:48.643228 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:20:48.726081 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:20:48.749176 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:20:48.757801 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:20:48.766036 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:20:48.766303 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:20:48.775775 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:20:48.780825 containerd[1510]: time="2025-01-29T16:20:48.780725607Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:20:48.787845 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:20:48.791033 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 29 16:20:48.794712 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:20:48.796090 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:20:48.808815 containerd[1510]: time="2025-01-29T16:20:48.808765753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:20:48.810508 containerd[1510]: time="2025-01-29T16:20:48.810478664Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:20:48.810618 containerd[1510]: time="2025-01-29T16:20:48.810603428Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:20:48.810671 containerd[1510]: time="2025-01-29T16:20:48.810659964Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:20:48.810907 containerd[1510]: time="2025-01-29T16:20:48.810891148Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:20:48.810964 containerd[1510]: time="2025-01-29T16:20:48.810951972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:20:48.811087 containerd[1510]: time="2025-01-29T16:20:48.811071075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:20:48.811136 containerd[1510]: time="2025-01-29T16:20:48.811124836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:20:48.811433 containerd[1510]: time="2025-01-29T16:20:48.811412245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:20:48.811494 containerd[1510]: time="2025-01-29T16:20:48.811480553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:20:48.811577 containerd[1510]: time="2025-01-29T16:20:48.811547238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:20:48.811631 containerd[1510]: time="2025-01-29T16:20:48.811618161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:20:48.811782 containerd[1510]: time="2025-01-29T16:20:48.811766850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:20:48.812147 containerd[1510]: time="2025-01-29T16:20:48.812123839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:20:48.812722 containerd[1510]: time="2025-01-29T16:20:48.812385490Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:20:48.812722 containerd[1510]: time="2025-01-29T16:20:48.812403634Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:20:48.812722 containerd[1510]: time="2025-01-29T16:20:48.812501948Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:20:48.812722 containerd[1510]: time="2025-01-29T16:20:48.812561950Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:20:48.874093 containerd[1510]: time="2025-01-29T16:20:48.873966624Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:20:48.874093 containerd[1510]: time="2025-01-29T16:20:48.874051804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:20:48.874093 containerd[1510]: time="2025-01-29T16:20:48.874069477Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:20:48.874093 containerd[1510]: time="2025-01-29T16:20:48.874084996Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:20:48.874093 containerd[1510]: time="2025-01-29T16:20:48.874104272Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:20:48.874375 containerd[1510]: time="2025-01-29T16:20:48.874346086Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:20:48.874628 containerd[1510]: time="2025-01-29T16:20:48.874607195Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:20:48.874747 containerd[1510]: time="2025-01-29T16:20:48.874720728Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:20:48.874747 containerd[1510]: time="2025-01-29T16:20:48.874744443Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:20:48.874799 containerd[1510]: time="2025-01-29T16:20:48.874759110Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:20:48.874799 containerd[1510]: time="2025-01-29T16:20:48.874772385Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:20:48.874799 containerd[1510]: time="2025-01-29T16:20:48.874784678Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:20:48.874799 containerd[1510]: time="2025-01-29T16:20:48.874796079Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:20:48.874868 containerd[1510]: time="2025-01-29T16:20:48.874809735Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:20:48.874868 containerd[1510]: time="2025-01-29T16:20:48.874825545Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 29 16:20:48.874868 containerd[1510]: time="2025-01-29T16:20:48.874837978Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:20:48.874868 containerd[1510]: time="2025-01-29T16:20:48.874850241Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:20:48.874868 containerd[1510]: time="2025-01-29T16:20:48.874863506Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:20:48.874964 containerd[1510]: time="2025-01-29T16:20:48.874883614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.874964 containerd[1510]: time="2025-01-29T16:20:48.874918529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.874964 containerd[1510]: time="2025-01-29T16:20:48.874933136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.874964 containerd[1510]: time="2025-01-29T16:20:48.874948996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.874964 containerd[1510]: time="2025-01-29T16:20:48.874963604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875071 containerd[1510]: time="2025-01-29T16:20:48.874983471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875071 containerd[1510]: time="2025-01-29T16:20:48.875007706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875071 containerd[1510]: time="2025-01-29T16:20:48.875024257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875071 containerd[1510]: time="2025-01-29T16:20:48.875036811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875071 containerd[1510]: time="2025-01-29T16:20:48.875050647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875071 containerd[1510]: time="2025-01-29T16:20:48.875061928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875071 containerd[1510]: time="2025-01-29T16:20:48.875073850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875194 containerd[1510]: time="2025-01-29T16:20:48.875087035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875194 containerd[1510]: time="2025-01-29T16:20:48.875101542Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:20:48.875194 containerd[1510]: time="2025-01-29T16:20:48.875120538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875194 containerd[1510]: time="2025-01-29T16:20:48.875133502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 29 16:20:48.875194 containerd[1510]: time="2025-01-29T16:20:48.875144092Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:20:48.875194 containerd[1510]: time="2025-01-29T16:20:48.875188094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:20:48.875309 containerd[1510]: time="2025-01-29T16:20:48.875205487Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:20:48.875309 containerd[1510]: time="2025-01-29T16:20:48.875216167Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:20:48.875309 containerd[1510]: time="2025-01-29T16:20:48.875228089Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:20:48.875309 containerd[1510]: time="2025-01-29T16:20:48.875236946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875309 containerd[1510]: time="2025-01-29T16:20:48.875249029Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:20:48.875309 containerd[1510]: time="2025-01-29T16:20:48.875258717Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:20:48.875309 containerd[1510]: time="2025-01-29T16:20:48.875267944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 16:20:48.875609 containerd[1510]: time="2025-01-29T16:20:48.875540595Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:20:48.875609 containerd[1510]: time="2025-01-29T16:20:48.875604155Z" level=info msg="Connect containerd service" Jan 29 16:20:48.875757 containerd[1510]: time="2025-01-29T16:20:48.875633299Z" level=info msg="using legacy CRI server" Jan 29 16:20:48.875757 containerd[1510]: time="2025-01-29T16:20:48.875640643Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:20:48.876087 containerd[1510]: time="2025-01-29T16:20:48.876040082Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:20:48.877002 containerd[1510]: time="2025-01-29T16:20:48.876954617Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:20:48.877900 containerd[1510]: time="2025-01-29T16:20:48.877296007Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:20:48.877900 containerd[1510]: time="2025-01-29T16:20:48.877341121Z" level=info msg="Start subscribing containerd event" Jan 29 16:20:48.877900 containerd[1510]: time="2025-01-29T16:20:48.877422424Z" level=info msg="Start recovering state" Jan 29 16:20:48.877900 containerd[1510]: time="2025-01-29T16:20:48.877439125Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:20:48.877900 containerd[1510]: time="2025-01-29T16:20:48.877508535Z" level=info msg="Start event monitor" Jan 29 16:20:48.877900 containerd[1510]: time="2025-01-29T16:20:48.877528162Z" level=info msg="Start snapshots syncer" Jan 29 16:20:48.877900 containerd[1510]: time="2025-01-29T16:20:48.877539924Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:20:48.877900 containerd[1510]: time="2025-01-29T16:20:48.877552518Z" level=info msg="Start streaming server" Jan 29 16:20:48.877767 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:20:48.879259 containerd[1510]: time="2025-01-29T16:20:48.879227078Z" level=info msg="containerd successfully booted in 0.103591s" Jan 29 16:20:48.965045 tar[1508]: linux-amd64/LICENSE Jan 29 16:20:48.965233 tar[1508]: linux-amd64/README.md Jan 29 16:20:48.993142 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:20:49.856755 systemd-networkd[1428]: eth0: Gained IPv6LL Jan 29 16:20:49.859787 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:20:49.861736 systemd[1]: Reached target network-online.target - Network is Online. 
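The CRI plugin configuration that containerd dumps above (overlayfs snapshotter, a single runc runtime of type io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, CNI config expected under /etc/cni/net.d) corresponds to a containerd config.toml along the lines of the partial sketch below; it is for orientation only and is not the exact file used on this host:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

The "failed to load cni during init" error above is consistent with this: /etc/cni/net.d is still empty at this point, and a CNI configuration typically only appears once a pod network add-on is installed later.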
Jan 29 16:20:49.871770 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 16:20:49.874221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:20:49.876354 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:20:49.896261 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 16:20:49.896549 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 16:20:49.898436 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:20:49.900832 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:20:50.509580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:20:50.511339 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:20:50.514260 systemd[1]: Startup finished in 716ms (kernel) + 6.929s (initrd) + 4.402s (userspace) = 12.047s. Jan 29 16:20:50.515101 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:20:50.924314 kubelet[1595]: E0129 16:20:50.924184 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:20:50.928639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:20:50.928847 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:20:50.929230 systemd[1]: kubelet.service: Consumed 916ms CPU time, 235.9M memory peak. Jan 29 16:20:55.298249 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:20:55.299641 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:36808.service - OpenSSH per-connection server daemon (10.0.0.1:36808). Jan 29 16:20:55.352776 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 36808 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:20:55.354522 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:20:55.361081 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:20:55.369782 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:20:55.375343 systemd-logind[1493]: New session 1 of user core. Jan 29 16:20:55.381176 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:20:55.384435 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:20:55.391655 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:20:55.393801 systemd-logind[1493]: New session c1 of user core. Jan 29 16:20:55.543693 systemd[1612]: Queued start job for default target default.target. Jan 29 16:20:55.552992 systemd[1612]: Created slice app.slice - User Application Slice. Jan 29 16:20:55.553020 systemd[1612]: Reached target paths.target - Paths. Jan 29 16:20:55.553064 systemd[1612]: Reached target timers.target - Timers. Jan 29 16:20:55.554781 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:20:55.568324 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
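The kubelet failure above is the expected state for a node that has not been configured yet: /var/lib/kubelet/config.yaml does not exist, so the service exits and systemd keeps scheduling restarts (the same error repeats at 16:21:01 and 16:21:11 below). On a kubeadm-managed node that file is written during kubeadm init or join and carries a KubeletConfiguration object roughly like the sketch below; the values shown are illustrative assumptions, not taken from this host:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches the SystemdCgroup = true runc option in the containerd CRI config above
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests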
Jan 29 16:20:55.568462 systemd[1612]: Reached target sockets.target - Sockets. Jan 29 16:20:55.568508 systemd[1612]: Reached target basic.target - Basic System. Jan 29 16:20:55.568551 systemd[1612]: Reached target default.target - Main User Target. Jan 29 16:20:55.568599 systemd[1612]: Startup finished in 168ms. Jan 29 16:20:55.569058 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:20:55.570861 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:20:55.632596 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:36814.service - OpenSSH per-connection server daemon (10.0.0.1:36814). Jan 29 16:20:55.670681 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 36814 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:20:55.672210 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:20:55.677320 systemd-logind[1493]: New session 2 of user core. Jan 29 16:20:55.687824 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:20:55.741611 sshd[1625]: Connection closed by 10.0.0.1 port 36814 Jan 29 16:20:55.742162 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Jan 29 16:20:55.765585 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:36814.service: Deactivated successfully. Jan 29 16:20:55.767534 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:20:55.769208 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:20:55.780928 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:36818.service - OpenSSH per-connection server daemon (10.0.0.1:36818). Jan 29 16:20:55.781978 systemd-logind[1493]: Removed session 2. Jan 29 16:20:55.816194 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 36818 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:20:55.817719 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:20:55.821956 systemd-logind[1493]: New session 3 of user core. Jan 29 16:20:55.837859 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:20:55.887970 sshd[1633]: Connection closed by 10.0.0.1 port 36818 Jan 29 16:20:55.888327 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Jan 29 16:20:55.902179 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:36818.service: Deactivated successfully. Jan 29 16:20:55.903900 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:20:55.905538 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:20:55.915836 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:36834.service - OpenSSH per-connection server daemon (10.0.0.1:36834). Jan 29 16:20:55.917012 systemd-logind[1493]: Removed session 3. Jan 29 16:20:55.950967 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 36834 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:20:55.952700 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:20:55.957238 systemd-logind[1493]: New session 4 of user core. Jan 29 16:20:55.966832 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:20:56.020518 sshd[1641]: Connection closed by 10.0.0.1 port 36834 Jan 29 16:20:56.020865 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Jan 29 16:20:56.029447 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:36834.service: Deactivated successfully. 
Jan 29 16:20:56.031309 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:20:56.032948 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:20:56.046942 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:36842.service - OpenSSH per-connection server daemon (10.0.0.1:36842). Jan 29 16:20:56.048076 systemd-logind[1493]: Removed session 4. Jan 29 16:20:56.084530 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 36842 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:20:56.086271 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:20:56.090472 systemd-logind[1493]: New session 5 of user core. Jan 29 16:20:56.100689 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:20:56.160592 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:20:56.161006 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:20:56.178453 sudo[1650]: pam_unix(sudo:session): session closed for user root Jan 29 16:20:56.180154 sshd[1649]: Connection closed by 10.0.0.1 port 36842 Jan 29 16:20:56.180552 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Jan 29 16:20:56.197540 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:36842.service: Deactivated successfully. Jan 29 16:20:56.200331 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:20:56.202924 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:20:56.217147 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:36844.service - OpenSSH per-connection server daemon (10.0.0.1:36844). Jan 29 16:20:56.218438 systemd-logind[1493]: Removed session 5. Jan 29 16:20:56.252501 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 36844 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:20:56.253923 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:20:56.258442 systemd-logind[1493]: New session 6 of user core. Jan 29 16:20:56.267809 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:20:56.322025 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:20:56.322426 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:20:56.326807 sudo[1660]: pam_unix(sudo:session): session closed for user root Jan 29 16:20:56.333733 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:20:56.334076 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:20:56.354018 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:20:56.386878 augenrules[1682]: No rules Jan 29 16:20:56.388762 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:20:56.389075 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:20:56.390421 sudo[1659]: pam_unix(sudo:session): session closed for user root Jan 29 16:20:56.392019 sshd[1658]: Connection closed by 10.0.0.1 port 36844 Jan 29 16:20:56.392326 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Jan 29 16:20:56.408536 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:36844.service: Deactivated successfully. Jan 29 16:20:56.410633 systemd[1]: session-6.scope: Deactivated successfully. 
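In session 6 above the core user removes the default rule files from /etc/audit/rules.d/ and restarts audit-rules, after which augenrules reports "No rules". augenrules concatenates the *.rules files in that directory and feeds them to auditctl, so restoring rules is a matter of dropping a file back in; the example below is purely illustrative and is not a rule present on this host:

    # /etc/audit/rules.d/10-kubelet.rules  (hypothetical file name)
    # watch the kubelet configuration for writes and attribute changes
    -w /var/lib/kubelet/config.yaml -p wa -k kubelet-config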
Jan 29 16:20:56.412022 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:20:56.413393 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:36858.service - OpenSSH per-connection server daemon (10.0.0.1:36858). Jan 29 16:20:56.414214 systemd-logind[1493]: Removed session 6. Jan 29 16:20:56.464444 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 36858 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:20:56.466328 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:20:56.470751 systemd-logind[1493]: New session 7 of user core. Jan 29 16:20:56.480713 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:20:56.534302 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:20:56.534643 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:20:56.878832 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:20:56.879033 (dockerd)[1714]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:20:57.221451 dockerd[1714]: time="2025-01-29T16:20:57.221267201Z" level=info msg="Starting up" Jan 29 16:20:57.538297 dockerd[1714]: time="2025-01-29T16:20:57.538177607Z" level=info msg="Loading containers: start." Jan 29 16:20:57.706627 kernel: Initializing XFRM netlink socket Jan 29 16:20:57.783060 systemd-networkd[1428]: docker0: Link UP Jan 29 16:20:57.813968 dockerd[1714]: time="2025-01-29T16:20:57.813878707Z" level=info msg="Loading containers: done." Jan 29 16:20:57.832907 dockerd[1714]: time="2025-01-29T16:20:57.832851220Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:20:57.833081 dockerd[1714]: time="2025-01-29T16:20:57.832949685Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:20:57.833081 dockerd[1714]: time="2025-01-29T16:20:57.833048390Z" level=info msg="Daemon has completed initialization" Jan 29 16:20:57.866646 dockerd[1714]: time="2025-01-29T16:20:57.866588688Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:20:57.866759 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:20:58.545024 containerd[1510]: time="2025-01-29T16:20:58.544984500Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 16:20:59.409909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740025659.mount: Deactivated successfully. 
Jan 29 16:21:00.388768 containerd[1510]: time="2025-01-29T16:21:00.388690859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:00.389384 containerd[1510]: time="2025-01-29T16:21:00.389307535Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 16:21:00.390669 containerd[1510]: time="2025-01-29T16:21:00.390630596Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:00.393417 containerd[1510]: time="2025-01-29T16:21:00.393374842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:00.394455 containerd[1510]: time="2025-01-29T16:21:00.394423227Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.849399954s" Jan 29 16:21:00.394505 containerd[1510]: time="2025-01-29T16:21:00.394455297Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 16:21:00.395958 containerd[1510]: time="2025-01-29T16:21:00.395912099Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 16:21:01.179660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:21:01.245870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:01.430610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:01.463002 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:01.560470 kubelet[1978]: E0129 16:21:01.560401 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:01.566630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:01.566843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:21:01.567220 systemd[1]: kubelet.service: Consumed 287ms CPU time, 97M memory peak. 
Jan 29 16:21:01.913076 containerd[1510]: time="2025-01-29T16:21:01.912936820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:01.913758 containerd[1510]: time="2025-01-29T16:21:01.913704980Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 16:21:01.915021 containerd[1510]: time="2025-01-29T16:21:01.914974861Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:01.917764 containerd[1510]: time="2025-01-29T16:21:01.917736920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:01.918802 containerd[1510]: time="2025-01-29T16:21:01.918776469Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.522829785s" Jan 29 16:21:01.918864 containerd[1510]: time="2025-01-29T16:21:01.918802578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 16:21:01.919311 containerd[1510]: time="2025-01-29T16:21:01.919282959Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 16:21:03.710272 containerd[1510]: time="2025-01-29T16:21:03.710196611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:03.711064 containerd[1510]: time="2025-01-29T16:21:03.711030345Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 16:21:03.712220 containerd[1510]: time="2025-01-29T16:21:03.712185871Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:03.714924 containerd[1510]: time="2025-01-29T16:21:03.714889200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:03.716092 containerd[1510]: time="2025-01-29T16:21:03.716057931Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.796743834s" Jan 29 16:21:03.716092 containerd[1510]: time="2025-01-29T16:21:03.716088018Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 16:21:03.716554 
containerd[1510]: time="2025-01-29T16:21:03.716527863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 16:21:05.226465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146559874.mount: Deactivated successfully. Jan 29 16:21:05.767540 containerd[1510]: time="2025-01-29T16:21:05.767463346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:05.768199 containerd[1510]: time="2025-01-29T16:21:05.768156886Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 16:21:05.769229 containerd[1510]: time="2025-01-29T16:21:05.769195243Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:05.771125 containerd[1510]: time="2025-01-29T16:21:05.771067935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:05.771735 containerd[1510]: time="2025-01-29T16:21:05.771711602Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.05515229s" Jan 29 16:21:05.771774 containerd[1510]: time="2025-01-29T16:21:05.771736128Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 16:21:05.772285 containerd[1510]: time="2025-01-29T16:21:05.772135597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:21:06.293644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133887197.mount: Deactivated successfully. 
Jan 29 16:21:08.022374 containerd[1510]: time="2025-01-29T16:21:08.022310604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:08.023245 containerd[1510]: time="2025-01-29T16:21:08.023206324Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 16:21:08.024727 containerd[1510]: time="2025-01-29T16:21:08.024687581Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:08.027455 containerd[1510]: time="2025-01-29T16:21:08.027408963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:08.028388 containerd[1510]: time="2025-01-29T16:21:08.028351210Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.256192841s" Jan 29 16:21:08.028388 containerd[1510]: time="2025-01-29T16:21:08.028379523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 16:21:08.028952 containerd[1510]: time="2025-01-29T16:21:08.028917913Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:21:08.626790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949027052.mount: Deactivated successfully. 
Jan 29 16:21:08.631430 containerd[1510]: time="2025-01-29T16:21:08.631390699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:08.632113 containerd[1510]: time="2025-01-29T16:21:08.632045908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 16:21:08.633143 containerd[1510]: time="2025-01-29T16:21:08.633101006Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:08.635182 containerd[1510]: time="2025-01-29T16:21:08.635144007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:08.635816 containerd[1510]: time="2025-01-29T16:21:08.635773948Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 606.829495ms" Jan 29 16:21:08.635816 containerd[1510]: time="2025-01-29T16:21:08.635799836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 16:21:08.636371 containerd[1510]: time="2025-01-29T16:21:08.636334809Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 16:21:09.180773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241923071.mount: Deactivated successfully. Jan 29 16:21:11.126500 containerd[1510]: time="2025-01-29T16:21:11.126429927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:11.127306 containerd[1510]: time="2025-01-29T16:21:11.127241569Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 16:21:11.128509 containerd[1510]: time="2025-01-29T16:21:11.128470443Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:11.131688 containerd[1510]: time="2025-01-29T16:21:11.131643112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:11.132737 containerd[1510]: time="2025-01-29T16:21:11.132701977Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.49632005s" Jan 29 16:21:11.132783 containerd[1510]: time="2025-01-29T16:21:11.132735049Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 16:21:11.602555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
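The PullImage entries between 16:20:58 and 16:21:11 above fetch the standard Kubernetes v1.31.5 control-plane images through containerd: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns v1.11.1, pause 3.10 and etcd 3.5.15-0. The log does not show which client drove these pulls; pre-pulling the same images by hand would look roughly like this, assuming crictl is pointed at the containerd socket:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.31.5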
Jan 29 16:21:11.614767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:11.778415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:11.783442 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:11.827932 kubelet[2131]: E0129 16:21:11.827859 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:11.833553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:11.833894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:21:11.834675 systemd[1]: kubelet.service: Consumed 193ms CPU time, 96.1M memory peak. Jan 29 16:21:14.296330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:14.296514 systemd[1]: kubelet.service: Consumed 193ms CPU time, 96.1M memory peak. Jan 29 16:21:14.313759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:14.338176 systemd[1]: Reload requested from client PID 2147 ('systemctl') (unit session-7.scope)... Jan 29 16:21:14.338190 systemd[1]: Reloading... Jan 29 16:21:14.416598 zram_generator::config[2195]: No configuration found. Jan 29 16:21:15.047446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:21:15.150224 systemd[1]: Reloading finished in 811 ms. Jan 29 16:21:15.210539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:15.214415 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:21:15.215076 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:15.215381 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:21:15.215850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:15.215889 systemd[1]: kubelet.service: Consumed 135ms CPU time, 83.6M memory peak. Jan 29 16:21:15.218583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:15.363265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:15.367173 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:21:15.409755 kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:15.409755 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:21:15.409755 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:15.410198 kubelet[2242]: I0129 16:21:15.409794 2242 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:21:15.902727 kubelet[2242]: I0129 16:21:15.902660 2242 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:21:15.902727 kubelet[2242]: I0129 16:21:15.902712 2242 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:21:15.903039 kubelet[2242]: I0129 16:21:15.903013 2242 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:21:15.931848 kubelet[2242]: E0129 16:21:15.931781 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:15.933960 kubelet[2242]: I0129 16:21:15.933933 2242 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:21:15.942065 kubelet[2242]: E0129 16:21:15.941987 2242 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:21:15.942065 kubelet[2242]: I0129 16:21:15.942051 2242 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:21:15.948859 kubelet[2242]: I0129 16:21:15.948818 2242 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:21:15.950632 kubelet[2242]: I0129 16:21:15.950596 2242 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:21:15.950843 kubelet[2242]: I0129 16:21:15.950780 2242 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:21:15.951021 kubelet[2242]: I0129 16:21:15.950820 2242 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:21:15.951192 kubelet[2242]: I0129 16:21:15.951028 2242 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:21:15.951192 kubelet[2242]: I0129 16:21:15.951039 2242 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:21:15.951192 kubelet[2242]: I0129 16:21:15.951189 2242 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:21:16.021307 kubelet[2242]: I0129 16:21:16.012067 2242 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:21:16.021307 kubelet[2242]: I0129 16:21:16.012132 2242 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:21:16.021307 kubelet[2242]: I0129 16:21:16.012177 2242 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:21:16.021307 kubelet[2242]: I0129 16:21:16.012194 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:21:16.021307 kubelet[2242]: W0129 16:21:16.012994 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:16.021307 kubelet[2242]: E0129 16:21:16.013038 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:16.021307 kubelet[2242]: W0129 16:21:16.013063 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:16.021307 kubelet[2242]: E0129 16:21:16.013109 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:16.033465 kubelet[2242]: I0129 16:21:16.033427 2242 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:21:16.038914 kubelet[2242]: I0129 16:21:16.038894 2242 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:21:16.039713 kubelet[2242]: W0129 16:21:16.039680 2242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:21:16.040805 kubelet[2242]: I0129 16:21:16.040778 2242 server.go:1269] "Started kubelet" Jan 29 16:21:16.041203 kubelet[2242]: I0129 16:21:16.041065 2242 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:21:16.041236 kubelet[2242]: I0129 16:21:16.041187 2242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:21:16.041983 kubelet[2242]: I0129 16:21:16.041520 2242 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:21:16.042725 kubelet[2242]: I0129 16:21:16.042702 2242 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:21:16.043218 kubelet[2242]: I0129 16:21:16.043202 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:21:16.043502 kubelet[2242]: I0129 16:21:16.043484 2242 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:21:16.044181 kubelet[2242]: I0129 16:21:16.044166 2242 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:21:16.044660 kubelet[2242]: I0129 16:21:16.044643 2242 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:21:16.044703 kubelet[2242]: I0129 16:21:16.044684 2242 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:21:16.060138 kubelet[2242]: E0129 16:21:16.044262 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:16.060138 kubelet[2242]: E0129 16:21:16.059027 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms" Jan 29 16:21:16.060138 kubelet[2242]: I0129 16:21:16.059260 2242 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:21:16.060138 kubelet[2242]: 
I0129 16:21:16.059339 2242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:21:16.060349 kubelet[2242]: W0129 16:21:16.060242 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:16.060349 kubelet[2242]: E0129 16:21:16.060281 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:16.063586 kubelet[2242]: E0129 16:21:16.059965 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f364aca75528d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:21:16.040467085 +0000 UTC m=+0.666605776,LastTimestamp:2025-01-29 16:21:16.040467085 +0000 UTC m=+0.666605776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:21:16.063586 kubelet[2242]: I0129 16:21:16.062463 2242 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:21:16.064716 kubelet[2242]: E0129 16:21:16.064689 2242 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:21:16.079273 kubelet[2242]: I0129 16:21:16.079247 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:21:16.079512 kubelet[2242]: I0129 16:21:16.079485 2242 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:21:16.079512 kubelet[2242]: I0129 16:21:16.079501 2242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:21:16.079608 kubelet[2242]: I0129 16:21:16.079517 2242 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:21:16.080898 kubelet[2242]: I0129 16:21:16.080866 2242 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:21:16.081559 kubelet[2242]: I0129 16:21:16.080913 2242 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:21:16.081559 kubelet[2242]: I0129 16:21:16.080932 2242 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:21:16.081559 kubelet[2242]: E0129 16:21:16.080970 2242 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:21:16.081906 kubelet[2242]: W0129 16:21:16.081784 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:16.081906 kubelet[2242]: E0129 16:21:16.081815 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:16.159244 kubelet[2242]: E0129 16:21:16.159133 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:16.181467 kubelet[2242]: E0129 16:21:16.181416 2242 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:21:16.260107 kubelet[2242]: E0129 16:21:16.260049 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:16.260516 kubelet[2242]: E0129 16:21:16.260467 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms" Jan 29 16:21:16.361010 kubelet[2242]: E0129 16:21:16.360951 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:16.381705 kubelet[2242]: E0129 16:21:16.381676 2242 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:21:16.461597 kubelet[2242]: E0129 16:21:16.461420 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:16.561648 kubelet[2242]: E0129 16:21:16.561588 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:16.661720 kubelet[2242]: E0129 16:21:16.661665 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:16.661888 kubelet[2242]: E0129 16:21:16.661785 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" Jan 29 16:21:16.750477 kubelet[2242]: I0129 16:21:16.750198 2242 policy_none.go:49] "None policy: Start" Jan 29 16:21:16.751303 kubelet[2242]: I0129 16:21:16.751270 2242 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:21:16.751303 kubelet[2242]: I0129 16:21:16.751296 2242 state_mem.go:35] 
"Initializing new in-memory state store" Jan 29 16:21:16.760639 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:21:16.770120 kubelet[2242]: E0129 16:21:16.761997 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:16.773678 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:21:16.776863 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:21:16.781882 kubelet[2242]: E0129 16:21:16.781852 2242 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 16:21:16.784518 kubelet[2242]: I0129 16:21:16.784477 2242 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:21:16.784795 kubelet[2242]: I0129 16:21:16.784746 2242 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:21:16.784795 kubelet[2242]: I0129 16:21:16.784762 2242 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:21:16.785021 kubelet[2242]: I0129 16:21:16.784995 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:21:16.786187 kubelet[2242]: E0129 16:21:16.786104 2242 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 16:21:16.842815 kubelet[2242]: W0129 16:21:16.842755 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:16.842815 kubelet[2242]: E0129 16:21:16.842812 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:16.876875 kubelet[2242]: W0129 16:21:16.876823 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:16.876914 kubelet[2242]: E0129 16:21:16.876873 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:16.887139 kubelet[2242]: I0129 16:21:16.887110 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 16:21:16.887370 kubelet[2242]: E0129 16:21:16.887333 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 29 16:21:17.089399 kubelet[2242]: I0129 16:21:17.089274 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 16:21:17.089554 
kubelet[2242]: E0129 16:21:17.089502 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 29 16:21:17.407375 kubelet[2242]: W0129 16:21:17.407223 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:17.407375 kubelet[2242]: E0129 16:21:17.407269 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:17.462445 kubelet[2242]: E0129 16:21:17.462362 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="1.6s" Jan 29 16:21:17.491614 kubelet[2242]: I0129 16:21:17.491564 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 16:21:17.491868 kubelet[2242]: E0129 16:21:17.491832 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 29 16:21:17.554151 kubelet[2242]: W0129 16:21:17.554077 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:17.554151 kubelet[2242]: E0129 16:21:17.554120 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:17.590819 systemd[1]: Created slice kubepods-burstable-pod6a86e81240c5fd89d51b5adf10e9feae.slice - libcontainer container kubepods-burstable-pod6a86e81240c5fd89d51b5adf10e9feae.slice. Jan 29 16:21:17.611661 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 16:21:17.615004 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. 
Jan 29 16:21:17.653682 kubelet[2242]: I0129 16:21:17.653629 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a86e81240c5fd89d51b5adf10e9feae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a86e81240c5fd89d51b5adf10e9feae\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:21:17.653682 kubelet[2242]: I0129 16:21:17.653669 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a86e81240c5fd89d51b5adf10e9feae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a86e81240c5fd89d51b5adf10e9feae\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:21:17.653841 kubelet[2242]: I0129 16:21:17.653693 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:17.653841 kubelet[2242]: I0129 16:21:17.653713 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:21:17.653841 kubelet[2242]: I0129 16:21:17.653733 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a86e81240c5fd89d51b5adf10e9feae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6a86e81240c5fd89d51b5adf10e9feae\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:21:17.653841 kubelet[2242]: I0129 16:21:17.653752 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:17.653841 kubelet[2242]: I0129 16:21:17.653770 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:17.653992 kubelet[2242]: I0129 16:21:17.653807 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:17.653992 kubelet[2242]: I0129 16:21:17.653858 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:17.908992 kubelet[2242]: E0129 16:21:17.908943 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:17.909777 containerd[1510]: time="2025-01-29T16:21:17.909722689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a86e81240c5fd89d51b5adf10e9feae,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:17.914022 kubelet[2242]: E0129 16:21:17.913987 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:17.914431 containerd[1510]: time="2025-01-29T16:21:17.914393918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:17.917656 kubelet[2242]: E0129 16:21:17.917620 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:17.917990 containerd[1510]: time="2025-01-29T16:21:17.917953792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:17.968461 kubelet[2242]: E0129 16:21:17.968422 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:18.293480 kubelet[2242]: I0129 16:21:18.293367 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 16:21:18.293693 kubelet[2242]: E0129 16:21:18.293667 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Jan 29 16:21:18.452263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107385979.mount: Deactivated successfully. 
Jan 29 16:21:18.458630 containerd[1510]: time="2025-01-29T16:21:18.458536435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:18.461788 containerd[1510]: time="2025-01-29T16:21:18.461706258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 16:21:18.462871 containerd[1510]: time="2025-01-29T16:21:18.462840194Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:18.464772 containerd[1510]: time="2025-01-29T16:21:18.464736710Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:18.465401 containerd[1510]: time="2025-01-29T16:21:18.465350852Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:21:18.466277 containerd[1510]: time="2025-01-29T16:21:18.466234789Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:18.467333 containerd[1510]: time="2025-01-29T16:21:18.467264510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:21:18.468113 containerd[1510]: time="2025-01-29T16:21:18.468083315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:21:18.470301 containerd[1510]: time="2025-01-29T16:21:18.470271157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.775078ms" Jan 29 16:21:18.471043 containerd[1510]: time="2025-01-29T16:21:18.471000234Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.169423ms" Jan 29 16:21:18.475434 containerd[1510]: time="2025-01-29T16:21:18.475394093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 557.358236ms" Jan 29 16:21:18.769118 containerd[1510]: time="2025-01-29T16:21:18.768879288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:18.769478 containerd[1510]: time="2025-01-29T16:21:18.768985477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:18.769478 containerd[1510]: time="2025-01-29T16:21:18.769014190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:18.769478 containerd[1510]: time="2025-01-29T16:21:18.769131320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:18.771227 containerd[1510]: time="2025-01-29T16:21:18.768748693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:18.771227 containerd[1510]: time="2025-01-29T16:21:18.771159673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:18.771227 containerd[1510]: time="2025-01-29T16:21:18.771188527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:18.771558 containerd[1510]: time="2025-01-29T16:21:18.771400645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:18.777950 containerd[1510]: time="2025-01-29T16:21:18.777734611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:18.777950 containerd[1510]: time="2025-01-29T16:21:18.777787100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:18.777950 containerd[1510]: time="2025-01-29T16:21:18.777798261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:18.777950 containerd[1510]: time="2025-01-29T16:21:18.777874904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:18.842901 systemd[1]: Started cri-containerd-49b258516870487c4e665ea823d9da78634c18b062dc124307f1a97e0aebc7bd.scope - libcontainer container 49b258516870487c4e665ea823d9da78634c18b062dc124307f1a97e0aebc7bd. Jan 29 16:21:18.848049 systemd[1]: Started cri-containerd-56bf4c89c0afdc472bc950ac3bdfb73c353188b9681c8645ed22a695c87ee106.scope - libcontainer container 56bf4c89c0afdc472bc950ac3bdfb73c353188b9681c8645ed22a695c87ee106. Jan 29 16:21:18.871701 systemd[1]: Started cri-containerd-0ba9e30d1348b38bf139c7f285e365701f364a0d546b178353934ac7204c1e61.scope - libcontainer container 0ba9e30d1348b38bf139c7f285e365701f364a0d546b178353934ac7204c1e61. 
Jan 29 16:21:18.896264 kubelet[2242]: W0129 16:21:18.896186 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:18.896600 kubelet[2242]: E0129 16:21:18.896278 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:18.918455 containerd[1510]: time="2025-01-29T16:21:18.917727588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a86e81240c5fd89d51b5adf10e9feae,Namespace:kube-system,Attempt:0,} returns sandbox id \"56bf4c89c0afdc472bc950ac3bdfb73c353188b9681c8645ed22a695c87ee106\"" Jan 29 16:21:18.918455 containerd[1510]: time="2025-01-29T16:21:18.918178624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"49b258516870487c4e665ea823d9da78634c18b062dc124307f1a97e0aebc7bd\"" Jan 29 16:21:18.919115 kubelet[2242]: E0129 16:21:18.919067 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:18.919601 kubelet[2242]: E0129 16:21:18.919575 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:18.923523 containerd[1510]: time="2025-01-29T16:21:18.922381745Z" level=info msg="CreateContainer within sandbox \"49b258516870487c4e665ea823d9da78634c18b062dc124307f1a97e0aebc7bd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:21:18.923611 containerd[1510]: time="2025-01-29T16:21:18.922925083Z" level=info msg="CreateContainer within sandbox \"56bf4c89c0afdc472bc950ac3bdfb73c353188b9681c8645ed22a695c87ee106\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:21:18.923766 containerd[1510]: time="2025-01-29T16:21:18.923411746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ba9e30d1348b38bf139c7f285e365701f364a0d546b178353934ac7204c1e61\"" Jan 29 16:21:18.925162 kubelet[2242]: E0129 16:21:18.924968 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:18.926894 containerd[1510]: time="2025-01-29T16:21:18.926856815Z" level=info msg="CreateContainer within sandbox \"0ba9e30d1348b38bf139c7f285e365701f364a0d546b178353934ac7204c1e61\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:21:19.063289 kubelet[2242]: E0129 16:21:19.063152 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="3.2s" Jan 29 16:21:19.312999 
kubelet[2242]: W0129 16:21:19.312903 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Jan 29 16:21:19.312999 kubelet[2242]: E0129 16:21:19.312989 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:21:19.425187 containerd[1510]: time="2025-01-29T16:21:19.425061449Z" level=info msg="CreateContainer within sandbox \"56bf4c89c0afdc472bc950ac3bdfb73c353188b9681c8645ed22a695c87ee106\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"983ac473f8904c2fd8bad2147beeb07b0c3fad2b3d27517497b88c22e34726d7\"" Jan 29 16:21:19.425918 containerd[1510]: time="2025-01-29T16:21:19.425869233Z" level=info msg="StartContainer for \"983ac473f8904c2fd8bad2147beeb07b0c3fad2b3d27517497b88c22e34726d7\"" Jan 29 16:21:19.435563 containerd[1510]: time="2025-01-29T16:21:19.435513166Z" level=info msg="CreateContainer within sandbox \"49b258516870487c4e665ea823d9da78634c18b062dc124307f1a97e0aebc7bd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"80f1532ef1d94ee6bd6449ba1b3c1725d517e3e1f9f8d1e8ad855f18c4fd9f31\"" Jan 29 16:21:19.436049 containerd[1510]: time="2025-01-29T16:21:19.436014776Z" level=info msg="StartContainer for \"80f1532ef1d94ee6bd6449ba1b3c1725d517e3e1f9f8d1e8ad855f18c4fd9f31\"" Jan 29 16:21:19.439955 containerd[1510]: time="2025-01-29T16:21:19.439925309Z" level=info msg="CreateContainer within sandbox \"0ba9e30d1348b38bf139c7f285e365701f364a0d546b178353934ac7204c1e61\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4c3d725e718d2f900b986ae959852def276635fbc6a276eefecae68485d0b910\"" Jan 29 16:21:19.440493 containerd[1510]: time="2025-01-29T16:21:19.440472915Z" level=info msg="StartContainer for \"4c3d725e718d2f900b986ae959852def276635fbc6a276eefecae68485d0b910\"" Jan 29 16:21:19.471781 systemd[1]: Started cri-containerd-983ac473f8904c2fd8bad2147beeb07b0c3fad2b3d27517497b88c22e34726d7.scope - libcontainer container 983ac473f8904c2fd8bad2147beeb07b0c3fad2b3d27517497b88c22e34726d7. Jan 29 16:21:19.554743 systemd[1]: Started cri-containerd-80f1532ef1d94ee6bd6449ba1b3c1725d517e3e1f9f8d1e8ad855f18c4fd9f31.scope - libcontainer container 80f1532ef1d94ee6bd6449ba1b3c1725d517e3e1f9f8d1e8ad855f18c4fd9f31. Jan 29 16:21:19.558562 systemd[1]: Started cri-containerd-4c3d725e718d2f900b986ae959852def276635fbc6a276eefecae68485d0b910.scope - libcontainer container 4c3d725e718d2f900b986ae959852def276635fbc6a276eefecae68485d0b910. 
Jan 29 16:21:19.598473 containerd[1510]: time="2025-01-29T16:21:19.597845509Z" level=info msg="StartContainer for \"983ac473f8904c2fd8bad2147beeb07b0c3fad2b3d27517497b88c22e34726d7\" returns successfully" Jan 29 16:21:19.613113 containerd[1510]: time="2025-01-29T16:21:19.612952244Z" level=info msg="StartContainer for \"80f1532ef1d94ee6bd6449ba1b3c1725d517e3e1f9f8d1e8ad855f18c4fd9f31\" returns successfully" Jan 29 16:21:19.617034 containerd[1510]: time="2025-01-29T16:21:19.617002017Z" level=info msg="StartContainer for \"4c3d725e718d2f900b986ae959852def276635fbc6a276eefecae68485d0b910\" returns successfully" Jan 29 16:21:19.895900 kubelet[2242]: I0129 16:21:19.895760 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 16:21:20.095023 kubelet[2242]: E0129 16:21:20.094932 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:20.099155 kubelet[2242]: E0129 16:21:20.097989 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:20.099785 kubelet[2242]: E0129 16:21:20.099767 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:21.102948 kubelet[2242]: E0129 16:21:21.102908 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:21.103411 kubelet[2242]: E0129 16:21:21.103304 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:21.103875 kubelet[2242]: E0129 16:21:21.103817 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:21.270659 kubelet[2242]: E0129 16:21:21.269173 2242 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f364aca75528d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:21:16.040467085 +0000 UTC m=+0.666605776,LastTimestamp:2025-01-29 16:21:16.040467085 +0000 UTC m=+0.666605776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:21:21.349414 kubelet[2242]: E0129 16:21:21.349287 2242 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f364acbe6c3d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 16:21:16.064678872 +0000 UTC 
m=+0.690817564,LastTimestamp:2025-01-29 16:21:16.064678872 +0000 UTC m=+0.690817564,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 16:21:21.350516 kubelet[2242]: I0129 16:21:21.349380 2242 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 16:21:21.350516 kubelet[2242]: E0129 16:21:21.349548 2242 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 16:21:21.358730 kubelet[2242]: E0129 16:21:21.358631 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:21.459078 kubelet[2242]: E0129 16:21:21.459023 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:21.559699 kubelet[2242]: E0129 16:21:21.559646 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:21.660249 kubelet[2242]: E0129 16:21:21.660102 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:21.760683 kubelet[2242]: E0129 16:21:21.760613 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:21.860830 kubelet[2242]: E0129 16:21:21.860791 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:21.961242 kubelet[2242]: E0129 16:21:21.961049 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.062165 kubelet[2242]: E0129 16:21:22.062085 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.103974 kubelet[2242]: E0129 16:21:22.103940 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:22.103974 kubelet[2242]: E0129 16:21:22.103982 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:22.162266 kubelet[2242]: E0129 16:21:22.162209 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.262675 kubelet[2242]: E0129 16:21:22.262513 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.362919 kubelet[2242]: E0129 16:21:22.362877 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.464712 kubelet[2242]: E0129 16:21:22.464655 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.564959 kubelet[2242]: E0129 16:21:22.564822 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.665442 kubelet[2242]: E0129 16:21:22.665371 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.766015 kubelet[2242]: E0129 16:21:22.765951 2242 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.866873 kubelet[2242]: E0129 16:21:22.866737 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:22.967642 kubelet[2242]: E0129 16:21:22.967578 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:23.067990 kubelet[2242]: E0129 16:21:23.067890 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:23.168814 kubelet[2242]: E0129 16:21:23.168677 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:23.269188 kubelet[2242]: E0129 16:21:23.269147 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:23.486270 kubelet[2242]: E0129 16:21:23.486130 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:23.502875 systemd[1]: Reload requested from client PID 2515 ('systemctl') (unit session-7.scope)... Jan 29 16:21:23.502890 systemd[1]: Reloading... Jan 29 16:21:23.587683 kubelet[2242]: E0129 16:21:23.587643 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:23.592319 zram_generator::config[2560]: No configuration found. Jan 29 16:21:23.687947 kubelet[2242]: E0129 16:21:23.687920 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:23.696595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:21:23.788330 kubelet[2242]: E0129 16:21:23.788244 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:23.811037 systemd[1]: Reloading finished in 307 ms. Jan 29 16:21:23.836305 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:23.860977 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:21:23.861225 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:23.861264 systemd[1]: kubelet.service: Consumed 1.391s CPU time, 119.2M memory peak. Jan 29 16:21:23.869790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:24.033776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:24.037807 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:21:24.147477 kubelet[2604]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:24.147477 kubelet[2604]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 16:21:24.147477 kubelet[2604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:21:24.147477 kubelet[2604]: I0129 16:21:24.147442 2604 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:21:24.154490 kubelet[2604]: I0129 16:21:24.154464 2604 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:21:24.154490 kubelet[2604]: I0129 16:21:24.154483 2604 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:21:24.154690 kubelet[2604]: I0129 16:21:24.154673 2604 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:21:24.155747 kubelet[2604]: I0129 16:21:24.155729 2604 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:21:24.157271 kubelet[2604]: I0129 16:21:24.157250 2604 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:21:24.160647 kubelet[2604]: E0129 16:21:24.160599 2604 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:21:24.160647 kubelet[2604]: I0129 16:21:24.160645 2604 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:21:24.166143 kubelet[2604]: I0129 16:21:24.166125 2604 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:21:24.166251 kubelet[2604]: I0129 16:21:24.166239 2604 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:21:24.166425 kubelet[2604]: I0129 16:21:24.166400 2604 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:21:24.166583 kubelet[2604]: I0129 16:21:24.166426 2604 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:21:24.166671 kubelet[2604]: I0129 16:21:24.166582 2604 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:21:24.166671 kubelet[2604]: I0129 16:21:24.166591 2604 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:21:24.166671 kubelet[2604]: I0129 16:21:24.166619 2604 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:21:24.166838 kubelet[2604]: I0129 16:21:24.166728 2604 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:21:24.166838 kubelet[2604]: I0129 16:21:24.166738 2604 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:21:24.166838 kubelet[2604]: I0129 16:21:24.166798 2604 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:21:24.166838 kubelet[2604]: I0129 16:21:24.166808 2604 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:21:24.167622 kubelet[2604]: I0129 16:21:24.167561 2604 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:21:24.168048 kubelet[2604]: I0129 16:21:24.168025 2604 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:21:24.169913 kubelet[2604]: I0129 16:21:24.168496 2604 server.go:1269] "Started kubelet" Jan 29 16:21:24.169913 kubelet[2604]: I0129 16:21:24.168624 2604 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:21:24.169913 kubelet[2604]: I0129 
16:21:24.168816 2604 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:21:24.169913 kubelet[2604]: I0129 16:21:24.169104 2604 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:21:24.169913 kubelet[2604]: I0129 16:21:24.169416 2604 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:21:24.172087 kubelet[2604]: I0129 16:21:24.171052 2604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:21:24.172087 kubelet[2604]: I0129 16:21:24.171422 2604 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:21:24.175790 kubelet[2604]: I0129 16:21:24.175753 2604 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:21:24.175935 kubelet[2604]: I0129 16:21:24.175908 2604 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:21:24.176147 kubelet[2604]: I0129 16:21:24.176121 2604 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:21:24.176833 kubelet[2604]: E0129 16:21:24.176794 2604 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 16:21:24.180502 kubelet[2604]: I0129 16:21:24.179731 2604 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:21:24.180502 kubelet[2604]: I0129 16:21:24.179754 2604 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:21:24.180502 kubelet[2604]: I0129 16:21:24.179857 2604 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:21:24.182220 kubelet[2604]: E0129 16:21:24.182140 2604 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:21:24.183835 kubelet[2604]: I0129 16:21:24.183691 2604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:21:24.185673 kubelet[2604]: I0129 16:21:24.185646 2604 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:21:24.185734 kubelet[2604]: I0129 16:21:24.185685 2604 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:21:24.185734 kubelet[2604]: I0129 16:21:24.185702 2604 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:21:24.185781 kubelet[2604]: E0129 16:21:24.185741 2604 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:21:24.218106 kubelet[2604]: I0129 16:21:24.218077 2604 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:21:24.218106 kubelet[2604]: I0129 16:21:24.218096 2604 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:21:24.218106 kubelet[2604]: I0129 16:21:24.218113 2604 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:21:24.218287 kubelet[2604]: I0129 16:21:24.218267 2604 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:21:24.218335 kubelet[2604]: I0129 16:21:24.218282 2604 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:21:24.218335 kubelet[2604]: I0129 16:21:24.218334 2604 policy_none.go:49] "None policy: Start" Jan 29 16:21:24.218848 kubelet[2604]: I0129 16:21:24.218831 2604 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:21:24.218885 kubelet[2604]: I0129 16:21:24.218852 2604 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:21:24.218990 kubelet[2604]: I0129 16:21:24.218976 2604 state_mem.go:75] "Updated machine memory state" Jan 29 16:21:24.223447 kubelet[2604]: I0129 16:21:24.223310 2604 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:21:24.223511 kubelet[2604]: I0129 16:21:24.223491 2604 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:21:24.223540 kubelet[2604]: I0129 16:21:24.223513 2604 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:21:24.223696 kubelet[2604]: I0129 16:21:24.223679 2604 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:21:24.306449 sudo[2640]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 16:21:24.306829 sudo[2640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 16:21:24.328826 kubelet[2604]: I0129 16:21:24.328789 2604 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 16:21:24.335929 kubelet[2604]: I0129 16:21:24.335169 2604 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 16:21:24.335929 kubelet[2604]: I0129 16:21:24.335244 2604 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 16:21:24.377540 kubelet[2604]: I0129 16:21:24.377494 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a86e81240c5fd89d51b5adf10e9feae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a86e81240c5fd89d51b5adf10e9feae\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:21:24.377540 kubelet[2604]: I0129 16:21:24.377537 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:24.377695 kubelet[2604]: I0129 16:21:24.377559 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 16:21:24.377695 kubelet[2604]: I0129 16:21:24.377588 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a86e81240c5fd89d51b5adf10e9feae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a86e81240c5fd89d51b5adf10e9feae\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:21:24.377695 kubelet[2604]: I0129 16:21:24.377632 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a86e81240c5fd89d51b5adf10e9feae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6a86e81240c5fd89d51b5adf10e9feae\") " pod="kube-system/kube-apiserver-localhost" Jan 29 16:21:24.377695 kubelet[2604]: I0129 16:21:24.377676 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:24.377820 kubelet[2604]: I0129 16:21:24.377698 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:24.377820 kubelet[2604]: I0129 16:21:24.377716 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:24.377820 kubelet[2604]: I0129 16:21:24.377736 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 16:21:24.596394 kubelet[2604]: E0129 16:21:24.596043 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:24.597209 kubelet[2604]: E0129 16:21:24.597023 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:24.598016 kubelet[2604]: E0129 16:21:24.597169 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:25.073702 sudo[2640]: pam_unix(sudo:session): session closed for user root Jan 29 16:21:25.167365 kubelet[2604]: I0129 16:21:25.167330 2604 apiserver.go:52] "Watching apiserver" Jan 29 16:21:25.176693 kubelet[2604]: I0129 16:21:25.176665 2604 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:21:25.205284 kubelet[2604]: E0129 16:21:25.204654 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:25.205284 kubelet[2604]: E0129 16:21:25.205047 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:25.217125 kubelet[2604]: E0129 16:21:25.217063 2604 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 16:21:25.218110 kubelet[2604]: E0129 16:21:25.218089 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:25.225214 kubelet[2604]: I0129 16:21:25.225138 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.225097525 podStartE2EDuration="1.225097525s" podCreationTimestamp="2025-01-29 16:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:25.225078558 +0000 UTC m=+1.183121507" watchObservedRunningTime="2025-01-29 16:21:25.225097525 +0000 UTC m=+1.183140444" Jan 29 16:21:25.232195 kubelet[2604]: I0129 16:21:25.232108 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.23208425 podStartE2EDuration="1.23208425s" podCreationTimestamp="2025-01-29 16:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:25.23149602 +0000 UTC m=+1.189538949" watchObservedRunningTime="2025-01-29 16:21:25.23208425 +0000 UTC m=+1.190127170" Jan 29 16:21:25.247428 kubelet[2604]: I0129 16:21:25.246879 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.246863959 podStartE2EDuration="1.246863959s" podCreationTimestamp="2025-01-29 16:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:25.240485692 +0000 UTC m=+1.198528621" watchObservedRunningTime="2025-01-29 16:21:25.246863959 +0000 UTC m=+1.204906878" Jan 29 16:21:26.204961 kubelet[2604]: E0129 16:21:26.204899 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:26.205391 kubelet[2604]: E0129 16:21:26.205163 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:26.687031 sudo[1694]: 
pam_unix(sudo:session): session closed for user root Jan 29 16:21:26.688379 sshd[1693]: Connection closed by 10.0.0.1 port 36858 Jan 29 16:21:26.688988 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:26.692812 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:36858.service: Deactivated successfully. Jan 29 16:21:26.695274 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:21:26.695544 systemd[1]: session-7.scope: Consumed 5.878s CPU time, 253.5M memory peak. Jan 29 16:21:26.697115 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:21:26.698018 systemd-logind[1493]: Removed session 7. Jan 29 16:21:29.973110 kubelet[2604]: I0129 16:21:29.973073 2604 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:21:29.973697 kubelet[2604]: I0129 16:21:29.973616 2604 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:21:29.973730 containerd[1510]: time="2025-01-29T16:21:29.973433181Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:21:30.231912 kubelet[2604]: E0129 16:21:30.231801 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:31.078919 systemd[1]: Created slice kubepods-besteffort-podf84a92d6_be2c_49e1_9959_104ceb273be0.slice - libcontainer container kubepods-besteffort-podf84a92d6_be2c_49e1_9959_104ceb273be0.slice. Jan 29 16:21:31.094752 systemd[1]: Created slice kubepods-burstable-podf173e8f1_728d_4219_8446_e38d80e68400.slice - libcontainer container kubepods-burstable-podf173e8f1_728d_4219_8446_e38d80e68400.slice. 
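The two "Created slice" entries just above show how the per-pod cgroup slices are named under the systemd cgroup driver reported in the container manager NodeConfig earlier: the pod's QoS class is folded into the slice name and the dashes in the pod UID become underscores. A minimal sketch of that naming pattern as it appears in this log (illustrative only, not kubelet's actual escaping code):

```go
// sliceName reproduces the pod slice naming visible in the log for the
// systemd cgroup driver: dashes in the pod UID become underscores and the
// QoS class is folded into the name. Sketch of the observed convention only.
package main

import (
	"fmt"
	"strings"
)

func sliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UID of the kube-proxy pod created in the log above.
	fmt.Println(sliceName("besteffort", "f84a92d6-be2c-49e1-9959-104ceb273be0"))
	// => kubepods-besteffort-podf84a92d6_be2c_49e1_9959_104ceb273be0.slice
}
```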
Jan 29 16:21:31.119541 kubelet[2604]: I0129 16:21:31.119494 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-xtables-lock\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.119541 kubelet[2604]: I0129 16:21:31.119528 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-host-proc-sys-net\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.119541 kubelet[2604]: I0129 16:21:31.119548 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cni-path\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120182 kubelet[2604]: I0129 16:21:31.119583 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f84a92d6-be2c-49e1-9959-104ceb273be0-xtables-lock\") pod \"kube-proxy-jnzpb\" (UID: \"f84a92d6-be2c-49e1-9959-104ceb273be0\") " pod="kube-system/kube-proxy-jnzpb" Jan 29 16:21:31.120182 kubelet[2604]: I0129 16:21:31.119599 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cilium-run\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120182 kubelet[2604]: I0129 16:21:31.119614 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-bpf-maps\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120182 kubelet[2604]: I0129 16:21:31.119700 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-hostproc\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120182 kubelet[2604]: I0129 16:21:31.119788 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cilium-cgroup\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120182 kubelet[2604]: I0129 16:21:31.119815 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-host-proc-sys-kernel\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120382 kubelet[2604]: I0129 16:21:31.119891 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f84a92d6-be2c-49e1-9959-104ceb273be0-lib-modules\") pod \"kube-proxy-jnzpb\" (UID: \"f84a92d6-be2c-49e1-9959-104ceb273be0\") " pod="kube-system/kube-proxy-jnzpb" Jan 29 16:21:31.120382 kubelet[2604]: I0129 16:21:31.119919 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2gn\" (UniqueName: \"kubernetes.io/projected/f84a92d6-be2c-49e1-9959-104ceb273be0-kube-api-access-zz2gn\") pod \"kube-proxy-jnzpb\" (UID: \"f84a92d6-be2c-49e1-9959-104ceb273be0\") " pod="kube-system/kube-proxy-jnzpb" Jan 29 16:21:31.120382 kubelet[2604]: I0129 16:21:31.119959 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd9jl\" (UniqueName: \"kubernetes.io/projected/f173e8f1-728d-4219-8446-e38d80e68400-kube-api-access-wd9jl\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120382 kubelet[2604]: I0129 16:21:31.119980 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-lib-modules\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120382 kubelet[2604]: I0129 16:21:31.120005 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f173e8f1-728d-4219-8446-e38d80e68400-clustermesh-secrets\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120493 kubelet[2604]: I0129 16:21:31.120028 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f173e8f1-728d-4219-8446-e38d80e68400-cilium-config-path\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120493 kubelet[2604]: I0129 16:21:31.120054 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f84a92d6-be2c-49e1-9959-104ceb273be0-kube-proxy\") pod \"kube-proxy-jnzpb\" (UID: \"f84a92d6-be2c-49e1-9959-104ceb273be0\") " pod="kube-system/kube-proxy-jnzpb" Jan 29 16:21:31.120493 kubelet[2604]: I0129 16:21:31.120075 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-etc-cni-netd\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.120493 kubelet[2604]: I0129 16:21:31.120095 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f173e8f1-728d-4219-8446-e38d80e68400-hubble-tls\") pod \"cilium-9445j\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " pod="kube-system/cilium-9445j" Jan 29 16:21:31.124031 systemd[1]: Created slice kubepods-besteffort-pod9457d0b1_f09a_4e10_8110_b61d3716790e.slice - libcontainer container kubepods-besteffort-pod9457d0b1_f09a_4e10_8110_b61d3716790e.slice. 
Jan 29 16:21:31.212142 kubelet[2604]: E0129 16:21:31.212106 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:31.221313 kubelet[2604]: I0129 16:21:31.221251 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstn9\" (UniqueName: \"kubernetes.io/projected/9457d0b1-f09a-4e10-8110-b61d3716790e-kube-api-access-zstn9\") pod \"cilium-operator-5d85765b45-wgjz5\" (UID: \"9457d0b1-f09a-4e10-8110-b61d3716790e\") " pod="kube-system/cilium-operator-5d85765b45-wgjz5" Jan 29 16:21:31.221514 kubelet[2604]: I0129 16:21:31.221354 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9457d0b1-f09a-4e10-8110-b61d3716790e-cilium-config-path\") pod \"cilium-operator-5d85765b45-wgjz5\" (UID: \"9457d0b1-f09a-4e10-8110-b61d3716790e\") " pod="kube-system/cilium-operator-5d85765b45-wgjz5" Jan 29 16:21:31.388409 kubelet[2604]: E0129 16:21:31.388255 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:31.388888 containerd[1510]: time="2025-01-29T16:21:31.388830413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnzpb,Uid:f84a92d6-be2c-49e1-9959-104ceb273be0,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:31.401343 kubelet[2604]: E0129 16:21:31.400967 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:31.402294 containerd[1510]: time="2025-01-29T16:21:31.402265405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9445j,Uid:f173e8f1-728d-4219-8446-e38d80e68400,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:31.419090 containerd[1510]: time="2025-01-29T16:21:31.418816977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:31.419090 containerd[1510]: time="2025-01-29T16:21:31.418891688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:31.419090 containerd[1510]: time="2025-01-29T16:21:31.418912298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:31.419090 containerd[1510]: time="2025-01-29T16:21:31.419024650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:31.427009 kubelet[2604]: E0129 16:21:31.426973 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:31.428376 containerd[1510]: time="2025-01-29T16:21:31.428337788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wgjz5,Uid:9457d0b1-f09a-4e10-8110-b61d3716790e,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:31.429409 containerd[1510]: time="2025-01-29T16:21:31.429012808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:31.429409 containerd[1510]: time="2025-01-29T16:21:31.429076348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:31.429409 containerd[1510]: time="2025-01-29T16:21:31.429089633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:31.429409 containerd[1510]: time="2025-01-29T16:21:31.429182429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:31.441353 systemd[1]: Started cri-containerd-d5b8d3565b76a3ff9e3f79b7d73bc12ba61f6a603423dedf06f0029a942fc916.scope - libcontainer container d5b8d3565b76a3ff9e3f79b7d73bc12ba61f6a603423dedf06f0029a942fc916. Jan 29 16:21:31.446094 systemd[1]: Started cri-containerd-b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f.scope - libcontainer container b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f. Jan 29 16:21:31.469394 containerd[1510]: time="2025-01-29T16:21:31.469287907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:31.469394 containerd[1510]: time="2025-01-29T16:21:31.469358381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:31.469797 containerd[1510]: time="2025-01-29T16:21:31.469752518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:31.469930 containerd[1510]: time="2025-01-29T16:21:31.469888867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:31.471771 containerd[1510]: time="2025-01-29T16:21:31.471737141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnzpb,Uid:f84a92d6-be2c-49e1-9959-104ceb273be0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5b8d3565b76a3ff9e3f79b7d73bc12ba61f6a603423dedf06f0029a942fc916\"" Jan 29 16:21:31.473680 kubelet[2604]: E0129 16:21:31.473050 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:31.475826 containerd[1510]: time="2025-01-29T16:21:31.475760439Z" level=info msg="CreateContainer within sandbox \"d5b8d3565b76a3ff9e3f79b7d73bc12ba61f6a603423dedf06f0029a942fc916\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:21:31.492909 systemd[1]: Started cri-containerd-31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6.scope - libcontainer container 31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6. 
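The dns.go:153 errors that keep recurring mean the node's /etc/resolv.conf lists more nameservers than the kubelet will pass through to pods; the applied line in the log keeps exactly three (1.1.1.1 1.0.0.1 8.8.8.8) and the rest are dropped. A minimal sketch of that check, assuming the usual three-nameserver cap that matches the behaviour shown here:

```go
// Reads /etc/resolv.conf and reports which nameservers fall outside the
// three-entry limit assumed from the "Nameserver limits exceeded" errors
// in the log above. Sketch only; kubelet's real validation lives in dns.go.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // limit assumed from the applied line in the log

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("all %d nameservers within the limit\n", len(servers))
	}
}
```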
Jan 29 16:21:31.493235 containerd[1510]: time="2025-01-29T16:21:31.493196937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9445j,Uid:f173e8f1-728d-4219-8446-e38d80e68400,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\"" Jan 29 16:21:31.494003 kubelet[2604]: E0129 16:21:31.493981 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:31.495847 containerd[1510]: time="2025-01-29T16:21:31.495811785Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:21:31.498108 containerd[1510]: time="2025-01-29T16:21:31.498064145Z" level=info msg="CreateContainer within sandbox \"d5b8d3565b76a3ff9e3f79b7d73bc12ba61f6a603423dedf06f0029a942fc916\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a3d47d11b7d6e0617d4082c196764e114145b6ad12e3dce7517fbc29e8b9ded\"" Jan 29 16:21:31.499251 containerd[1510]: time="2025-01-29T16:21:31.499220938Z" level=info msg="StartContainer for \"2a3d47d11b7d6e0617d4082c196764e114145b6ad12e3dce7517fbc29e8b9ded\"" Jan 29 16:21:31.529711 systemd[1]: Started cri-containerd-2a3d47d11b7d6e0617d4082c196764e114145b6ad12e3dce7517fbc29e8b9ded.scope - libcontainer container 2a3d47d11b7d6e0617d4082c196764e114145b6ad12e3dce7517fbc29e8b9ded. Jan 29 16:21:31.541068 containerd[1510]: time="2025-01-29T16:21:31.540993186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wgjz5,Uid:9457d0b1-f09a-4e10-8110-b61d3716790e,Namespace:kube-system,Attempt:0,} returns sandbox id \"31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6\"" Jan 29 16:21:31.542745 kubelet[2604]: E0129 16:21:31.541739 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:31.567364 containerd[1510]: time="2025-01-29T16:21:31.567316715Z" level=info msg="StartContainer for \"2a3d47d11b7d6e0617d4082c196764e114145b6ad12e3dce7517fbc29e8b9ded\" returns successfully" Jan 29 16:21:32.215728 kubelet[2604]: E0129 16:21:32.215681 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:32.225081 kubelet[2604]: I0129 16:21:32.225024 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jnzpb" podStartSLOduration=1.225008494 podStartE2EDuration="1.225008494s" podCreationTimestamp="2025-01-29 16:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:32.223880086 +0000 UTC m=+8.181923005" watchObservedRunningTime="2025-01-29 16:21:32.225008494 +0000 UTC m=+8.183051413" Jan 29 16:21:32.686243 kubelet[2604]: E0129 16:21:32.685793 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:33.217548 kubelet[2604]: E0129 16:21:33.217523 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 
16:21:33.468772 update_engine[1497]: I20250129 16:21:33.468611 1497 update_attempter.cc:509] Updating boot flags... Jan 29 16:21:33.498300 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2981) Jan 29 16:21:33.548595 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2980) Jan 29 16:21:33.583668 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2980) Jan 29 16:21:35.350845 kubelet[2604]: E0129 16:21:35.350813 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:40.132479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433919601.mount: Deactivated successfully. Jan 29 16:21:42.330881 containerd[1510]: time="2025-01-29T16:21:42.330822331Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:42.331552 containerd[1510]: time="2025-01-29T16:21:42.331510308Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 16:21:42.332822 containerd[1510]: time="2025-01-29T16:21:42.332793277Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:42.334553 containerd[1510]: time="2025-01-29T16:21:42.334520904Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.838668993s" Jan 29 16:21:42.334622 containerd[1510]: time="2025-01-29T16:21:42.334555169Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 16:21:42.340168 containerd[1510]: time="2025-01-29T16:21:42.340138996Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:21:42.352138 containerd[1510]: time="2025-01-29T16:21:42.352094716Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:21:42.366404 containerd[1510]: time="2025-01-29T16:21:42.366358678Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\"" Jan 29 16:21:42.369054 containerd[1510]: time="2025-01-29T16:21:42.368919826Z" level=info msg="StartContainer for \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\"" Jan 29 16:21:42.402748 systemd[1]: Started cri-containerd-1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2.scope - 
libcontainer container 1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2. Jan 29 16:21:42.431204 containerd[1510]: time="2025-01-29T16:21:42.431166019Z" level=info msg="StartContainer for \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\" returns successfully" Jan 29 16:21:42.456106 systemd[1]: cri-containerd-1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2.scope: Deactivated successfully. Jan 29 16:21:42.456515 systemd[1]: cri-containerd-1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2.scope: Consumed 27ms CPU time, 6.7M memory peak, 3.2M written to disk. Jan 29 16:21:42.653695 containerd[1510]: time="2025-01-29T16:21:42.653548908Z" level=info msg="shim disconnected" id=1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2 namespace=k8s.io Jan 29 16:21:42.653695 containerd[1510]: time="2025-01-29T16:21:42.653620153Z" level=warning msg="cleaning up after shim disconnected" id=1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2 namespace=k8s.io Jan 29 16:21:42.653695 containerd[1510]: time="2025-01-29T16:21:42.653631354Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:21:43.236475 kubelet[2604]: E0129 16:21:43.236443 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:43.239948 containerd[1510]: time="2025-01-29T16:21:43.239909761Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:21:43.253786 containerd[1510]: time="2025-01-29T16:21:43.253732382Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\"" Jan 29 16:21:43.255176 containerd[1510]: time="2025-01-29T16:21:43.255142660Z" level=info msg="StartContainer for \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\"" Jan 29 16:21:43.284701 systemd[1]: Started cri-containerd-c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c.scope - libcontainer container c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c. Jan 29 16:21:43.311832 containerd[1510]: time="2025-01-29T16:21:43.311787255Z" level=info msg="StartContainer for \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\" returns successfully" Jan 29 16:21:43.323417 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:21:43.323893 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:21:43.324097 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:21:43.331947 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:21:43.332329 systemd[1]: cri-containerd-c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c.scope: Deactivated successfully. Jan 29 16:21:43.347877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
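For reference, the cilium image pull recorded just above reports 166730503 bytes read over 10.838668993s; the arithmetic below (values copied from the log) puts the effective throughput at roughly 14.7 MiB/s:

```go
// Computes the effective pull throughput for the cilium image using the
// byte count and duration reported by containerd in the log above.
package main

import "fmt"

func main() {
	const bytesRead = 166730503.0    // "bytes read" from the stop-pulling event
	const pullSeconds = 10.838668993 // duration from the Pulled event
	mibPerSec := bytesRead / pullSeconds / (1 << 20)
	fmt.Printf("effective pull throughput: %.1f MiB/s\n", mibPerSec)
}
```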
Jan 29 16:21:43.352721 containerd[1510]: time="2025-01-29T16:21:43.352665427Z" level=info msg="shim disconnected" id=c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c namespace=k8s.io Jan 29 16:21:43.353034 containerd[1510]: time="2025-01-29T16:21:43.352721242Z" level=warning msg="cleaning up after shim disconnected" id=c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c namespace=k8s.io Jan 29 16:21:43.353034 containerd[1510]: time="2025-01-29T16:21:43.352743233Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:21:43.362176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2-rootfs.mount: Deactivated successfully. Jan 29 16:21:43.768417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851628666.mount: Deactivated successfully. Jan 29 16:21:44.046167 containerd[1510]: time="2025-01-29T16:21:44.046050192Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:44.047030 containerd[1510]: time="2025-01-29T16:21:44.046988650Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 16:21:44.048108 containerd[1510]: time="2025-01-29T16:21:44.048072533Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:21:44.049546 containerd[1510]: time="2025-01-29T16:21:44.049512606Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.709342161s" Jan 29 16:21:44.049604 containerd[1510]: time="2025-01-29T16:21:44.049542923Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 16:21:44.051790 containerd[1510]: time="2025-01-29T16:21:44.051610358Z" level=info msg="CreateContainer within sandbox \"31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:21:44.065456 containerd[1510]: time="2025-01-29T16:21:44.065410749Z" level=info msg="CreateContainer within sandbox \"31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\"" Jan 29 16:21:44.065805 containerd[1510]: time="2025-01-29T16:21:44.065764284Z" level=info msg="StartContainer for \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\"" Jan 29 16:21:44.095701 systemd[1]: Started cri-containerd-55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3.scope - libcontainer container 55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3. 
Jan 29 16:21:44.122866 containerd[1510]: time="2025-01-29T16:21:44.122812858Z" level=info msg="StartContainer for \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\" returns successfully" Jan 29 16:21:44.246422 kubelet[2604]: E0129 16:21:44.246383 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:44.249333 containerd[1510]: time="2025-01-29T16:21:44.249298543Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:21:44.252670 kubelet[2604]: E0129 16:21:44.251778 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:44.344824 kubelet[2604]: I0129 16:21:44.344653 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wgjz5" podStartSLOduration=0.837533411 podStartE2EDuration="13.344637667s" podCreationTimestamp="2025-01-29 16:21:31 +0000 UTC" firstStartedPulling="2025-01-29 16:21:31.543222463 +0000 UTC m=+7.501265382" lastFinishedPulling="2025-01-29 16:21:44.050326709 +0000 UTC m=+20.008369638" observedRunningTime="2025-01-29 16:21:44.344019312 +0000 UTC m=+20.302062231" watchObservedRunningTime="2025-01-29 16:21:44.344637667 +0000 UTC m=+20.302680747" Jan 29 16:21:44.376523 containerd[1510]: time="2025-01-29T16:21:44.376454722Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\"" Jan 29 16:21:44.377246 containerd[1510]: time="2025-01-29T16:21:44.377199906Z" level=info msg="StartContainer for \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\"" Jan 29 16:21:44.412748 systemd[1]: Started cri-containerd-d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d.scope - libcontainer container d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d. Jan 29 16:21:44.447413 systemd[1]: cri-containerd-d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d.scope: Deactivated successfully. 
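The pod_startup_latency_tracker entry above for cilium-operator reports a podStartSLOduration of about 0.84s against a podStartE2EDuration of about 13.34s; the numbers are consistent with the SLO figure excluding the image pull window. A sketch reproducing that relation from the logged timestamps (the result matches the log up to rounding):

```go
// Derives the SLO-style startup duration for cilium-operator by subtracting
// the image pull window from the end-to-end duration, using the timestamps
// logged above. Illustrative arithmetic, not the tracker's implementation.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func main() {
	firstPull, _ := time.Parse(layout, "2025-01-29 16:21:31.543222463 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2025-01-29 16:21:44.050326709 +0000 UTC")

	e2e := time.Duration(13.344637667 * float64(time.Second)) // podStartE2EDuration
	pulling := lastPull.Sub(firstPull)                        // image pull window
	slo := e2e - pulling

	fmt.Printf("image pull window: %v, SLO duration: %v\n", pulling, slo)
	// ~12.507s pulling, ~0.8375s SLO duration, matching the logged values
}
```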
Jan 29 16:21:44.513465 containerd[1510]: time="2025-01-29T16:21:44.513406898Z" level=info msg="StartContainer for \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\" returns successfully" Jan 29 16:21:44.647996 containerd[1510]: time="2025-01-29T16:21:44.647791405Z" level=info msg="shim disconnected" id=d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d namespace=k8s.io Jan 29 16:21:44.647996 containerd[1510]: time="2025-01-29T16:21:44.647875403Z" level=warning msg="cleaning up after shim disconnected" id=d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d namespace=k8s.io Jan 29 16:21:44.647996 containerd[1510]: time="2025-01-29T16:21:44.647886874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:21:45.249737 kubelet[2604]: E0129 16:21:45.249703 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:45.249737 kubelet[2604]: E0129 16:21:45.249759 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:45.251174 containerd[1510]: time="2025-01-29T16:21:45.251098115Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:21:45.268847 containerd[1510]: time="2025-01-29T16:21:45.268801420Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\"" Jan 29 16:21:45.269369 containerd[1510]: time="2025-01-29T16:21:45.269289699Z" level=info msg="StartContainer for \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\"" Jan 29 16:21:45.322715 systemd[1]: Started cri-containerd-6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b.scope - libcontainer container 6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b. Jan 29 16:21:45.347346 systemd[1]: cri-containerd-6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b.scope: Deactivated successfully. Jan 29 16:21:45.363215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d-rootfs.mount: Deactivated successfully. Jan 29 16:21:45.373283 containerd[1510]: time="2025-01-29T16:21:45.373237704Z" level=info msg="StartContainer for \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\" returns successfully" Jan 29 16:21:45.391637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b-rootfs.mount: Deactivated successfully. 
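The run-containerd-…-rootfs.mount units that systemd deactivates above are the containerd task rootfs mount points with the path escaped into a systemd unit name. A simplified sketch of that mapping, ignoring the full systemd escaping rules (which only matter for paths containing dashes or other special characters, unlike these):

```go
// mountUnitName derives a transient .mount unit name from a mount point in
// the simplified form sufficient for the containerd rootfs paths in this
// log: strip the leading "/" and replace the remaining "/" with "-".
package main

import (
	"fmt"
	"strings"
)

func mountUnitName(mountPoint string) string {
	trimmed := strings.TrimPrefix(mountPoint, "/")
	return strings.ReplaceAll(trimmed, "/", "-") + ".mount"
}

func main() {
	// Rootfs path assumed for the clean-cilium-state container ID seen above.
	p := "/run/containerd/io.containerd.runtime.v2.task/k8s.io/6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b/rootfs"
	fmt.Println(mountUnitName(p))
	// prints the run-containerd-io.containerd.runtime.v2.task-k8s.io-…-rootfs.mount name from the log
}
```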
Jan 29 16:21:45.394778 containerd[1510]: time="2025-01-29T16:21:45.394705469Z" level=info msg="shim disconnected" id=6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b namespace=k8s.io Jan 29 16:21:45.394778 containerd[1510]: time="2025-01-29T16:21:45.394765531Z" level=warning msg="cleaning up after shim disconnected" id=6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b namespace=k8s.io Jan 29 16:21:45.394778 containerd[1510]: time="2025-01-29T16:21:45.394773807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:21:46.252893 kubelet[2604]: E0129 16:21:46.252861 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:46.257338 containerd[1510]: time="2025-01-29T16:21:46.256761203Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:21:46.278365 containerd[1510]: time="2025-01-29T16:21:46.278328634Z" level=info msg="CreateContainer within sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\"" Jan 29 16:21:46.278877 containerd[1510]: time="2025-01-29T16:21:46.278845427Z" level=info msg="StartContainer for \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\"" Jan 29 16:21:46.305700 systemd[1]: Started cri-containerd-5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c.scope - libcontainer container 5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c. Jan 29 16:21:46.335201 containerd[1510]: time="2025-01-29T16:21:46.335134998Z" level=info msg="StartContainer for \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\" returns successfully" Jan 29 16:21:46.575541 kubelet[2604]: I0129 16:21:46.575502 2604 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 16:21:46.633358 systemd[1]: Created slice kubepods-burstable-pod9b055851_31d1_4d5f_a4d0_43a11fa44728.slice - libcontainer container kubepods-burstable-pod9b055851_31d1_4d5f_a4d0_43a11fa44728.slice. Jan 29 16:21:46.640686 systemd[1]: Created slice kubepods-burstable-pod05487251_da2b_4f5a_aab0_b40c54652635.slice - libcontainer container kubepods-burstable-pod05487251_da2b_4f5a_aab0_b40c54652635.slice. 
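Taken together, the CreateContainer messages above show the cilium pod working through its init containers in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-running cilium-agent starts. A small sketch that recovers that order from the ContainerMetadata names in such lines (sample messages shortened from the log above):

```go
// Extracts the container names, in order, from containerd CreateContainer
// log messages like the ones above for the cilium pod's init containers.
package main

import (
	"fmt"
	"regexp"
)

var nameRe = regexp.MustCompile(`ContainerMetadata\{Name:([^,]+),`)

func main() {
	lines := []string{
		`CreateContainer within sandbox "b2b408ee…" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}`,
		`CreateContainer within sandbox "b2b408ee…" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}`,
		`CreateContainer within sandbox "b2b408ee…" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}`,
		`CreateContainer within sandbox "b2b408ee…" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}`,
		`CreateContainer within sandbox "b2b408ee…" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}`,
	}
	for i, l := range lines {
		if m := nameRe.FindStringSubmatch(l); m != nil {
			fmt.Printf("%d. %s\n", i+1, m[1])
		}
	}
}
```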
Jan 29 16:21:46.729630 kubelet[2604]: I0129 16:21:46.729397 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmwzb\" (UniqueName: \"kubernetes.io/projected/9b055851-31d1-4d5f-a4d0-43a11fa44728-kube-api-access-hmwzb\") pod \"coredns-6f6b679f8f-sc2z5\" (UID: \"9b055851-31d1-4d5f-a4d0-43a11fa44728\") " pod="kube-system/coredns-6f6b679f8f-sc2z5" Jan 29 16:21:46.729630 kubelet[2604]: I0129 16:21:46.729463 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05487251-da2b-4f5a-aab0-b40c54652635-config-volume\") pod \"coredns-6f6b679f8f-4k26v\" (UID: \"05487251-da2b-4f5a-aab0-b40c54652635\") " pod="kube-system/coredns-6f6b679f8f-4k26v" Jan 29 16:21:46.729630 kubelet[2604]: I0129 16:21:46.729492 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b055851-31d1-4d5f-a4d0-43a11fa44728-config-volume\") pod \"coredns-6f6b679f8f-sc2z5\" (UID: \"9b055851-31d1-4d5f-a4d0-43a11fa44728\") " pod="kube-system/coredns-6f6b679f8f-sc2z5" Jan 29 16:21:46.729630 kubelet[2604]: I0129 16:21:46.729530 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwtp\" (UniqueName: \"kubernetes.io/projected/05487251-da2b-4f5a-aab0-b40c54652635-kube-api-access-nnwtp\") pod \"coredns-6f6b679f8f-4k26v\" (UID: \"05487251-da2b-4f5a-aab0-b40c54652635\") " pod="kube-system/coredns-6f6b679f8f-4k26v" Jan 29 16:21:46.937795 kubelet[2604]: E0129 16:21:46.937403 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:46.943906 kubelet[2604]: E0129 16:21:46.943870 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:46.947819 containerd[1510]: time="2025-01-29T16:21:46.947784662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sc2z5,Uid:9b055851-31d1-4d5f-a4d0-43a11fa44728,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:46.949075 containerd[1510]: time="2025-01-29T16:21:46.949033854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4k26v,Uid:05487251-da2b-4f5a-aab0-b40c54652635,Namespace:kube-system,Attempt:0,}" Jan 29 16:21:47.288170 kubelet[2604]: E0129 16:21:47.288058 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:47.305739 kubelet[2604]: I0129 16:21:47.304943 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9445j" podStartSLOduration=5.460337704 podStartE2EDuration="16.304927755s" podCreationTimestamp="2025-01-29 16:21:31 +0000 UTC" firstStartedPulling="2025-01-29 16:21:31.495411426 +0000 UTC m=+7.453454345" lastFinishedPulling="2025-01-29 16:21:42.340001477 +0000 UTC m=+18.298044396" observedRunningTime="2025-01-29 16:21:47.304429919 +0000 UTC m=+23.262472828" watchObservedRunningTime="2025-01-29 16:21:47.304927755 +0000 UTC m=+23.262970674" Jan 29 16:21:48.287948 kubelet[2604]: E0129 16:21:48.287917 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:48.532016 systemd-networkd[1428]: cilium_host: Link UP Jan 29 16:21:48.532241 systemd-networkd[1428]: cilium_net: Link UP Jan 29 16:21:48.532245 systemd-networkd[1428]: cilium_net: Gained carrier Jan 29 16:21:48.532513 systemd-networkd[1428]: cilium_host: Gained carrier Jan 29 16:21:48.642963 systemd-networkd[1428]: cilium_vxlan: Link UP Jan 29 16:21:48.642976 systemd-networkd[1428]: cilium_vxlan: Gained carrier Jan 29 16:21:48.881617 kernel: NET: Registered PF_ALG protocol family Jan 29 16:21:49.248750 systemd-networkd[1428]: cilium_host: Gained IPv6LL Jan 29 16:21:49.290157 kubelet[2604]: E0129 16:21:49.290106 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:49.376886 systemd-networkd[1428]: cilium_net: Gained IPv6LL Jan 29 16:21:49.546977 systemd-networkd[1428]: lxc_health: Link UP Jan 29 16:21:49.547361 systemd-networkd[1428]: lxc_health: Gained carrier Jan 29 16:21:50.014323 systemd-networkd[1428]: lxcb8c288c125cb: Link UP Jan 29 16:21:50.015615 kernel: eth0: renamed from tmpd863c Jan 29 16:21:50.023760 systemd-networkd[1428]: lxcb8c288c125cb: Gained carrier Jan 29 16:21:50.023925 systemd-networkd[1428]: lxc430eac9026fd: Link UP Jan 29 16:21:50.038475 kernel: eth0: renamed from tmpa60ea Jan 29 16:21:50.041893 systemd-networkd[1428]: lxc430eac9026fd: Gained carrier Jan 29 16:21:50.592732 systemd-networkd[1428]: cilium_vxlan: Gained IPv6LL Jan 29 16:21:50.725972 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:34598.service - OpenSSH per-connection server daemon (10.0.0.1:34598). Jan 29 16:21:50.773213 sshd[3831]: Accepted publickey for core from 10.0.0.1 port 34598 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:21:50.775258 sshd-session[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:21:50.780223 systemd-logind[1493]: New session 8 of user core. Jan 29 16:21:50.787693 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:21:50.920511 sshd[3833]: Connection closed by 10.0.0.1 port 34598 Jan 29 16:21:50.920993 sshd-session[3831]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:50.924139 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:34598.service: Deactivated successfully. Jan 29 16:21:50.926369 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:21:50.928201 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:21:50.929188 systemd-logind[1493]: Removed session 8. Jan 29 16:21:51.403397 kubelet[2604]: E0129 16:21:51.403348 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:51.424739 systemd-networkd[1428]: lxc_health: Gained IPv6LL Jan 29 16:21:51.680821 systemd-networkd[1428]: lxcb8c288c125cb: Gained IPv6LL Jan 29 16:21:51.937735 systemd-networkd[1428]: lxc430eac9026fd: Gained IPv6LL Jan 29 16:21:53.512031 containerd[1510]: time="2025-01-29T16:21:53.511927943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:53.512031 containerd[1510]: time="2025-01-29T16:21:53.511991843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:53.512031 containerd[1510]: time="2025-01-29T16:21:53.512001681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:53.512515 containerd[1510]: time="2025-01-29T16:21:53.512081011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:53.515446 containerd[1510]: time="2025-01-29T16:21:53.514527440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:21:53.515446 containerd[1510]: time="2025-01-29T16:21:53.515349445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:21:53.515446 containerd[1510]: time="2025-01-29T16:21:53.515368732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:53.517845 containerd[1510]: time="2025-01-29T16:21:53.515460445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:21:53.532643 systemd[1]: run-containerd-runc-k8s.io-a60ea1ada347796a138b348dfb9aa8ddca8fde362b1e8068f57e35a18b1ad13c-runc.Gy2579.mount: Deactivated successfully. Jan 29 16:21:53.546709 systemd[1]: Started cri-containerd-a60ea1ada347796a138b348dfb9aa8ddca8fde362b1e8068f57e35a18b1ad13c.scope - libcontainer container a60ea1ada347796a138b348dfb9aa8ddca8fde362b1e8068f57e35a18b1ad13c. Jan 29 16:21:53.548587 systemd[1]: Started cri-containerd-d863c0385fe752444807a0e3b6d095d8ec34c3ebe2139e39cce0df4259d5978e.scope - libcontainer container d863c0385fe752444807a0e3b6d095d8ec34c3ebe2139e39cce0df4259d5978e. 
Jan 29 16:21:53.557737 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:21:53.562932 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 16:21:53.581800 containerd[1510]: time="2025-01-29T16:21:53.581759631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sc2z5,Uid:9b055851-31d1-4d5f-a4d0-43a11fa44728,Namespace:kube-system,Attempt:0,} returns sandbox id \"a60ea1ada347796a138b348dfb9aa8ddca8fde362b1e8068f57e35a18b1ad13c\"" Jan 29 16:21:53.582963 kubelet[2604]: E0129 16:21:53.582931 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:53.592744 containerd[1510]: time="2025-01-29T16:21:53.592266515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4k26v,Uid:05487251-da2b-4f5a-aab0-b40c54652635,Namespace:kube-system,Attempt:0,} returns sandbox id \"d863c0385fe752444807a0e3b6d095d8ec34c3ebe2139e39cce0df4259d5978e\"" Jan 29 16:21:53.592856 kubelet[2604]: E0129 16:21:53.592841 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:53.594868 containerd[1510]: time="2025-01-29T16:21:53.594828071Z" level=info msg="CreateContainer within sandbox \"d863c0385fe752444807a0e3b6d095d8ec34c3ebe2139e39cce0df4259d5978e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:21:53.608287 containerd[1510]: time="2025-01-29T16:21:53.608244445Z" level=info msg="CreateContainer within sandbox \"a60ea1ada347796a138b348dfb9aa8ddca8fde362b1e8068f57e35a18b1ad13c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:21:53.917232 containerd[1510]: time="2025-01-29T16:21:53.917170194Z" level=info msg="CreateContainer within sandbox \"d863c0385fe752444807a0e3b6d095d8ec34c3ebe2139e39cce0df4259d5978e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c25ea6cdd33703b0634a5f22e2a63abbd6101000b67fb4a8dc4b77738ea46f5\"" Jan 29 16:21:53.917848 containerd[1510]: time="2025-01-29T16:21:53.917793246Z" level=info msg="StartContainer for \"9c25ea6cdd33703b0634a5f22e2a63abbd6101000b67fb4a8dc4b77738ea46f5\"" Jan 29 16:21:53.919356 containerd[1510]: time="2025-01-29T16:21:53.919319445Z" level=info msg="CreateContainer within sandbox \"a60ea1ada347796a138b348dfb9aa8ddca8fde362b1e8068f57e35a18b1ad13c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8afb26d6e73b371c2696c1b06b24801465d57186ff8a84795c68dbf6b11c37d7\"" Jan 29 16:21:53.919792 containerd[1510]: time="2025-01-29T16:21:53.919763229Z" level=info msg="StartContainer for \"8afb26d6e73b371c2696c1b06b24801465d57186ff8a84795c68dbf6b11c37d7\"" Jan 29 16:21:53.949815 systemd[1]: Started cri-containerd-8afb26d6e73b371c2696c1b06b24801465d57186ff8a84795c68dbf6b11c37d7.scope - libcontainer container 8afb26d6e73b371c2696c1b06b24801465d57186ff8a84795c68dbf6b11c37d7. Jan 29 16:21:53.951276 systemd[1]: Started cri-containerd-9c25ea6cdd33703b0634a5f22e2a63abbd6101000b67fb4a8dc4b77738ea46f5.scope - libcontainer container 9c25ea6cdd33703b0634a5f22e2a63abbd6101000b67fb4a8dc4b77738ea46f5. 
Jan 29 16:21:53.987917 containerd[1510]: time="2025-01-29T16:21:53.987855508Z" level=info msg="StartContainer for \"9c25ea6cdd33703b0634a5f22e2a63abbd6101000b67fb4a8dc4b77738ea46f5\" returns successfully" Jan 29 16:21:53.988069 containerd[1510]: time="2025-01-29T16:21:53.987934786Z" level=info msg="StartContainer for \"8afb26d6e73b371c2696c1b06b24801465d57186ff8a84795c68dbf6b11c37d7\" returns successfully" Jan 29 16:21:54.300318 kubelet[2604]: E0129 16:21:54.300184 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:54.301881 kubelet[2604]: E0129 16:21:54.301852 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:54.322353 kubelet[2604]: I0129 16:21:54.322259 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4k26v" podStartSLOduration=23.322236494 podStartE2EDuration="23.322236494s" podCreationTimestamp="2025-01-29 16:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:54.310761734 +0000 UTC m=+30.268804653" watchObservedRunningTime="2025-01-29 16:21:54.322236494 +0000 UTC m=+30.280279413" Jan 29 16:21:54.520399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681382905.mount: Deactivated successfully. Jan 29 16:21:55.304285 kubelet[2604]: E0129 16:21:55.304239 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:55.304715 kubelet[2604]: E0129 16:21:55.304374 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:55.444043 kubelet[2604]: I0129 16:21:55.443986 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:21:55.444653 kubelet[2604]: E0129 16:21:55.444459 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:55.460801 kubelet[2604]: I0129 16:21:55.460718 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sc2z5" podStartSLOduration=24.460699442 podStartE2EDuration="24.460699442s" podCreationTimestamp="2025-01-29 16:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:21:54.335839254 +0000 UTC m=+30.293882173" watchObservedRunningTime="2025-01-29 16:21:55.460699442 +0000 UTC m=+31.418742361" Jan 29 16:21:55.935925 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:57850.service - OpenSSH per-connection server daemon (10.0.0.1:57850). Jan 29 16:21:55.982095 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 57850 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:21:55.983988 sshd-session[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:21:55.988781 systemd-logind[1493]: New session 9 of user core. 
Jan 29 16:21:56.002745 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:21:56.124104 sshd[4025]: Connection closed by 10.0.0.1 port 57850 Jan 29 16:21:56.124499 sshd-session[4023]: pam_unix(sshd:session): session closed for user core Jan 29 16:21:56.129002 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:57850.service: Deactivated successfully. Jan 29 16:21:56.131280 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:21:56.132000 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:21:56.133183 systemd-logind[1493]: Removed session 9. Jan 29 16:21:56.305878 kubelet[2604]: E0129 16:21:56.305752 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:56.305878 kubelet[2604]: E0129 16:21:56.305808 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:21:56.306282 kubelet[2604]: E0129 16:21:56.305948 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:01.136230 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:54736.service - OpenSSH per-connection server daemon (10.0.0.1:54736). Jan 29 16:22:01.176076 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 54736 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:01.177490 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:01.181601 systemd-logind[1493]: New session 10 of user core. Jan 29 16:22:01.191704 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:22:01.302764 sshd[4041]: Connection closed by 10.0.0.1 port 54736 Jan 29 16:22:01.303104 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:01.306628 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:54736.service: Deactivated successfully. Jan 29 16:22:01.308484 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:22:01.309170 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:22:01.309981 systemd-logind[1493]: Removed session 10. Jan 29 16:22:06.318232 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:54748.service - OpenSSH per-connection server daemon (10.0.0.1:54748). Jan 29 16:22:06.357405 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 54748 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:06.358971 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:06.363252 systemd-logind[1493]: New session 11 of user core. Jan 29 16:22:06.369715 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:22:06.478696 sshd[4060]: Connection closed by 10.0.0.1 port 54748 Jan 29 16:22:06.479102 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:06.499769 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:54748.service: Deactivated successfully. Jan 29 16:22:06.501953 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:22:06.503543 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit. 
Jan 29 16:22:06.509901 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:54754.service - OpenSSH per-connection server daemon (10.0.0.1:54754). Jan 29 16:22:06.511238 systemd-logind[1493]: Removed session 11. Jan 29 16:22:06.546679 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 54754 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:06.548394 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:06.552561 systemd-logind[1493]: New session 12 of user core. Jan 29 16:22:06.562691 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 16:22:06.738807 sshd[4077]: Connection closed by 10.0.0.1 port 54754 Jan 29 16:22:06.739715 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:06.753074 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:54758.service - OpenSSH per-connection server daemon (10.0.0.1:54758). Jan 29 16:22:06.753899 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:54754.service: Deactivated successfully. Jan 29 16:22:06.756659 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:22:06.761760 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:22:06.762867 systemd-logind[1493]: Removed session 12. Jan 29 16:22:06.794772 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 54758 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:06.796381 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:06.800629 systemd-logind[1493]: New session 13 of user core. Jan 29 16:22:06.810704 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:22:06.918202 sshd[4091]: Connection closed by 10.0.0.1 port 54758 Jan 29 16:22:06.918584 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:06.922776 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:54758.service: Deactivated successfully. Jan 29 16:22:06.924972 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:22:06.925762 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:22:06.926669 systemd-logind[1493]: Removed session 13. Jan 29 16:22:11.931206 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:51118.service - OpenSSH per-connection server daemon (10.0.0.1:51118). Jan 29 16:22:11.972900 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 51118 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:11.974948 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:11.980190 systemd-logind[1493]: New session 14 of user core. Jan 29 16:22:11.993833 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:22:12.113215 sshd[4107]: Connection closed by 10.0.0.1 port 51118 Jan 29 16:22:12.113612 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:12.118862 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:51118.service: Deactivated successfully. Jan 29 16:22:12.121259 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:22:12.122120 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:22:12.123170 systemd-logind[1493]: Removed session 14. Jan 29 16:22:17.126692 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:51120.service - OpenSSH per-connection server daemon (10.0.0.1:51120). 
Jan 29 16:22:17.166428 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 51120 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:17.168252 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:17.172672 systemd-logind[1493]: New session 15 of user core. Jan 29 16:22:17.185709 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:22:17.296274 sshd[4122]: Connection closed by 10.0.0.1 port 51120 Jan 29 16:22:17.296759 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:17.311753 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:51120.service: Deactivated successfully. Jan 29 16:22:17.313524 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:22:17.315362 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:22:17.330919 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:51124.service - OpenSSH per-connection server daemon (10.0.0.1:51124). Jan 29 16:22:17.332069 systemd-logind[1493]: Removed session 15. Jan 29 16:22:17.371876 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 51124 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:17.373530 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:17.378958 systemd-logind[1493]: New session 16 of user core. Jan 29 16:22:17.389771 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:22:17.631862 sshd[4138]: Connection closed by 10.0.0.1 port 51124 Jan 29 16:22:17.632257 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:17.644631 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:51124.service: Deactivated successfully. Jan 29 16:22:17.646838 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:22:17.648484 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:22:17.655997 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:51140.service - OpenSSH per-connection server daemon (10.0.0.1:51140). Jan 29 16:22:17.657081 systemd-logind[1493]: Removed session 16. Jan 29 16:22:17.695252 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 51140 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:17.696613 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:17.700853 systemd-logind[1493]: New session 17 of user core. Jan 29 16:22:17.707690 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:22:19.154782 sshd[4151]: Connection closed by 10.0.0.1 port 51140 Jan 29 16:22:19.157290 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:19.168484 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:51140.service: Deactivated successfully. Jan 29 16:22:19.170826 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:22:19.171769 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:22:19.185007 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:51156.service - OpenSSH per-connection server daemon (10.0.0.1:51156). Jan 29 16:22:19.185972 systemd-logind[1493]: Removed session 17. 
Jan 29 16:22:19.221760 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 51156 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:19.223260 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:19.227810 systemd-logind[1493]: New session 18 of user core. Jan 29 16:22:19.240842 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:22:19.493759 sshd[4174]: Connection closed by 10.0.0.1 port 51156 Jan 29 16:22:19.494076 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:19.503695 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:51156.service: Deactivated successfully. Jan 29 16:22:19.505783 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:22:19.507500 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:22:19.516191 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:51172.service - OpenSSH per-connection server daemon (10.0.0.1:51172). Jan 29 16:22:19.517379 systemd-logind[1493]: Removed session 18. Jan 29 16:22:19.551524 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 51172 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:19.553101 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:19.557692 systemd-logind[1493]: New session 19 of user core. Jan 29 16:22:19.565786 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:22:19.702283 sshd[4188]: Connection closed by 10.0.0.1 port 51172 Jan 29 16:22:19.702671 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:19.707072 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:51172.service: Deactivated successfully. Jan 29 16:22:19.709343 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:22:19.710059 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:22:19.710907 systemd-logind[1493]: Removed session 19. Jan 29 16:22:24.714902 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:50156.service - OpenSSH per-connection server daemon (10.0.0.1:50156). Jan 29 16:22:24.755110 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 50156 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:24.756739 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:24.761400 systemd-logind[1493]: New session 20 of user core. Jan 29 16:22:24.769723 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:22:24.878687 sshd[4205]: Connection closed by 10.0.0.1 port 50156 Jan 29 16:22:24.879035 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:24.883234 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:50156.service: Deactivated successfully. Jan 29 16:22:24.885452 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:22:24.886215 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:22:24.887087 systemd-logind[1493]: Removed session 20. Jan 29 16:22:29.891926 systemd[1]: Started sshd@20-10.0.0.106:22-10.0.0.1:50158.service - OpenSSH per-connection server daemon (10.0.0.1:50158). 
Jan 29 16:22:29.930175 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 50158 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:29.931535 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:29.935417 systemd-logind[1493]: New session 21 of user core. Jan 29 16:22:29.941701 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 16:22:30.042208 sshd[4223]: Connection closed by 10.0.0.1 port 50158 Jan 29 16:22:30.042576 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:30.045261 systemd[1]: sshd@20-10.0.0.106:22-10.0.0.1:50158.service: Deactivated successfully. Jan 29 16:22:30.048052 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:22:30.049297 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:22:30.050107 systemd-logind[1493]: Removed session 21. Jan 29 16:22:35.056198 systemd[1]: Started sshd@21-10.0.0.106:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122). Jan 29 16:22:35.097777 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:35.099089 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:35.103596 systemd-logind[1493]: New session 22 of user core. Jan 29 16:22:35.113696 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:22:35.217384 sshd[4240]: Connection closed by 10.0.0.1 port 45122 Jan 29 16:22:35.217754 sshd-session[4238]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:35.222067 systemd[1]: sshd@21-10.0.0.106:22-10.0.0.1:45122.service: Deactivated successfully. Jan 29 16:22:35.224043 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:22:35.224717 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:22:35.225505 systemd-logind[1493]: Removed session 22. Jan 29 16:22:40.230556 systemd[1]: Started sshd@22-10.0.0.106:22-10.0.0.1:45126.service - OpenSSH per-connection server daemon (10.0.0.1:45126). Jan 29 16:22:40.268284 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 45126 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:40.269498 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:40.273639 systemd-logind[1493]: New session 23 of user core. Jan 29 16:22:40.284686 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:22:40.389238 sshd[4256]: Connection closed by 10.0.0.1 port 45126 Jan 29 16:22:40.389635 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:40.400400 systemd[1]: sshd@22-10.0.0.106:22-10.0.0.1:45126.service: Deactivated successfully. Jan 29 16:22:40.402474 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:22:40.404031 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:22:40.411832 systemd[1]: Started sshd@23-10.0.0.106:22-10.0.0.1:45140.service - OpenSSH per-connection server daemon (10.0.0.1:45140). Jan 29 16:22:40.412792 systemd-logind[1493]: Removed session 23. 
Jan 29 16:22:40.447368 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 45140 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:40.448847 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:40.452819 systemd-logind[1493]: New session 24 of user core. Jan 29 16:22:40.465703 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 16:22:41.808335 containerd[1510]: time="2025-01-29T16:22:41.808293866Z" level=info msg="StopContainer for \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\" with timeout 30 (s)" Jan 29 16:22:41.809225 containerd[1510]: time="2025-01-29T16:22:41.809194556Z" level=info msg="Stop container \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\" with signal terminated" Jan 29 16:22:41.824443 systemd[1]: cri-containerd-55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3.scope: Deactivated successfully. Jan 29 16:22:41.836318 containerd[1510]: time="2025-01-29T16:22:41.836284672Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:22:41.845792 containerd[1510]: time="2025-01-29T16:22:41.845725124Z" level=info msg="StopContainer for \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\" with timeout 2 (s)" Jan 29 16:22:41.846190 containerd[1510]: time="2025-01-29T16:22:41.846163902Z" level=info msg="Stop container \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\" with signal terminated" Jan 29 16:22:41.848722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3-rootfs.mount: Deactivated successfully. Jan 29 16:22:41.853216 systemd-networkd[1428]: lxc_health: Link DOWN Jan 29 16:22:41.853229 systemd-networkd[1428]: lxc_health: Lost carrier Jan 29 16:22:41.862182 containerd[1510]: time="2025-01-29T16:22:41.862124501Z" level=info msg="shim disconnected" id=55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3 namespace=k8s.io Jan 29 16:22:41.862182 containerd[1510]: time="2025-01-29T16:22:41.862175919Z" level=warning msg="cleaning up after shim disconnected" id=55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3 namespace=k8s.io Jan 29 16:22:41.862182 containerd[1510]: time="2025-01-29T16:22:41.862183574Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:41.875256 systemd[1]: cri-containerd-5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c.scope: Deactivated successfully. Jan 29 16:22:41.876051 systemd[1]: cri-containerd-5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c.scope: Consumed 6.954s CPU time, 124.8M memory peak, 236K read from disk, 13.3M written to disk. 
Jan 29 16:22:41.880516 containerd[1510]: time="2025-01-29T16:22:41.880475093Z" level=info msg="StopContainer for \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\" returns successfully" Jan 29 16:22:41.886069 containerd[1510]: time="2025-01-29T16:22:41.885951256Z" level=info msg="StopPodSandbox for \"31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6\"" Jan 29 16:22:41.894670 containerd[1510]: time="2025-01-29T16:22:41.885988598Z" level=info msg="Container to stop \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:22:41.897352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c-rootfs.mount: Deactivated successfully. Jan 29 16:22:41.897478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6-shm.mount: Deactivated successfully. Jan 29 16:22:41.902291 systemd[1]: cri-containerd-31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6.scope: Deactivated successfully. Jan 29 16:22:41.915783 containerd[1510]: time="2025-01-29T16:22:41.915650534Z" level=info msg="shim disconnected" id=5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c namespace=k8s.io Jan 29 16:22:41.916226 containerd[1510]: time="2025-01-29T16:22:41.915801181Z" level=warning msg="cleaning up after shim disconnected" id=5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c namespace=k8s.io Jan 29 16:22:41.916226 containerd[1510]: time="2025-01-29T16:22:41.915816120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:41.923282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6-rootfs.mount: Deactivated successfully. 
Jan 29 16:22:41.932461 containerd[1510]: time="2025-01-29T16:22:41.932228922Z" level=info msg="shim disconnected" id=31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6 namespace=k8s.io Jan 29 16:22:41.932461 containerd[1510]: time="2025-01-29T16:22:41.932456607Z" level=warning msg="cleaning up after shim disconnected" id=31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6 namespace=k8s.io Jan 29 16:22:41.932461 containerd[1510]: time="2025-01-29T16:22:41.932469041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:41.937489 containerd[1510]: time="2025-01-29T16:22:41.937438518Z" level=info msg="StopContainer for \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\" returns successfully" Jan 29 16:22:41.938110 containerd[1510]: time="2025-01-29T16:22:41.938062829Z" level=info msg="StopPodSandbox for \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\"" Jan 29 16:22:41.938277 containerd[1510]: time="2025-01-29T16:22:41.938106373Z" level=info msg="Container to stop \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:22:41.938277 containerd[1510]: time="2025-01-29T16:22:41.938154665Z" level=info msg="Container to stop \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:22:41.938277 containerd[1510]: time="2025-01-29T16:22:41.938169022Z" level=info msg="Container to stop \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:22:41.938277 containerd[1510]: time="2025-01-29T16:22:41.938179993Z" level=info msg="Container to stop \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:22:41.938277 containerd[1510]: time="2025-01-29T16:22:41.938191375Z" level=info msg="Container to stop \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:22:41.945445 systemd[1]: cri-containerd-b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f.scope: Deactivated successfully. 
Jan 29 16:22:41.951027 containerd[1510]: time="2025-01-29T16:22:41.950971703Z" level=info msg="TearDown network for sandbox \"31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6\" successfully" Jan 29 16:22:41.951027 containerd[1510]: time="2025-01-29T16:22:41.951021077Z" level=info msg="StopPodSandbox for \"31ecba73adbd57426895cfb1fa5a52dfd6015a3e9816fb9d5e7632539927eaf6\" returns successfully" Jan 29 16:22:42.043802 kubelet[2604]: I0129 16:22:42.043737 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9457d0b1-f09a-4e10-8110-b61d3716790e-cilium-config-path\") pod \"9457d0b1-f09a-4e10-8110-b61d3716790e\" (UID: \"9457d0b1-f09a-4e10-8110-b61d3716790e\") " Jan 29 16:22:42.043802 kubelet[2604]: I0129 16:22:42.043795 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zstn9\" (UniqueName: \"kubernetes.io/projected/9457d0b1-f09a-4e10-8110-b61d3716790e-kube-api-access-zstn9\") pod \"9457d0b1-f09a-4e10-8110-b61d3716790e\" (UID: \"9457d0b1-f09a-4e10-8110-b61d3716790e\") " Jan 29 16:22:42.047292 kubelet[2604]: I0129 16:22:42.047239 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9457d0b1-f09a-4e10-8110-b61d3716790e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9457d0b1-f09a-4e10-8110-b61d3716790e" (UID: "9457d0b1-f09a-4e10-8110-b61d3716790e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:42.078130 kubelet[2604]: I0129 16:22:42.078088 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9457d0b1-f09a-4e10-8110-b61d3716790e-kube-api-access-zstn9" (OuterVolumeSpecName: "kube-api-access-zstn9") pod "9457d0b1-f09a-4e10-8110-b61d3716790e" (UID: "9457d0b1-f09a-4e10-8110-b61d3716790e"). InnerVolumeSpecName "kube-api-access-zstn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:42.109878 containerd[1510]: time="2025-01-29T16:22:42.109704354Z" level=info msg="shim disconnected" id=b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f namespace=k8s.io Jan 29 16:22:42.109878 containerd[1510]: time="2025-01-29T16:22:42.109769919Z" level=warning msg="cleaning up after shim disconnected" id=b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f namespace=k8s.io Jan 29 16:22:42.109878 containerd[1510]: time="2025-01-29T16:22:42.109779948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:42.124905 containerd[1510]: time="2025-01-29T16:22:42.124844484Z" level=info msg="TearDown network for sandbox \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" successfully" Jan 29 16:22:42.124905 containerd[1510]: time="2025-01-29T16:22:42.124885472Z" level=info msg="StopPodSandbox for \"b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f\" returns successfully" Jan 29 16:22:42.144932 kubelet[2604]: I0129 16:22:42.144879 2604 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9457d0b1-f09a-4e10-8110-b61d3716790e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.144932 kubelet[2604]: I0129 16:22:42.144918 2604 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zstn9\" (UniqueName: \"kubernetes.io/projected/9457d0b1-f09a-4e10-8110-b61d3716790e-kube-api-access-zstn9\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.198153 systemd[1]: Removed slice kubepods-besteffort-pod9457d0b1_f09a_4e10_8110_b61d3716790e.slice - libcontainer container kubepods-besteffort-pod9457d0b1_f09a_4e10_8110_b61d3716790e.slice. 
Jan 29 16:22:42.247605 kubelet[2604]: I0129 16:22:42.246217 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-etc-cni-netd\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247605 kubelet[2604]: I0129 16:22:42.246269 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-xtables-lock\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247605 kubelet[2604]: I0129 16:22:42.246295 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cilium-run\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247605 kubelet[2604]: I0129 16:22:42.246314 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-host-proc-sys-kernel\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247605 kubelet[2604]: I0129 16:22:42.246342 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cilium-cgroup\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247605 kubelet[2604]: I0129 16:22:42.246369 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f173e8f1-728d-4219-8446-e38d80e68400-clustermesh-secrets\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247877 kubelet[2604]: I0129 16:22:42.246388 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cni-path\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247877 kubelet[2604]: I0129 16:22:42.246404 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-bpf-maps\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247877 kubelet[2604]: I0129 16:22:42.246426 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f173e8f1-728d-4219-8446-e38d80e68400-hubble-tls\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247877 kubelet[2604]: I0129 16:22:42.246447 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-hostproc\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247877 kubelet[2604]: I0129 16:22:42.246462 2604 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd9jl\" (UniqueName: \"kubernetes.io/projected/f173e8f1-728d-4219-8446-e38d80e68400-kube-api-access-wd9jl\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.247877 kubelet[2604]: I0129 16:22:42.246481 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-host-proc-sys-net\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.248013 kubelet[2604]: I0129 16:22:42.246500 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-lib-modules\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.248013 kubelet[2604]: I0129 16:22:42.246518 2604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f173e8f1-728d-4219-8446-e38d80e68400-cilium-config-path\") pod \"f173e8f1-728d-4219-8446-e38d80e68400\" (UID: \"f173e8f1-728d-4219-8446-e38d80e68400\") " Jan 29 16:22:42.248013 kubelet[2604]: I0129 16:22:42.246731 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cni-path" (OuterVolumeSpecName: "cni-path") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.248013 kubelet[2604]: I0129 16:22:42.246826 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.248013 kubelet[2604]: I0129 16:22:42.246803 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-hostproc" (OuterVolumeSpecName: "hostproc") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.248137 kubelet[2604]: I0129 16:22:42.246850 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.248137 kubelet[2604]: I0129 16:22:42.246875 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.248137 kubelet[2604]: I0129 16:22:42.246901 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.248137 kubelet[2604]: I0129 16:22:42.246920 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.248137 kubelet[2604]: I0129 16:22:42.246947 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.250990 kubelet[2604]: I0129 16:22:42.250954 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.251048 kubelet[2604]: I0129 16:22:42.251029 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:22:42.253537 kubelet[2604]: I0129 16:22:42.253512 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f173e8f1-728d-4219-8446-e38d80e68400-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:42.256152 kubelet[2604]: I0129 16:22:42.256101 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f173e8f1-728d-4219-8446-e38d80e68400-kube-api-access-wd9jl" (OuterVolumeSpecName: "kube-api-access-wd9jl") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "kube-api-access-wd9jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:42.256266 kubelet[2604]: I0129 16:22:42.256245 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f173e8f1-728d-4219-8446-e38d80e68400-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:22:42.258004 kubelet[2604]: I0129 16:22:42.257537 2604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f173e8f1-728d-4219-8446-e38d80e68400-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f173e8f1-728d-4219-8446-e38d80e68400" (UID: "f173e8f1-728d-4219-8446-e38d80e68400"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:22:42.347405 kubelet[2604]: I0129 16:22:42.347258 2604 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347405 kubelet[2604]: I0129 16:22:42.347301 2604 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347405 kubelet[2604]: I0129 16:22:42.347316 2604 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347405 kubelet[2604]: I0129 16:22:42.347336 2604 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f173e8f1-728d-4219-8446-e38d80e68400-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347405 kubelet[2604]: I0129 16:22:42.347346 2604 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347405 kubelet[2604]: I0129 16:22:42.347358 2604 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347405 kubelet[2604]: I0129 16:22:42.347368 2604 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f173e8f1-728d-4219-8446-e38d80e68400-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347405 kubelet[2604]: I0129 16:22:42.347379 2604 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347716 kubelet[2604]: I0129 16:22:42.347389 2604 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347716 kubelet[2604]: I0129 16:22:42.347399 2604 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wd9jl\" (UniqueName: \"kubernetes.io/projected/f173e8f1-728d-4219-8446-e38d80e68400-kube-api-access-wd9jl\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347716 kubelet[2604]: I0129 16:22:42.347410 2604 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347716 kubelet[2604]: I0129 16:22:42.347419 2604 reconciler_common.go:288] "Volume 
detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f173e8f1-728d-4219-8446-e38d80e68400-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347716 kubelet[2604]: I0129 16:22:42.347430 2604 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.347716 kubelet[2604]: I0129 16:22:42.347440 2604 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f173e8f1-728d-4219-8446-e38d80e68400-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 16:22:42.393289 kubelet[2604]: I0129 16:22:42.393251 2604 scope.go:117] "RemoveContainer" containerID="5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c" Jan 29 16:22:42.395552 containerd[1510]: time="2025-01-29T16:22:42.395518802Z" level=info msg="RemoveContainer for \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\"" Jan 29 16:22:42.401445 systemd[1]: Removed slice kubepods-burstable-podf173e8f1_728d_4219_8446_e38d80e68400.slice - libcontainer container kubepods-burstable-podf173e8f1_728d_4219_8446_e38d80e68400.slice. Jan 29 16:22:42.401793 systemd[1]: kubepods-burstable-podf173e8f1_728d_4219_8446_e38d80e68400.slice: Consumed 7.057s CPU time, 125.2M memory peak, 248K read from disk, 16.6M written to disk. Jan 29 16:22:42.423180 containerd[1510]: time="2025-01-29T16:22:42.423134287Z" level=info msg="RemoveContainer for \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\" returns successfully" Jan 29 16:22:42.423506 kubelet[2604]: I0129 16:22:42.423477 2604 scope.go:117] "RemoveContainer" containerID="6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b" Jan 29 16:22:42.424801 containerd[1510]: time="2025-01-29T16:22:42.424777943Z" level=info msg="RemoveContainer for \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\"" Jan 29 16:22:42.429066 containerd[1510]: time="2025-01-29T16:22:42.429030157Z" level=info msg="RemoveContainer for \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\" returns successfully" Jan 29 16:22:42.429200 kubelet[2604]: I0129 16:22:42.429177 2604 scope.go:117] "RemoveContainer" containerID="d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d" Jan 29 16:22:42.430035 containerd[1510]: time="2025-01-29T16:22:42.430008123Z" level=info msg="RemoveContainer for \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\"" Jan 29 16:22:42.433359 containerd[1510]: time="2025-01-29T16:22:42.433335162Z" level=info msg="RemoveContainer for \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\" returns successfully" Jan 29 16:22:42.433472 kubelet[2604]: I0129 16:22:42.433450 2604 scope.go:117] "RemoveContainer" containerID="c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c" Jan 29 16:22:42.438704 containerd[1510]: time="2025-01-29T16:22:42.438644402Z" level=info msg="RemoveContainer for \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\"" Jan 29 16:22:42.441981 containerd[1510]: time="2025-01-29T16:22:42.441950040Z" level=info msg="RemoveContainer for \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\" returns successfully" Jan 29 16:22:42.442114 kubelet[2604]: I0129 16:22:42.442084 2604 scope.go:117] "RemoveContainer" 
containerID="1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2" Jan 29 16:22:42.442916 containerd[1510]: time="2025-01-29T16:22:42.442892468Z" level=info msg="RemoveContainer for \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\"" Jan 29 16:22:42.446103 containerd[1510]: time="2025-01-29T16:22:42.446076404Z" level=info msg="RemoveContainer for \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\" returns successfully" Jan 29 16:22:42.446237 kubelet[2604]: I0129 16:22:42.446218 2604 scope.go:117] "RemoveContainer" containerID="5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c" Jan 29 16:22:42.446404 containerd[1510]: time="2025-01-29T16:22:42.446353763Z" level=error msg="ContainerStatus for \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\": not found" Jan 29 16:22:42.452304 kubelet[2604]: E0129 16:22:42.452247 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\": not found" containerID="5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c" Jan 29 16:22:42.452399 kubelet[2604]: I0129 16:22:42.452305 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c"} err="failed to get container status \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ff7c8243ca2896cb925471d3430b27a57926b3b7ff96e16e5446272e4282d0c\": not found" Jan 29 16:22:42.452399 kubelet[2604]: I0129 16:22:42.452398 2604 scope.go:117] "RemoveContainer" containerID="6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b" Jan 29 16:22:42.452656 containerd[1510]: time="2025-01-29T16:22:42.452623637Z" level=error msg="ContainerStatus for \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\": not found" Jan 29 16:22:42.452904 kubelet[2604]: E0129 16:22:42.452865 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\": not found" containerID="6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b" Jan 29 16:22:42.452954 kubelet[2604]: I0129 16:22:42.452899 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b"} err="failed to get container status \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6345b124ac27933bfbd58b0c735679577f7fac15961c064f0a01ffc780d6e73b\": not found" Jan 29 16:22:42.452954 kubelet[2604]: I0129 16:22:42.452927 2604 scope.go:117] "RemoveContainer" containerID="d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d" Jan 29 16:22:42.453126 containerd[1510]: time="2025-01-29T16:22:42.453097852Z" level=error 
msg="ContainerStatus for \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\": not found" Jan 29 16:22:42.453257 kubelet[2604]: E0129 16:22:42.453233 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\": not found" containerID="d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d" Jan 29 16:22:42.453316 kubelet[2604]: I0129 16:22:42.453257 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d"} err="failed to get container status \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3df4f7a6e55840519fb4b9a1a3abbabc94e0a7330188c692ae168ee8837611d\": not found" Jan 29 16:22:42.453316 kubelet[2604]: I0129 16:22:42.453274 2604 scope.go:117] "RemoveContainer" containerID="c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c" Jan 29 16:22:42.453434 containerd[1510]: time="2025-01-29T16:22:42.453402232Z" level=error msg="ContainerStatus for \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\": not found" Jan 29 16:22:42.453639 kubelet[2604]: E0129 16:22:42.453496 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\": not found" containerID="c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c" Jan 29 16:22:42.453639 kubelet[2604]: I0129 16:22:42.453516 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c"} err="failed to get container status \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7c9c5aac0a65b61af542fdbe3e9402f5f1835ce17667824c555764cc668896c\": not found" Jan 29 16:22:42.453639 kubelet[2604]: I0129 16:22:42.453533 2604 scope.go:117] "RemoveContainer" containerID="1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2" Jan 29 16:22:42.453745 containerd[1510]: time="2025-01-29T16:22:42.453704679Z" level=error msg="ContainerStatus for \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\": not found" Jan 29 16:22:42.453839 kubelet[2604]: E0129 16:22:42.453817 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\": not found" containerID="1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2" Jan 29 16:22:42.453892 kubelet[2604]: I0129 16:22:42.453836 2604 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2"} err="failed to get container status \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bd18f193412a428955d1af97f098c14659db11c0c0620ed6adf95e61f87a0c2\": not found" Jan 29 16:22:42.453892 kubelet[2604]: I0129 16:22:42.453849 2604 scope.go:117] "RemoveContainer" containerID="55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3" Jan 29 16:22:42.454690 containerd[1510]: time="2025-01-29T16:22:42.454661876Z" level=info msg="RemoveContainer for \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\"" Jan 29 16:22:42.458138 containerd[1510]: time="2025-01-29T16:22:42.458100878Z" level=info msg="RemoveContainer for \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\" returns successfully" Jan 29 16:22:42.458244 kubelet[2604]: I0129 16:22:42.458213 2604 scope.go:117] "RemoveContainer" containerID="55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3" Jan 29 16:22:42.458365 containerd[1510]: time="2025-01-29T16:22:42.458331408Z" level=error msg="ContainerStatus for \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\": not found" Jan 29 16:22:42.458481 kubelet[2604]: E0129 16:22:42.458446 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\": not found" containerID="55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3" Jan 29 16:22:42.458481 kubelet[2604]: I0129 16:22:42.458472 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3"} err="failed to get container status \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"55e5ab91d4fb37946dbc7a678a3fed36f3a3503743ce72945a4f85d1087f34b3\": not found" Jan 29 16:22:42.819157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f-rootfs.mount: Deactivated successfully. Jan 29 16:22:42.819328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2b408eefb48d832c6a55f38e93631b603f20f5855f80ddc6b70959f30bd401f-shm.mount: Deactivated successfully. Jan 29 16:22:42.819447 systemd[1]: var-lib-kubelet-pods-9457d0b1\x2df09a\x2d4e10\x2d8110\x2db61d3716790e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzstn9.mount: Deactivated successfully. Jan 29 16:22:42.819559 systemd[1]: var-lib-kubelet-pods-f173e8f1\x2d728d\x2d4219\x2d8446\x2de38d80e68400-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwd9jl.mount: Deactivated successfully. Jan 29 16:22:42.819721 systemd[1]: var-lib-kubelet-pods-f173e8f1\x2d728d\x2d4219\x2d8446\x2de38d80e68400-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:22:42.819865 systemd[1]: var-lib-kubelet-pods-f173e8f1\x2d728d\x2d4219\x2d8446\x2de38d80e68400-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 29 16:22:43.773587 sshd[4271]: Connection closed by 10.0.0.1 port 45140 Jan 29 16:22:43.774040 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:43.784857 systemd[1]: sshd@23-10.0.0.106:22-10.0.0.1:45140.service: Deactivated successfully. Jan 29 16:22:43.787154 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:22:43.788708 systemd-logind[1493]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:22:43.797829 systemd[1]: Started sshd@24-10.0.0.106:22-10.0.0.1:34468.service - OpenSSH per-connection server daemon (10.0.0.1:34468). Jan 29 16:22:43.798906 systemd-logind[1493]: Removed session 24. Jan 29 16:22:43.834060 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 34468 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:43.835743 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:43.840205 systemd-logind[1493]: New session 25 of user core. Jan 29 16:22:43.849700 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 16:22:44.189062 kubelet[2604]: I0129 16:22:44.189012 2604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9457d0b1-f09a-4e10-8110-b61d3716790e" path="/var/lib/kubelet/pods/9457d0b1-f09a-4e10-8110-b61d3716790e/volumes" Jan 29 16:22:44.189625 kubelet[2604]: I0129 16:22:44.189603 2604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f173e8f1-728d-4219-8446-e38d80e68400" path="/var/lib/kubelet/pods/f173e8f1-728d-4219-8446-e38d80e68400/volumes" Jan 29 16:22:44.246875 kubelet[2604]: E0129 16:22:44.246831 2604 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:22:44.536385 sshd[4431]: Connection closed by 10.0.0.1 port 34468 Jan 29 16:22:44.537150 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:44.558201 kubelet[2604]: E0129 16:22:44.553976 2604 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9457d0b1-f09a-4e10-8110-b61d3716790e" containerName="cilium-operator" Jan 29 16:22:44.558201 kubelet[2604]: E0129 16:22:44.554017 2604 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f173e8f1-728d-4219-8446-e38d80e68400" containerName="clean-cilium-state" Jan 29 16:22:44.558201 kubelet[2604]: E0129 16:22:44.554027 2604 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f173e8f1-728d-4219-8446-e38d80e68400" containerName="cilium-agent" Jan 29 16:22:44.558201 kubelet[2604]: E0129 16:22:44.554039 2604 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f173e8f1-728d-4219-8446-e38d80e68400" containerName="mount-cgroup" Jan 29 16:22:44.558201 kubelet[2604]: E0129 16:22:44.554046 2604 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f173e8f1-728d-4219-8446-e38d80e68400" containerName="apply-sysctl-overwrites" Jan 29 16:22:44.558201 kubelet[2604]: E0129 16:22:44.554054 2604 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f173e8f1-728d-4219-8446-e38d80e68400" containerName="mount-bpf-fs" Jan 29 16:22:44.558201 kubelet[2604]: I0129 16:22:44.554086 2604 memory_manager.go:354] "RemoveStaleState removing state" podUID="9457d0b1-f09a-4e10-8110-b61d3716790e" containerName="cilium-operator" Jan 29 16:22:44.558201 kubelet[2604]: I0129 16:22:44.554093 2604 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f173e8f1-728d-4219-8446-e38d80e68400" containerName="cilium-agent" Jan 29 16:22:44.554544 systemd[1]: sshd@24-10.0.0.106:22-10.0.0.1:34468.service: Deactivated successfully. Jan 29 16:22:44.556828 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 16:22:44.565656 systemd-logind[1493]: Session 25 logged out. Waiting for processes to exit. Jan 29 16:22:44.574087 systemd[1]: Started sshd@25-10.0.0.106:22-10.0.0.1:34480.service - OpenSSH per-connection server daemon (10.0.0.1:34480). Jan 29 16:22:44.581210 systemd-logind[1493]: Removed session 25. Jan 29 16:22:44.589544 systemd[1]: Created slice kubepods-burstable-pod5aa06eec_ac30_445e_b33b_7d92ffc0524f.slice - libcontainer container kubepods-burstable-pod5aa06eec_ac30_445e_b33b_7d92ffc0524f.slice. Jan 29 16:22:44.617507 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 34480 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:44.619239 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:44.624098 systemd-logind[1493]: New session 26 of user core. Jan 29 16:22:44.634833 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 16:22:44.659861 kubelet[2604]: I0129 16:22:44.659805 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-cilium-run\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.659861 kubelet[2604]: I0129 16:22:44.659840 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5aa06eec-ac30-445e-b33b-7d92ffc0524f-cilium-config-path\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.659861 kubelet[2604]: I0129 16:22:44.659872 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-xtables-lock\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660134 kubelet[2604]: I0129 16:22:44.659888 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5aa06eec-ac30-445e-b33b-7d92ffc0524f-clustermesh-secrets\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660134 kubelet[2604]: I0129 16:22:44.659903 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-cilium-cgroup\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660134 kubelet[2604]: I0129 16:22:44.659916 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-cni-path\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660134 kubelet[2604]: I0129 16:22:44.659929 2604 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-lib-modules\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660134 kubelet[2604]: I0129 16:22:44.659941 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-host-proc-sys-net\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660134 kubelet[2604]: I0129 16:22:44.659956 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5aa06eec-ac30-445e-b33b-7d92ffc0524f-cilium-ipsec-secrets\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660312 kubelet[2604]: I0129 16:22:44.659973 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-host-proc-sys-kernel\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660312 kubelet[2604]: I0129 16:22:44.659989 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-bpf-maps\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660312 kubelet[2604]: I0129 16:22:44.660002 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-etc-cni-netd\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660312 kubelet[2604]: I0129 16:22:44.660031 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prfng\" (UniqueName: \"kubernetes.io/projected/5aa06eec-ac30-445e-b33b-7d92ffc0524f-kube-api-access-prfng\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660312 kubelet[2604]: I0129 16:22:44.660047 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5aa06eec-ac30-445e-b33b-7d92ffc0524f-hostproc\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.660312 kubelet[2604]: I0129 16:22:44.660063 2604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5aa06eec-ac30-445e-b33b-7d92ffc0524f-hubble-tls\") pod \"cilium-mlvtt\" (UID: \"5aa06eec-ac30-445e-b33b-7d92ffc0524f\") " pod="kube-system/cilium-mlvtt" Jan 29 16:22:44.688265 sshd[4445]: Connection closed by 10.0.0.1 port 34480 Jan 29 16:22:44.688896 sshd-session[4442]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:44.701743 systemd[1]: sshd@25-10.0.0.106:22-10.0.0.1:34480.service: Deactivated 
successfully. Jan 29 16:22:44.703770 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 16:22:44.705495 systemd-logind[1493]: Session 26 logged out. Waiting for processes to exit. Jan 29 16:22:44.718832 systemd[1]: Started sshd@26-10.0.0.106:22-10.0.0.1:34484.service - OpenSSH per-connection server daemon (10.0.0.1:34484). Jan 29 16:22:44.719929 systemd-logind[1493]: Removed session 26. Jan 29 16:22:44.758879 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 34484 ssh2: RSA SHA256:iTwLJl+w7Ez7JFCWAsEn1TN8GCNd/VrH/0gLrHyIQs8 Jan 29 16:22:44.760379 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:44.783869 systemd-logind[1493]: New session 27 of user core. Jan 29 16:22:44.803926 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 16:22:44.895248 kubelet[2604]: E0129 16:22:44.894539 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:44.896121 containerd[1510]: time="2025-01-29T16:22:44.895712582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlvtt,Uid:5aa06eec-ac30-445e-b33b-7d92ffc0524f,Namespace:kube-system,Attempt:0,}" Jan 29 16:22:44.928375 containerd[1510]: time="2025-01-29T16:22:44.928268799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:22:44.929459 containerd[1510]: time="2025-01-29T16:22:44.929355632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:22:44.929459 containerd[1510]: time="2025-01-29T16:22:44.929428100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:22:44.929745 containerd[1510]: time="2025-01-29T16:22:44.929700630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:22:44.953842 systemd[1]: Started cri-containerd-a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275.scope - libcontainer container a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275. 
Jan 29 16:22:44.979906 containerd[1510]: time="2025-01-29T16:22:44.979841562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlvtt,Uid:5aa06eec-ac30-445e-b33b-7d92ffc0524f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\"" Jan 29 16:22:44.980858 kubelet[2604]: E0129 16:22:44.980820 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:44.983592 containerd[1510]: time="2025-01-29T16:22:44.983526898Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:22:44.999742 containerd[1510]: time="2025-01-29T16:22:44.999666107Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d18abd4897584ed6b69d6a7fd3775ab5a896031bdc51256d357e410fbaea3fb6\"" Jan 29 16:22:45.000366 containerd[1510]: time="2025-01-29T16:22:45.000297801Z" level=info msg="StartContainer for \"d18abd4897584ed6b69d6a7fd3775ab5a896031bdc51256d357e410fbaea3fb6\"" Jan 29 16:22:45.027735 systemd[1]: Started cri-containerd-d18abd4897584ed6b69d6a7fd3775ab5a896031bdc51256d357e410fbaea3fb6.scope - libcontainer container d18abd4897584ed6b69d6a7fd3775ab5a896031bdc51256d357e410fbaea3fb6. Jan 29 16:22:45.053983 containerd[1510]: time="2025-01-29T16:22:45.053914127Z" level=info msg="StartContainer for \"d18abd4897584ed6b69d6a7fd3775ab5a896031bdc51256d357e410fbaea3fb6\" returns successfully" Jan 29 16:22:45.066187 systemd[1]: cri-containerd-d18abd4897584ed6b69d6a7fd3775ab5a896031bdc51256d357e410fbaea3fb6.scope: Deactivated successfully. 
Jan 29 16:22:45.104116 containerd[1510]: time="2025-01-29T16:22:45.104031444Z" level=info msg="shim disconnected" id=d18abd4897584ed6b69d6a7fd3775ab5a896031bdc51256d357e410fbaea3fb6 namespace=k8s.io Jan 29 16:22:45.104116 containerd[1510]: time="2025-01-29T16:22:45.104095436Z" level=warning msg="cleaning up after shim disconnected" id=d18abd4897584ed6b69d6a7fd3775ab5a896031bdc51256d357e410fbaea3fb6 namespace=k8s.io Jan 29 16:22:45.104116 containerd[1510]: time="2025-01-29T16:22:45.104106015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:45.186936 kubelet[2604]: E0129 16:22:45.186902 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:45.423369 kubelet[2604]: E0129 16:22:45.423328 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:45.425228 containerd[1510]: time="2025-01-29T16:22:45.425197491Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:22:45.439639 containerd[1510]: time="2025-01-29T16:22:45.439415857Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc46e8d6ba3ac0e998e381d51a7b5fdf5b439165ad22d9044fa2d336699e34f4\"" Jan 29 16:22:45.440624 containerd[1510]: time="2025-01-29T16:22:45.440165656Z" level=info msg="StartContainer for \"dc46e8d6ba3ac0e998e381d51a7b5fdf5b439165ad22d9044fa2d336699e34f4\"" Jan 29 16:22:45.469803 systemd[1]: Started cri-containerd-dc46e8d6ba3ac0e998e381d51a7b5fdf5b439165ad22d9044fa2d336699e34f4.scope - libcontainer container dc46e8d6ba3ac0e998e381d51a7b5fdf5b439165ad22d9044fa2d336699e34f4. Jan 29 16:22:45.497264 containerd[1510]: time="2025-01-29T16:22:45.497216315Z" level=info msg="StartContainer for \"dc46e8d6ba3ac0e998e381d51a7b5fdf5b439165ad22d9044fa2d336699e34f4\" returns successfully" Jan 29 16:22:45.504475 systemd[1]: cri-containerd-dc46e8d6ba3ac0e998e381d51a7b5fdf5b439165ad22d9044fa2d336699e34f4.scope: Deactivated successfully. 
Jan 29 16:22:45.533259 containerd[1510]: time="2025-01-29T16:22:45.533187956Z" level=info msg="shim disconnected" id=dc46e8d6ba3ac0e998e381d51a7b5fdf5b439165ad22d9044fa2d336699e34f4 namespace=k8s.io Jan 29 16:22:45.533259 containerd[1510]: time="2025-01-29T16:22:45.533256216Z" level=warning msg="cleaning up after shim disconnected" id=dc46e8d6ba3ac0e998e381d51a7b5fdf5b439165ad22d9044fa2d336699e34f4 namespace=k8s.io Jan 29 16:22:45.533472 containerd[1510]: time="2025-01-29T16:22:45.533269140Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:45.855968 kubelet[2604]: I0129 16:22:45.855902 2604 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:22:45Z","lastTransitionTime":"2025-01-29T16:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 16:22:46.427211 kubelet[2604]: E0129 16:22:46.427173 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:46.428646 containerd[1510]: time="2025-01-29T16:22:46.428612323Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:22:46.450397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155704601.mount: Deactivated successfully. Jan 29 16:22:46.455759 containerd[1510]: time="2025-01-29T16:22:46.455710151Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d57382a22b4551b35c981b35fe59bf27954e462e66eae458a0a466b805d4be0\"" Jan 29 16:22:46.456329 containerd[1510]: time="2025-01-29T16:22:46.456290295Z" level=info msg="StartContainer for \"2d57382a22b4551b35c981b35fe59bf27954e462e66eae458a0a466b805d4be0\"" Jan 29 16:22:46.490820 systemd[1]: Started cri-containerd-2d57382a22b4551b35c981b35fe59bf27954e462e66eae458a0a466b805d4be0.scope - libcontainer container 2d57382a22b4551b35c981b35fe59bf27954e462e66eae458a0a466b805d4be0. Jan 29 16:22:46.525442 containerd[1510]: time="2025-01-29T16:22:46.525368091Z" level=info msg="StartContainer for \"2d57382a22b4551b35c981b35fe59bf27954e462e66eae458a0a466b805d4be0\" returns successfully" Jan 29 16:22:46.525837 systemd[1]: cri-containerd-2d57382a22b4551b35c981b35fe59bf27954e462e66eae458a0a466b805d4be0.scope: Deactivated successfully. Jan 29 16:22:46.552926 containerd[1510]: time="2025-01-29T16:22:46.552852868Z" level=info msg="shim disconnected" id=2d57382a22b4551b35c981b35fe59bf27954e462e66eae458a0a466b805d4be0 namespace=k8s.io Jan 29 16:22:46.552926 containerd[1510]: time="2025-01-29T16:22:46.552906189Z" level=warning msg="cleaning up after shim disconnected" id=2d57382a22b4551b35c981b35fe59bf27954e462e66eae458a0a466b805d4be0 namespace=k8s.io Jan 29 16:22:46.552926 containerd[1510]: time="2025-01-29T16:22:46.552914706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:46.767194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d57382a22b4551b35c981b35fe59bf27954e462e66eae458a0a466b805d4be0-rootfs.mount: Deactivated successfully. 
Jan 29 16:22:47.430428 kubelet[2604]: E0129 16:22:47.430396 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:47.431867 containerd[1510]: time="2025-01-29T16:22:47.431830376Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:22:47.447484 containerd[1510]: time="2025-01-29T16:22:47.447431062Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12b33be8a24587d9ed3011c35bd7482c306de1626244565aa65f9354e7266b73\"" Jan 29 16:22:47.447982 containerd[1510]: time="2025-01-29T16:22:47.447883593Z" level=info msg="StartContainer for \"12b33be8a24587d9ed3011c35bd7482c306de1626244565aa65f9354e7266b73\"" Jan 29 16:22:47.475723 systemd[1]: Started cri-containerd-12b33be8a24587d9ed3011c35bd7482c306de1626244565aa65f9354e7266b73.scope - libcontainer container 12b33be8a24587d9ed3011c35bd7482c306de1626244565aa65f9354e7266b73. Jan 29 16:22:47.501842 systemd[1]: cri-containerd-12b33be8a24587d9ed3011c35bd7482c306de1626244565aa65f9354e7266b73.scope: Deactivated successfully. Jan 29 16:22:47.503681 containerd[1510]: time="2025-01-29T16:22:47.503642056Z" level=info msg="StartContainer for \"12b33be8a24587d9ed3011c35bd7482c306de1626244565aa65f9354e7266b73\" returns successfully" Jan 29 16:22:47.527625 containerd[1510]: time="2025-01-29T16:22:47.527547847Z" level=info msg="shim disconnected" id=12b33be8a24587d9ed3011c35bd7482c306de1626244565aa65f9354e7266b73 namespace=k8s.io Jan 29 16:22:47.527625 containerd[1510]: time="2025-01-29T16:22:47.527617389Z" level=warning msg="cleaning up after shim disconnected" id=12b33be8a24587d9ed3011c35bd7482c306de1626244565aa65f9354e7266b73 namespace=k8s.io Jan 29 16:22:47.527625 containerd[1510]: time="2025-01-29T16:22:47.527626977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:22:47.766755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12b33be8a24587d9ed3011c35bd7482c306de1626244565aa65f9354e7266b73-rootfs.mount: Deactivated successfully. 
Jan 29 16:22:48.434059 kubelet[2604]: E0129 16:22:48.434027 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:48.435695 containerd[1510]: time="2025-01-29T16:22:48.435656632Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:22:48.451976 containerd[1510]: time="2025-01-29T16:22:48.451926862Z" level=info msg="CreateContainer within sandbox \"a54341b4955e46a6f8d787a7c0cab34c4244682d3b9bc0d84805b2e8e2c8c275\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aff47699988f65e278df836cd1bb24ecdadaefd1c90604b24ebabfa837db2853\"" Jan 29 16:22:48.452418 containerd[1510]: time="2025-01-29T16:22:48.452334197Z" level=info msg="StartContainer for \"aff47699988f65e278df836cd1bb24ecdadaefd1c90604b24ebabfa837db2853\"" Jan 29 16:22:48.482689 systemd[1]: Started cri-containerd-aff47699988f65e278df836cd1bb24ecdadaefd1c90604b24ebabfa837db2853.scope - libcontainer container aff47699988f65e278df836cd1bb24ecdadaefd1c90604b24ebabfa837db2853. Jan 29 16:22:48.512252 containerd[1510]: time="2025-01-29T16:22:48.512210396Z" level=info msg="StartContainer for \"aff47699988f65e278df836cd1bb24ecdadaefd1c90604b24ebabfa837db2853\" returns successfully" Jan 29 16:22:48.921601 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 16:22:49.187093 kubelet[2604]: E0129 16:22:49.186958 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:49.443436 kubelet[2604]: E0129 16:22:49.443320 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:49.462513 kubelet[2604]: I0129 16:22:49.462364 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mlvtt" podStartSLOduration=5.462346357 podStartE2EDuration="5.462346357s" podCreationTimestamp="2025-01-29 16:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:22:49.46196937 +0000 UTC m=+85.420012309" watchObservedRunningTime="2025-01-29 16:22:49.462346357 +0000 UTC m=+85.420389276" Jan 29 16:22:50.896126 kubelet[2604]: E0129 16:22:50.896066 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:52.022404 systemd-networkd[1428]: lxc_health: Link UP Jan 29 16:22:52.022864 systemd-networkd[1428]: lxc_health: Gained carrier Jan 29 16:22:52.897791 kubelet[2604]: E0129 16:22:52.896885 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:53.187148 kubelet[2604]: E0129 16:22:53.186687 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:53.450262 kubelet[2604]: E0129 16:22:53.450123 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:54.080789 systemd-networkd[1428]: lxc_health: Gained IPv6LL Jan 29 16:22:54.186891 kubelet[2604]: E0129 16:22:54.186837 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:54.452613 kubelet[2604]: E0129 16:22:54.452290 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 16:22:55.336346 kubelet[2604]: E0129 16:22:55.336265 2604 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37454->127.0.0.1:34805: write tcp 127.0.0.1:37454->127.0.0.1:34805: write: broken pipe Jan 29 16:22:57.373535 systemd[1]: run-containerd-runc-k8s.io-aff47699988f65e278df836cd1bb24ecdadaefd1c90604b24ebabfa837db2853-runc.wLewXI.mount: Deactivated successfully. Jan 29 16:22:59.529317 sshd[4458]: Connection closed by 10.0.0.1 port 34484 Jan 29 16:22:59.529801 sshd-session[4451]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:59.534186 systemd[1]: sshd@26-10.0.0.106:22-10.0.0.1:34484.service: Deactivated successfully. Jan 29 16:22:59.536506 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 16:22:59.537453 systemd-logind[1493]: Session 27 logged out. Waiting for processes to exit. Jan 29 16:22:59.538313 systemd-logind[1493]: Removed session 27. Jan 29 16:23:00.187117 kubelet[2604]: E0129 16:23:00.187066 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"