Mar 17 17:39:25.980786 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025 Mar 17 17:39:25.980823 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:39:25.980836 kernel: BIOS-provided physical RAM map: Mar 17 17:39:25.980843 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 17 17:39:25.980849 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 17 17:39:25.980855 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 17 17:39:25.980862 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 17 17:39:25.980869 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 17 17:39:25.980875 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Mar 17 17:39:25.980881 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Mar 17 17:39:25.980889 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Mar 17 17:39:25.980899 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Mar 17 17:39:25.980905 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Mar 17 17:39:25.980911 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Mar 17 17:39:25.980919 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Mar 17 17:39:25.980928 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 17 17:39:25.980937 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Mar 17 17:39:25.980944 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Mar 17 17:39:25.980950 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Mar 17 17:39:25.980957 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Mar 17 17:39:25.980964 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Mar 17 17:39:25.980970 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 17 17:39:25.980977 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Mar 17 17:39:25.980983 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 17 17:39:25.980990 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Mar 17 17:39:25.980996 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 17 17:39:25.981003 kernel: NX (Execute Disable) protection: active Mar 17 17:39:25.981012 kernel: APIC: Static calls initialized Mar 17 17:39:25.981019 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Mar 17 17:39:25.981026 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Mar 17 17:39:25.981032 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Mar 17 17:39:25.981039 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Mar 17 17:39:25.981045 kernel: extended physical RAM map: Mar 17 17:39:25.981052 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 17 17:39:25.981059 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Mar 17 17:39:25.981065 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 17 17:39:25.981072 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Mar 17 17:39:25.981079 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 17 17:39:25.981088 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Mar 17 17:39:25.981094 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Mar 17 17:39:25.981109 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Mar 17 17:39:25.981117 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Mar 17 17:39:25.981126 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Mar 17 17:39:25.981133 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Mar 17 17:39:25.981142 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Mar 17 17:39:25.981152 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Mar 17 17:39:25.981159 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Mar 17 17:39:25.981166 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Mar 17 17:39:25.981173 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Mar 17 17:39:25.981180 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 17 17:39:25.981187 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Mar 17 17:39:25.981194 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Mar 17 17:39:25.981201 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Mar 17 17:39:25.981208 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Mar 17 17:39:25.981218 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Mar 17 17:39:25.981225 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 17 17:39:25.981232 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Mar 17 17:39:25.981239 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 17 17:39:25.981248 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Mar 17 17:39:25.981255 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 17 17:39:25.981262 kernel: efi: EFI v2.7 by EDK II Mar 17 17:39:25.981269 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Mar 17 17:39:25.981276 kernel: random: crng init done Mar 17 17:39:25.981284 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Mar 17 17:39:25.981291 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Mar 17 17:39:25.981300 kernel: secureboot: Secure boot disabled Mar 17 17:39:25.981307 kernel: SMBIOS 2.8 present. 
Mar 17 17:39:25.981314 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Mar 17 17:39:25.981321 kernel: Hypervisor detected: KVM Mar 17 17:39:25.981328 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 17 17:39:25.981335 kernel: kvm-clock: using sched offset of 4679172498 cycles Mar 17 17:39:25.981343 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 17 17:39:25.981350 kernel: tsc: Detected 2794.746 MHz processor Mar 17 17:39:25.981358 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 17:39:25.981365 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 17:39:25.981372 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Mar 17 17:39:25.981382 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 17 17:39:25.981389 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 17:39:25.981396 kernel: Using GB pages for direct mapping Mar 17 17:39:25.981403 kernel: ACPI: Early table checksum verification disabled Mar 17 17:39:25.981410 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 17 17:39:25.981418 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 17 17:39:25.981425 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:39:25.981432 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:39:25.981439 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 17 17:39:25.981449 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:39:25.981456 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:39:25.981463 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:39:25.981470 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:39:25.981478 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 17 17:39:25.981494 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 17 17:39:25.981501 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Mar 17 17:39:25.981508 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 17 17:39:25.981515 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 17 17:39:25.981525 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 17 17:39:25.981532 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 17 17:39:25.981539 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 17 17:39:25.981546 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 17 17:39:25.981553 kernel: No NUMA configuration found Mar 17 17:39:25.981560 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Mar 17 17:39:25.981567 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Mar 17 17:39:25.981575 kernel: Zone ranges: Mar 17 17:39:25.981586 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 17:39:25.981601 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Mar 17 17:39:25.981617 kernel: Normal empty Mar 17 17:39:25.981624 kernel: Movable zone start for each node Mar 17 17:39:25.981648 kernel: Early memory node ranges Mar 17 17:39:25.981655 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Mar 17 17:39:25.981662 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 17 17:39:25.981670 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 17 17:39:25.981677 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Mar 17 17:39:25.981688 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Mar 17 17:39:25.981699 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Mar 17 17:39:25.981706 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Mar 17 17:39:25.981713 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Mar 17 17:39:25.981720 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Mar 17 17:39:25.981728 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 17:39:25.981735 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 17 17:39:25.981750 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 17 17:39:25.981760 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 17:39:25.981767 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Mar 17 17:39:25.981774 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Mar 17 17:39:25.981782 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Mar 17 17:39:25.981792 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Mar 17 17:39:25.981802 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Mar 17 17:39:25.981809 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 17 17:39:25.981816 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 17 17:39:25.981824 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 17 17:39:25.981831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 17 17:39:25.981841 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 17 17:39:25.981849 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 17:39:25.981856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 17 17:39:25.981864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 17 17:39:25.981871 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 17:39:25.981878 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 17 17:39:25.981886 kernel: TSC deadline timer available Mar 17 17:39:25.981893 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 17 17:39:25.981901 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 17 17:39:25.981911 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 17 17:39:25.981918 kernel: kvm-guest: setup PV sched yield Mar 17 17:39:25.981936 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Mar 17 17:39:25.981954 kernel: Booting paravirtualized kernel on KVM Mar 17 17:39:25.981963 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 17:39:25.981971 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 17 17:39:25.981978 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Mar 17 17:39:25.981986 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Mar 17 17:39:25.981993 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 17 17:39:25.982003 kernel: kvm-guest: PV spinlocks enabled Mar 17 17:39:25.982011 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 17 17:39:25.982020 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:39:25.982028 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:39:25.982038 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:39:25.982046 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:39:25.982060 kernel: Fallback order for Node 0: 0 Mar 17 17:39:25.982068 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Mar 17 17:39:25.982078 kernel: Policy zone: DMA32 Mar 17 17:39:25.982086 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:39:25.982094 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 175776K reserved, 0K cma-reserved) Mar 17 17:39:25.982101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 17 17:39:25.982109 kernel: ftrace: allocating 37938 entries in 149 pages Mar 17 17:39:25.982116 kernel: ftrace: allocated 149 pages with 4 groups Mar 17 17:39:25.982124 kernel: Dynamic Preempt: voluntary Mar 17 17:39:25.982132 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:39:25.982140 kernel: rcu: RCU event tracing is enabled. Mar 17 17:39:25.982150 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 17 17:39:25.982158 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:39:25.982166 kernel: Rude variant of Tasks RCU enabled. Mar 17 17:39:25.982173 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:39:25.982181 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 17 17:39:25.982189 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 17 17:39:25.982196 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 17 17:39:25.982204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:39:25.982211 kernel: Console: colour dummy device 80x25 Mar 17 17:39:25.982218 kernel: printk: console [ttyS0] enabled Mar 17 17:39:25.982228 kernel: ACPI: Core revision 20230628 Mar 17 17:39:25.982236 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 17 17:39:25.982244 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 17:39:25.982251 kernel: x2apic enabled Mar 17 17:39:25.982259 kernel: APIC: Switched APIC routing to: physical x2apic Mar 17 17:39:25.982266 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 17 17:39:25.982274 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 17 17:39:25.982281 kernel: kvm-guest: setup PV IPIs Mar 17 17:39:25.982289 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 17 17:39:25.982299 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 17 17:39:25.982306 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794746) Mar 17 17:39:25.982314 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 17 17:39:25.982321 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 17 17:39:25.982329 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 17 17:39:25.982336 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 17:39:25.982344 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 17:39:25.982351 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 17:39:25.982361 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 17:39:25.982369 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Mar 17 17:39:25.982376 kernel: RETBleed: Mitigation: untrained return thunk Mar 17 17:39:25.982384 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 17 17:39:25.982391 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 17 17:39:25.982399 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 17 17:39:25.982410 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 17 17:39:25.982418 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 17 17:39:25.982425 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 17 17:39:25.982435 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 17 17:39:25.982443 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 17 17:39:25.982451 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 17 17:39:25.982458 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 17 17:39:25.982466 kernel: Freeing SMP alternatives memory: 32K Mar 17 17:39:25.982473 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:39:25.982481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:39:25.982496 kernel: landlock: Up and running. Mar 17 17:39:25.982503 kernel: SELinux: Initializing. Mar 17 17:39:25.982520 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:39:25.982528 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:39:25.982536 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Mar 17 17:39:25.982544 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:39:25.982551 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:39:25.982559 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:39:25.982567 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 17 17:39:25.982574 kernel: ... version: 0 Mar 17 17:39:25.982584 kernel: ... bit width: 48 Mar 17 17:39:25.982592 kernel: ... generic registers: 6 Mar 17 17:39:25.982599 kernel: ... value mask: 0000ffffffffffff Mar 17 17:39:25.982612 kernel: ... max period: 00007fffffffffff Mar 17 17:39:25.982621 kernel: ... fixed-purpose events: 0 Mar 17 17:39:25.982628 kernel: ... 
event mask: 000000000000003f Mar 17 17:39:25.982649 kernel: signal: max sigframe size: 1776 Mar 17 17:39:25.982656 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:39:25.982664 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:39:25.982672 kernel: smp: Bringing up secondary CPUs ... Mar 17 17:39:25.982682 kernel: smpboot: x86: Booting SMP configuration: Mar 17 17:39:25.982690 kernel: .... node #0, CPUs: #1 #2 #3 Mar 17 17:39:25.982697 kernel: smp: Brought up 1 node, 4 CPUs Mar 17 17:39:25.982705 kernel: smpboot: Max logical packages: 1 Mar 17 17:39:25.982726 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Mar 17 17:39:25.982734 kernel: devtmpfs: initialized Mar 17 17:39:25.982741 kernel: x86/mm: Memory block size: 128MB Mar 17 17:39:25.982749 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 17 17:39:25.982757 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 17 17:39:25.982767 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Mar 17 17:39:25.982775 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 17 17:39:25.982782 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Mar 17 17:39:25.982790 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 17 17:39:25.982798 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:39:25.982806 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 17 17:39:25.982813 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:39:25.982821 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:39:25.982828 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:39:25.982846 kernel: audit: type=2000 audit(1742233164.166:1): state=initialized audit_enabled=0 res=1 Mar 17 17:39:25.982854 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:39:25.982861 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 17:39:25.982869 kernel: cpuidle: using governor menu Mar 17 17:39:25.982876 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:39:25.982884 kernel: dca service started, version 1.12.1 Mar 17 17:39:25.982891 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Mar 17 17:39:25.982899 kernel: PCI: Using configuration type 1 for base access Mar 17 17:39:25.982906 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 17 17:39:25.982916 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:39:25.982924 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:39:25.982931 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:39:25.982940 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:39:25.982950 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:39:25.982960 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:39:25.982970 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:39:25.982980 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:39:25.982989 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:39:25.982999 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 17 17:39:25.983007 kernel: ACPI: Interpreter enabled Mar 17 17:39:25.983015 kernel: ACPI: PM: (supports S0 S3 S5) Mar 17 17:39:25.983022 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 17:39:25.983030 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 17:39:25.983037 kernel: PCI: Using E820 reservations for host bridge windows Mar 17 17:39:25.983045 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 17 17:39:25.983052 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 17:39:25.983289 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:39:25.983445 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 17 17:39:25.983589 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 17 17:39:25.983601 kernel: PCI host bridge to bus 0000:00 Mar 17 17:39:25.983779 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 17:39:25.983979 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 17 17:39:25.984097 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 17 17:39:25.984244 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Mar 17 17:39:25.984372 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Mar 17 17:39:25.984512 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Mar 17 17:39:25.984717 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 17:39:25.984910 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 17 17:39:25.985075 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 17 17:39:25.985206 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 17 17:39:25.985327 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 17 17:39:25.985446 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 17 17:39:25.985580 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 17 17:39:25.985717 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 17:39:25.985857 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 17:39:25.985982 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 17 17:39:25.986108 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 17 17:39:25.986230 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Mar 17 17:39:25.986380 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 17 17:39:25.986514 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 
17 17:39:25.986661 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 17 17:39:25.986791 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Mar 17 17:39:25.986928 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 17 17:39:25.987057 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 17 17:39:25.987177 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 17 17:39:25.987298 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Mar 17 17:39:25.987419 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 17 17:39:25.987567 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 17 17:39:25.987727 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 17 17:39:25.987898 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 17 17:39:25.988046 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 17 17:39:25.988178 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 17 17:39:25.988353 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 17 17:39:25.988743 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 17 17:39:25.988755 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 17 17:39:25.988763 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 17 17:39:25.988771 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 17:39:25.988784 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 17 17:39:25.988792 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 17 17:39:25.988799 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 17 17:39:25.988807 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 17 17:39:25.988814 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 17 17:39:25.988822 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 17 17:39:25.988830 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 17 17:39:25.988838 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 17 17:39:25.988845 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 17 17:39:25.988855 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 17 17:39:25.988863 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 17 17:39:25.988870 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 17 17:39:25.988883 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 17 17:39:25.988901 kernel: iommu: Default domain type: Translated Mar 17 17:39:25.988919 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 17:39:25.988937 kernel: efivars: Registered efivars operations Mar 17 17:39:25.988952 kernel: PCI: Using ACPI for IRQ routing Mar 17 17:39:25.988960 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 17:39:25.988971 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 17 17:39:25.988979 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Mar 17 17:39:25.988986 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Mar 17 17:39:25.988994 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Mar 17 17:39:25.989001 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Mar 17 17:39:25.989009 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Mar 17 17:39:25.989016 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Mar 17 17:39:25.989024 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Mar 17 17:39:25.989154 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 17 17:39:25.989283 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 17 17:39:25.989429 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 17:39:25.989441 kernel: vgaarb: loaded Mar 17 17:39:25.989449 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 17 17:39:25.989457 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 17 17:39:25.989465 kernel: clocksource: Switched to clocksource kvm-clock Mar 17 17:39:25.989473 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:39:25.989480 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:39:25.989503 kernel: pnp: PnP ACPI init Mar 17 17:39:25.989697 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Mar 17 17:39:25.989711 kernel: pnp: PnP ACPI: found 6 devices Mar 17 17:39:25.989719 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 17:39:25.989727 kernel: NET: Registered PF_INET protocol family Mar 17 17:39:25.989755 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:39:25.989765 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:39:25.989773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:39:25.989791 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 17:39:25.989805 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:39:25.989813 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:39:25.989821 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:39:25.989829 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:39:25.989839 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:39:25.989847 kernel: NET: Registered PF_XDP protocol family Mar 17 17:39:25.989980 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 17 17:39:25.990177 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 17 17:39:25.990294 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 17 17:39:25.990417 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 17 17:39:25.990555 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 17 17:39:25.990774 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Mar 17 17:39:25.990891 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Mar 17 17:39:25.991003 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Mar 17 17:39:25.991013 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:39:25.991028 kernel: Initialise system trusted keyrings Mar 17 17:39:25.991036 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:39:25.991044 kernel: Key type asymmetric registered Mar 17 17:39:25.991052 kernel: Asymmetric key parser 'x509' registered Mar 17 17:39:25.991060 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 17 17:39:25.991068 kernel: io scheduler mq-deadline registered Mar 17 17:39:25.991076 kernel: io scheduler kyber registered Mar 17 17:39:25.991084 kernel: io scheduler bfq registered Mar 17 
17:39:25.991092 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 17:39:25.991100 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 17 17:39:25.991111 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 17 17:39:25.991121 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 17 17:39:25.991129 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:39:25.991137 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 17:39:25.991145 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 17:39:25.991156 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 17:39:25.991164 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 17:39:25.991297 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 17 17:39:25.991309 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 17:39:25.991422 kernel: rtc_cmos 00:04: registered as rtc0 Mar 17 17:39:25.991549 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:39:25 UTC (1742233165) Mar 17 17:39:25.991691 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Mar 17 17:39:25.991703 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 17 17:39:25.991716 kernel: efifb: probing for efifb Mar 17 17:39:25.991724 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Mar 17 17:39:25.991732 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Mar 17 17:39:25.991740 kernel: efifb: scrolling: redraw Mar 17 17:39:25.991748 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 17 17:39:25.991757 kernel: Console: switching to colour frame buffer device 160x50 Mar 17 17:39:25.991765 kernel: fb0: EFI VGA frame buffer device Mar 17 17:39:25.991773 kernel: pstore: Using crash dump compression: deflate Mar 17 17:39:25.991781 kernel: pstore: Registered efi_pstore as persistent store backend Mar 17 17:39:25.991791 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:39:25.991799 kernel: Segment Routing with IPv6 Mar 17 17:39:25.991807 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:39:25.991815 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:39:25.991823 kernel: Key type dns_resolver registered Mar 17 17:39:25.991831 kernel: IPI shorthand broadcast: enabled Mar 17 17:39:25.991840 kernel: sched_clock: Marking stable (1304004422, 171949836)->(1590982577, -115028319) Mar 17 17:39:25.991848 kernel: registered taskstats version 1 Mar 17 17:39:25.991856 kernel: Loading compiled-in X.509 certificates Mar 17 17:39:25.991867 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0' Mar 17 17:39:25.991875 kernel: Key type .fscrypt registered Mar 17 17:39:25.991883 kernel: Key type fscrypt-provisioning registered Mar 17 17:39:25.991891 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 17 17:39:25.991899 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:39:25.991907 kernel: ima: No architecture policies found Mar 17 17:39:25.991915 kernel: clk: Disabling unused clocks Mar 17 17:39:25.991923 kernel: Freeing unused kernel image (initmem) memory: 42992K Mar 17 17:39:25.991931 kernel: Write protecting the kernel read-only data: 36864k Mar 17 17:39:25.991941 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Mar 17 17:39:25.991949 kernel: Run /init as init process Mar 17 17:39:25.991958 kernel: with arguments: Mar 17 17:39:25.991965 kernel: /init Mar 17 17:39:25.991973 kernel: with environment: Mar 17 17:39:25.991981 kernel: HOME=/ Mar 17 17:39:25.991988 kernel: TERM=linux Mar 17 17:39:25.991996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:39:25.992009 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:39:25.992023 systemd[1]: Detected virtualization kvm. Mar 17 17:39:25.992032 systemd[1]: Detected architecture x86-64. Mar 17 17:39:25.992040 systemd[1]: Running in initrd. Mar 17 17:39:25.992048 systemd[1]: No hostname configured, using default hostname. Mar 17 17:39:25.992056 systemd[1]: Hostname set to . Mar 17 17:39:25.992064 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:39:25.992073 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:39:25.992084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:39:25.992092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:39:25.992101 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:39:25.992113 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:39:25.992121 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:39:25.992130 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:39:25.992140 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:39:25.992151 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:39:25.992159 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:39:25.992168 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:39:25.992176 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:39:25.992184 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:39:25.992193 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:39:25.992201 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:39:25.992209 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:39:25.992220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:39:25.992229 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:39:25.992237 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Mar 17 17:39:25.992246 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:39:25.992254 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:39:25.992263 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:39:25.992271 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:39:25.992280 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:39:25.992291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:39:25.992304 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:39:25.992315 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:39:25.992326 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:39:25.992337 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:39:25.992347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:39:25.992358 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:39:25.992369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:39:25.992379 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:39:25.992390 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:39:25.992399 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:39:25.992438 systemd-journald[195]: Collecting audit messages is disabled. Mar 17 17:39:25.992471 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:39:25.992480 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:39:25.992498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:39:25.992507 systemd-journald[195]: Journal started Mar 17 17:39:25.992529 systemd-journald[195]: Runtime Journal (/run/log/journal/7357744f14cb4ca1bc675194700462f6) is 6.0M, max 48.3M, 42.2M free. Mar 17 17:39:25.970559 systemd-modules-load[196]: Inserted module 'overlay' Mar 17 17:39:25.996674 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:39:26.001839 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:39:26.008994 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:39:26.009053 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:39:26.011569 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 17 17:39:26.013305 kernel: Bridge firewalling registered Mar 17 17:39:26.015082 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:39:26.031875 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:39:26.034159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:39:26.034810 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:39:26.037284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 17 17:39:26.045603 dracut-cmdline[224]: dracut-dracut-053 Mar 17 17:39:26.049058 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:39:26.061740 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:39:26.069803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:39:26.103229 systemd-resolved[254]: Positive Trust Anchors: Mar 17 17:39:26.103243 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:39:26.103274 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:39:26.105839 systemd-resolved[254]: Defaulting to hostname 'linux'. Mar 17 17:39:26.107138 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:39:26.114125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:39:26.162670 kernel: SCSI subsystem initialized Mar 17 17:39:26.173655 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:39:26.184663 kernel: iscsi: registered transport (tcp) Mar 17 17:39:26.205996 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:39:26.206034 kernel: QLogic iSCSI HBA Driver Mar 17 17:39:26.262159 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:39:26.273783 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:39:26.298682 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:39:26.298756 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:39:26.300475 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:39:26.347690 kernel: raid6: avx2x4 gen() 25980 MB/s Mar 17 17:39:26.364680 kernel: raid6: avx2x2 gen() 29884 MB/s Mar 17 17:39:26.381828 kernel: raid6: avx2x1 gen() 25753 MB/s Mar 17 17:39:26.381930 kernel: raid6: using algorithm avx2x2 gen() 29884 MB/s Mar 17 17:39:26.399791 kernel: raid6: .... xor() 19891 MB/s, rmw enabled Mar 17 17:39:26.399872 kernel: raid6: using avx2x2 recovery algorithm Mar 17 17:39:26.420675 kernel: xor: automatically using best checksumming function avx Mar 17 17:39:26.574666 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:39:26.587532 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:39:26.613798 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:39:26.626175 systemd-udevd[415]: Using default interface naming scheme 'v255'. Mar 17 17:39:26.630996 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 17 17:39:26.640848 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:39:26.655306 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Mar 17 17:39:26.688865 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:39:26.710817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:39:26.777564 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:39:26.785824 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:39:26.800340 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:39:26.804359 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:39:26.806350 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:39:26.810496 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:39:26.821859 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:39:26.826113 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 17 17:39:26.860233 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:39:26.860299 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 17 17:39:26.860738 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:39:26.860772 kernel: GPT:9289727 != 19775487 Mar 17 17:39:26.860802 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:39:26.860831 kernel: GPT:9289727 != 19775487 Mar 17 17:39:26.860856 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:39:26.860882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:39:26.836058 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:39:26.856273 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:39:26.856528 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:39:26.895402 kernel: libata version 3.00 loaded. Mar 17 17:39:26.858508 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:39:26.862370 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:39:26.862917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:39:26.889257 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:39:26.902145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:39:26.906307 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 17 17:39:26.906330 kernel: AES CTR mode by8 optimization enabled Mar 17 17:39:26.918043 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 17:39:27.002317 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 17:39:27.002342 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (475) Mar 17 17:39:27.002353 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (469) Mar 17 17:39:27.002364 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 17:39:27.002529 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 17:39:27.002687 kernel: scsi host0: ahci Mar 17 17:39:27.002869 kernel: scsi host1: ahci Mar 17 17:39:27.003021 kernel: scsi host2: ahci Mar 17 17:39:27.003170 kernel: scsi host3: ahci Mar 17 17:39:27.003313 kernel: scsi host4: ahci Mar 17 17:39:27.003456 kernel: scsi host5: ahci Mar 17 17:39:27.003663 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 17 17:39:27.003675 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 17 17:39:27.003685 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 17 17:39:27.003695 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 17 17:39:27.003709 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 17 17:39:27.003720 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 17 17:39:26.919302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:39:26.934840 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:39:26.966536 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 17 17:39:27.010054 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 17 17:39:27.018767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:39:27.036291 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 17 17:39:27.036543 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 17 17:39:27.061863 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:39:27.062406 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:39:27.278191 disk-uuid[574]: Primary Header is updated. Mar 17 17:39:27.278191 disk-uuid[574]: Secondary Entries is updated. Mar 17 17:39:27.278191 disk-uuid[574]: Secondary Header is updated. 
Mar 17 17:39:27.288691 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:39:27.293670 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:39:27.311693 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 17 17:39:27.311753 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 17:39:27.318995 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 17:39:27.319024 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 17 17:39:27.319038 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 17:39:27.320531 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 17 17:39:27.320560 kernel: ata3.00: applying bridge limits Mar 17 17:39:27.321597 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 17:39:27.322376 kernel: ata3.00: configured for UDMA/100 Mar 17 17:39:27.324652 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 17:39:27.368682 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 17 17:39:27.394706 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:39:27.394729 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 17 17:39:28.350667 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:39:28.351244 disk-uuid[575]: The operation has completed successfully. Mar 17 17:39:28.380426 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:39:28.380564 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:39:28.407861 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:39:28.414185 sh[590]: Success Mar 17 17:39:28.426670 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 17 17:39:28.460453 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:39:28.472121 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:39:28.476851 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 17 17:39:28.492568 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a Mar 17 17:39:28.492601 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:39:28.492612 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:39:28.492622 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:39:28.493316 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:39:28.498375 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:39:28.499490 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:39:28.507837 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:39:28.509150 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:39:28.519393 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:39:28.519455 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:39:28.519467 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:39:28.522673 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:39:28.532897 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 17 17:39:28.534712 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:39:28.614212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:39:28.635809 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:39:28.657002 systemd-networkd[768]: lo: Link UP Mar 17 17:39:28.657012 systemd-networkd[768]: lo: Gained carrier Mar 17 17:39:28.658743 systemd-networkd[768]: Enumeration completed Mar 17 17:39:28.658846 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:39:28.659195 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:39:28.659199 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:39:28.660265 systemd-networkd[768]: eth0: Link UP Mar 17 17:39:28.660270 systemd-networkd[768]: eth0: Gained carrier Mar 17 17:39:28.660303 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:39:28.661175 systemd[1]: Reached target network.target - Network. Mar 17 17:39:28.679690 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:39:28.744978 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:39:28.756779 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:39:28.969836 ignition[774]: Ignition 2.20.0 Mar 17 17:39:28.969851 ignition[774]: Stage: fetch-offline Mar 17 17:39:28.969958 ignition[774]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:39:28.972384 systemd-resolved[254]: Detected conflict on linux IN A 10.0.0.43 Mar 17 17:39:28.969969 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:39:28.972404 systemd-resolved[254]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Mar 17 17:39:28.970088 ignition[774]: parsed url from cmdline: "" Mar 17 17:39:28.970092 ignition[774]: no config URL provided Mar 17 17:39:28.970098 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:39:28.970108 ignition[774]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:39:28.970141 ignition[774]: op(1): [started] loading QEMU firmware config module Mar 17 17:39:28.970147 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 17:39:28.979616 ignition[774]: op(1): [finished] loading QEMU firmware config module Mar 17 17:39:28.999043 ignition[774]: parsing config with SHA512: 05043cc99d5af1f000b39b41ef13072792466873f41966e33a6ffa5ba13d891be01ae1c84939d49546d7ab01a4d1c28025dd1941f85cb702f831134db6f45e21 Mar 17 17:39:29.004195 unknown[774]: fetched base config from "system" Mar 17 17:39:29.004211 unknown[774]: fetched user config from "qemu" Mar 17 17:39:29.004907 ignition[774]: fetch-offline: fetch-offline passed Mar 17 17:39:29.005058 ignition[774]: Ignition finished successfully Mar 17 17:39:29.008502 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:39:29.010227 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 17:39:29.018901 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 17 17:39:29.037647 ignition[785]: Ignition 2.20.0 Mar 17 17:39:29.037660 ignition[785]: Stage: kargs Mar 17 17:39:29.037856 ignition[785]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:39:29.037869 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:39:29.038718 ignition[785]: kargs: kargs passed Mar 17 17:39:29.041780 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:39:29.038767 ignition[785]: Ignition finished successfully Mar 17 17:39:29.045043 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:39:29.061440 ignition[794]: Ignition 2.20.0 Mar 17 17:39:29.061454 ignition[794]: Stage: disks Mar 17 17:39:29.061616 ignition[794]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:39:29.061627 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:39:29.062439 ignition[794]: disks: disks passed Mar 17 17:39:29.065083 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:39:29.062488 ignition[794]: Ignition finished successfully Mar 17 17:39:29.066873 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:39:29.068720 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:39:29.070676 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:39:29.072716 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:39:29.074905 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:39:29.088026 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:39:29.130289 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:39:29.313155 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:39:29.327738 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:39:29.496664 kernel: EXT4-fs (vda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none. Mar 17 17:39:29.497031 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:39:29.498580 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:39:29.514794 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:39:29.562295 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:39:29.563996 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 17:39:29.572268 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) Mar 17 17:39:29.572290 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:39:29.572301 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:39:29.572318 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:39:29.564038 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:39:29.576285 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:39:29.564061 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:39:29.573165 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:39:29.577375 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:39:29.580321 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:39:29.628580 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:39:29.633739 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:39:29.637977 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:39:29.642959 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:39:29.740068 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:39:29.751726 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:39:29.753361 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:39:29.759405 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:39:29.762685 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:39:29.780382 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:39:30.065303 ignition[930]: INFO : Ignition 2.20.0 Mar 17 17:39:30.065303 ignition[930]: INFO : Stage: mount Mar 17 17:39:30.067269 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:39:30.067269 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:39:30.067269 ignition[930]: INFO : mount: mount passed Mar 17 17:39:30.067269 ignition[930]: INFO : Ignition finished successfully Mar 17 17:39:30.072836 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:39:30.080786 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:39:30.396850 systemd-networkd[768]: eth0: Gained IPv6LL Mar 17 17:39:30.506834 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:39:30.514621 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) Mar 17 17:39:30.514687 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:39:30.514702 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:39:30.516125 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:39:30.519148 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:39:30.520308 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:39:30.540306 ignition[957]: INFO : Ignition 2.20.0 Mar 17 17:39:30.540306 ignition[957]: INFO : Stage: files Mar 17 17:39:30.542210 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:39:30.542210 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:39:30.545201 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:39:30.547148 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:39:30.547148 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:39:30.551456 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:39:30.553023 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:39:30.554850 unknown[957]: wrote ssh authorized keys file for user: core Mar 17 17:39:30.556095 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:39:30.556095 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 17:39:30.556095 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 17:39:30.972126 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 17:39:31.256332 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 17:39:31.256332 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:39:31.261098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:39:31.261098 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:39:31.264989 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:39:31.266884 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:39:31.268820 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:39:31.270971 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:39:31.273376 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:39:31.275668 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:39:31.277958 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:39:31.280174 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:39:31.334735 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:39:31.337288 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:39:31.339425 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 17:39:31.841829 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 17 17:39:32.337370 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:39:32.337370 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 17 17:39:32.341391 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:39:32.343604 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:39:32.343604 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 17 17:39:32.343604 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 17 17:39:32.343604 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:39:32.343604 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:39:32.343604 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 17 17:39:32.343604 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 17:39:32.380236 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:39:32.386728 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:39:32.388541 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 17:39:32.388541 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 17 17:39:32.388541 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 17:39:32.388541 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:39:32.388541 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:39:32.388541 ignition[957]: INFO : files: files passed Mar 17 17:39:32.388541 ignition[957]: INFO : Ignition finished successfully Mar 17 17:39:32.401076 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:39:32.407816 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:39:32.409222 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:39:32.416226 systemd[1]: ignition-quench.service: Deactivated successfully. 
Mar 17 17:39:32.419611 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:39:32.422588 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Mar 17 17:39:32.425041 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:39:32.425041 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:39:32.428844 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:39:32.430836 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:39:32.433978 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:39:32.447828 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:39:32.472808 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:39:32.474224 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:39:32.477004 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:39:32.479061 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:39:32.481163 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:39:32.492867 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:39:32.507469 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:39:32.514794 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:39:32.550374 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:39:32.603310 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:39:32.605488 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:39:32.607661 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:39:32.607829 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:39:32.611427 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:39:32.612107 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:39:32.612456 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:39:32.612950 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:39:32.613277 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:39:32.613609 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:39:32.622962 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:39:32.623571 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:39:32.624062 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:39:32.624394 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:39:32.624701 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:39:32.624830 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:39:32.633549 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 17 17:39:32.634108 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:39:32.634402 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:39:32.634509 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:39:32.689476 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:39:32.689647 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:39:32.693363 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:39:32.693482 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:39:32.695604 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:39:32.696042 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:39:32.697699 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:39:32.698581 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:39:32.700907 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:39:32.702342 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:39:32.702438 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:39:32.704189 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:39:32.704281 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:39:32.710182 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:39:32.710297 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:39:32.712049 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:39:32.712153 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:39:32.725794 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:39:32.727760 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:39:32.728169 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:39:32.728283 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:39:32.730324 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:39:32.730442 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:39:32.739344 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:39:32.739482 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:39:32.743136 ignition[1013]: INFO : Ignition 2.20.0 Mar 17 17:39:32.743136 ignition[1013]: INFO : Stage: umount Mar 17 17:39:32.744723 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:39:32.744723 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:39:32.744723 ignition[1013]: INFO : umount: umount passed Mar 17 17:39:32.744723 ignition[1013]: INFO : Ignition finished successfully Mar 17 17:39:32.753935 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:39:32.754072 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:39:32.754967 systemd[1]: Stopped target network.target - Network. Mar 17 17:39:32.757089 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:39:32.757143 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Mar 17 17:39:32.757443 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:39:32.757485 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:39:32.757943 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:39:32.757987 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:39:32.758265 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:39:32.758307 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:39:32.758733 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:39:32.759250 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:39:32.767906 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:39:32.768039 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:39:32.770850 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:39:32.770914 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:39:32.807758 systemd-networkd[768]: eth0: DHCPv6 lease lost Mar 17 17:39:32.809628 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:39:32.809831 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:39:32.810602 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:39:32.810664 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:39:32.822779 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:39:32.823175 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:39:32.823229 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:39:32.823550 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:39:32.823595 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:39:32.824034 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:39:32.824078 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:39:32.824442 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:39:32.836229 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:39:32.836431 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:39:32.851670 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:39:32.910569 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:39:32.913275 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:39:32.914281 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:39:32.916501 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:39:32.916551 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:39:32.919765 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:39:32.919819 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:39:32.922758 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:39:32.923718 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:39:32.925758 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 17 17:39:32.926706 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:39:32.940761 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:39:32.942907 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:39:32.942963 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:39:32.946357 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:39:32.947443 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:39:32.950054 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:39:32.950105 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:39:32.953341 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:39:32.954312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:39:33.029953 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:39:33.031051 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:39:33.244651 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:39:33.273936 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:39:33.274070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:39:33.297581 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:39:33.299890 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:39:33.299979 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:39:33.309820 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:39:33.355873 systemd[1]: Switching root. Mar 17 17:39:33.393578 systemd-journald[195]: Journal stopped Mar 17 17:39:34.604988 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Mar 17 17:39:34.605056 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:39:34.605071 kernel: SELinux: policy capability open_perms=1 Mar 17 17:39:34.605082 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:39:34.605097 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:39:34.605109 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:39:34.605125 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:39:34.605136 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:39:34.605147 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:39:34.605158 kernel: audit: type=1403 audit(1742233173.809:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:39:34.605171 systemd[1]: Successfully loaded SELinux policy in 51.269ms. Mar 17 17:39:34.605190 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.982ms. Mar 17 17:39:34.605206 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:39:34.605221 systemd[1]: Detected virtualization kvm. Mar 17 17:39:34.605233 systemd[1]: Detected architecture x86-64. Mar 17 17:39:34.605244 systemd[1]: Detected first boot. 
Mar 17 17:39:34.605256 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:39:34.605268 zram_generator::config[1058]: No configuration found. Mar 17 17:39:34.605281 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:39:34.605301 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:39:34.605313 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:39:34.605328 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:39:34.605340 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:39:34.605353 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:39:34.605366 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:39:34.605378 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:39:34.605390 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:39:34.605402 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:39:34.605414 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:39:34.605428 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:39:34.605441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:39:34.605454 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:39:34.605466 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:39:34.605478 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:39:34.605490 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:39:34.605502 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:39:34.605514 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:39:34.605526 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:39:34.605540 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:39:34.605552 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:39:34.605564 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:39:34.605576 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:39:34.605588 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:39:34.605604 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:39:34.605617 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:39:34.605629 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:39:34.605774 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:39:34.605798 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:39:34.605811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:39:34.605822 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:39:34.605834 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 17 17:39:34.605846 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:39:34.605867 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:39:34.605878 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:39:34.605891 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:39:34.605906 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:39:34.605918 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:39:34.605930 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:39:34.605942 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:39:34.605954 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:39:34.605966 systemd[1]: Reached target machines.target - Containers. Mar 17 17:39:34.605985 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:39:34.606000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:39:34.606015 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:39:34.606027 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:39:34.606047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:39:34.606059 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:39:34.606071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:39:34.606083 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:39:34.606095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:39:34.606108 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:39:34.606119 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:39:34.606134 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:39:34.606146 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:39:34.606158 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:39:34.606174 kernel: fuse: init (API version 7.39) Mar 17 17:39:34.606186 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:39:34.606199 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:39:34.606211 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:39:34.606223 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:39:34.606234 kernel: loop: module loaded Mar 17 17:39:34.606248 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:39:34.606278 systemd-journald[1142]: Collecting audit messages is disabled. Mar 17 17:39:34.606316 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:39:34.606328 systemd[1]: Stopped verity-setup.service. 
Mar 17 17:39:34.606340 kernel: ACPI: bus type drm_connector registered Mar 17 17:39:34.606353 systemd-journald[1142]: Journal started Mar 17 17:39:34.606377 systemd-journald[1142]: Runtime Journal (/run/log/journal/7357744f14cb4ca1bc675194700462f6) is 6.0M, max 48.3M, 42.2M free. Mar 17 17:39:34.364654 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:39:34.606687 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:39:34.386801 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:39:34.387268 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:39:34.612579 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:39:34.613178 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:39:34.615697 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:39:34.617048 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:39:34.618175 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:39:34.619424 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:39:34.620828 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:39:34.622089 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:39:34.623594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:39:34.625179 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:39:34.625382 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:39:34.627037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:39:34.627231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:39:34.628778 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:39:34.628954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:39:34.630566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:39:34.630754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:39:34.632322 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:39:34.632517 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:39:34.634054 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:39:34.634250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:39:34.635814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:39:34.637337 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:39:34.639138 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:39:34.657914 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:39:34.666846 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:39:34.669737 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:39:34.671009 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Mar 17 17:39:34.671057 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:39:34.673579 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:39:34.676505 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:39:34.683072 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:39:34.684689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:39:34.690589 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:39:34.697254 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:39:34.699695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:39:34.708221 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:39:34.709614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:39:34.719347 systemd-journald[1142]: Time spent on flushing to /var/log/journal/7357744f14cb4ca1bc675194700462f6 is 14.344ms for 1039 entries. Mar 17 17:39:34.719347 systemd-journald[1142]: System Journal (/var/log/journal/7357744f14cb4ca1bc675194700462f6) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:39:34.744132 systemd-journald[1142]: Received client request to flush runtime journal. Mar 17 17:39:34.715791 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:39:34.718913 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:39:34.727489 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:39:34.730786 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:39:34.736411 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:39:34.737884 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:39:34.739415 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:39:34.741040 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:39:34.746105 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:39:34.756669 kernel: loop0: detected capacity change from 0 to 140992 Mar 17 17:39:34.756132 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:39:34.767763 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:39:34.771164 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Mar 17 17:39:34.771770 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Mar 17 17:39:34.771866 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:39:34.773741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:39:34.779130 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:39:34.787853 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Mar 17 17:39:34.794673 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:39:34.803630 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:39:34.807300 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:39:34.809197 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:39:34.818685 kernel: loop1: detected capacity change from 0 to 138184 Mar 17 17:39:34.820927 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:39:34.831077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:39:34.852480 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Mar 17 17:39:34.852504 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Mar 17 17:39:34.858722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:39:34.863676 kernel: loop2: detected capacity change from 0 to 210664 Mar 17 17:39:34.907690 kernel: loop3: detected capacity change from 0 to 140992 Mar 17 17:39:34.919842 kernel: loop4: detected capacity change from 0 to 138184 Mar 17 17:39:34.932658 kernel: loop5: detected capacity change from 0 to 210664 Mar 17 17:39:34.941072 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:39:34.941735 (sd-merge)[1200]: Merged extensions into '/usr'. Mar 17 17:39:34.945822 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:39:34.945840 systemd[1]: Reloading... Mar 17 17:39:34.999757 zram_generator::config[1226]: No configuration found. Mar 17 17:39:35.059672 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:39:35.133653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:39:35.183769 systemd[1]: Reloading finished in 237 ms. Mar 17 17:39:35.217720 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:39:35.219352 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:39:35.239016 systemd[1]: Starting ensure-sysext.service... Mar 17 17:39:35.241317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:39:35.266512 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:39:35.267074 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:39:35.268365 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:39:35.268853 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 17 17:39:35.268955 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 17 17:39:35.273398 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:39:35.273412 systemd-tmpfiles[1264]: Skipping /boot Mar 17 17:39:35.273687 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:39:35.273705 systemd[1]: Reloading... 
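The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is what triggers the unit reload that follows. A small sketch that enumerates the extension images a host offers in the standard sysext search directories (directory list per the systemd-sysext documentation; this is an illustration, not what sd-merge itself runs):

import pathlib

# Search paths consulted by systemd-sysext according to its documentation.
# The Ignition files stage earlier created /etc/extensions/kubernetes.raw as a
# symlink into /opt/extensions, which is how the kubernetes image shows up here.
SYSEXT_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def list_extension_images():
    """Yield (directory, name) for raw images and directory-style extensions."""
    for d in SYSEXT_DIRS:
        base = pathlib.Path(d)
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            if entry.name.endswith(".raw") or entry.is_dir():
                yield d, entry.name

if __name__ == "__main__":
    for directory, name in list_extension_images():
        print(f"{directory}: {name}")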
Mar 17 17:39:35.286520 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:39:35.286536 systemd-tmpfiles[1264]: Skipping /boot Mar 17 17:39:35.334702 zram_generator::config[1294]: No configuration found. Mar 17 17:39:35.445938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:39:35.496766 systemd[1]: Reloading finished in 222 ms. Mar 17 17:39:35.514778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:39:35.527204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:39:35.537493 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:39:35.540773 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:39:35.543546 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:39:35.547701 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:39:35.551293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:39:35.557927 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:39:35.561586 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:39:35.561808 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:39:35.563065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:39:35.566962 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:39:35.573902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:39:35.575829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:39:35.581921 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:39:35.583019 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:39:35.584321 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:39:35.584945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:39:35.587841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:39:35.588062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:39:35.590036 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:39:35.590260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:39:35.603390 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Mar 17 17:39:35.603886 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:39:35.607731 augenrules[1361]: No rules Mar 17 17:39:35.608556 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:39:35.608839 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:39:35.618373 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 17 17:39:35.623314 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:39:35.631892 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:39:35.633018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:39:35.637807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:39:35.642796 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:39:35.651927 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:39:35.655894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:39:35.657813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:39:35.659962 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:39:35.661705 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:39:35.666068 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:39:35.667857 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:39:35.669694 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:39:35.671586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:39:35.671894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:39:35.674221 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:39:35.674453 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:39:35.677302 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:39:35.677500 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:39:35.679630 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:39:35.683288 augenrules[1371]: /sbin/augenrules: No change Mar 17 17:39:35.692920 systemd[1]: Finished ensure-sysext.service. Mar 17 17:39:35.698778 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1381) Mar 17 17:39:35.695662 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:39:35.695852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:39:36.032031 augenrules[1425]: No rules Mar 17 17:39:36.030803 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:39:36.031064 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:39:36.040190 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 17:39:36.061458 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:39:36.063390 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:39:36.063502 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:39:36.077126 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Mar 17 17:39:36.078545 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:39:36.082587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:39:36.086293 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:39:36.111100 systemd-resolved[1333]: Positive Trust Anchors: Mar 17 17:39:36.111124 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:39:36.111157 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:39:36.123670 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 17:39:36.123869 systemd-resolved[1333]: Defaulting to hostname 'linux'. Mar 17 17:39:36.129239 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:39:36.129765 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:39:36.132550 kernel: ACPI: button: Power Button [PWRF] Mar 17 17:39:36.136484 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:39:36.138970 systemd-networkd[1437]: lo: Link UP Mar 17 17:39:36.139227 systemd-networkd[1437]: lo: Gained carrier Mar 17 17:39:36.141205 systemd-networkd[1437]: Enumeration completed Mar 17 17:39:36.141324 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:39:36.141624 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:39:36.141629 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:39:36.142682 systemd[1]: Reached target network.target - Network. Mar 17 17:39:36.144023 systemd-networkd[1437]: eth0: Link UP Mar 17 17:39:36.144028 systemd-networkd[1437]: eth0: Gained carrier Mar 17 17:39:36.144041 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:39:36.149861 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 17 17:39:36.157729 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:39:36.165938 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 17 17:39:36.177069 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 17:39:36.177240 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 17:39:36.177653 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 17:39:36.181811 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 17:39:36.190235 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:39:36.191980 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:39:36.193816 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:39:36.193915 systemd-timesyncd[1438]: Initial clock synchronization to Mon 2025-03-17 17:39:36.205554 UTC. Mar 17 17:39:36.253531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:39:36.261325 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:39:36.261601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:39:36.264781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:39:36.312675 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:39:36.326427 kernel: kvm_amd: TSC scaling supported Mar 17 17:39:36.326512 kernel: kvm_amd: Nested Virtualization enabled Mar 17 17:39:36.326525 kernel: kvm_amd: Nested Paging enabled Mar 17 17:39:36.327978 kernel: kvm_amd: LBR virtualization supported Mar 17 17:39:36.328033 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 17 17:39:36.328856 kernel: kvm_amd: Virtual GIF supported Mar 17 17:39:36.376665 kernel: EDAC MC: Ver: 3.0.0 Mar 17 17:39:36.384345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:39:36.405152 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:39:36.416839 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:39:36.426467 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:39:36.460166 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:39:36.461967 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:39:36.463330 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:39:36.464621 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:39:36.465967 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:39:36.467984 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:39:36.469253 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:39:36.470819 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:39:36.472299 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:39:36.472332 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:39:36.473368 systemd[1]: Reached target timers.target - Timer Units. 
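systemd-timesyncd above reports contacting the DHCP-provided time server 10.0.0.1 on port 123 and stepping the clock. As a rough illustration of a single client query against that port, a bare SNTP request in Python (timesyncd itself speaks full NTP; only the server address comes from the log):

import socket
import struct
from datetime import datetime, timezone

NTP_UNIX_OFFSET = 2_208_988_800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_query(server: str = "10.0.0.1", port: int = 123, timeout: float = 2.0) -> datetime:
    """Send one SNTP client request and return the server's transmit time as UTC."""
    request = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, port))
        reply, _ = sock.recvfrom(48)
    seconds = struct.unpack("!I", reply[40:44])[0]  # transmit timestamp, integer seconds
    return datetime.fromtimestamp(seconds - NTP_UNIX_OFFSET, tz=timezone.utc)

if __name__ == "__main__":
    print(sntp_query())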
Mar 17 17:39:36.475368 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:39:36.478771 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:39:36.491660 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:39:36.494303 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:39:36.496027 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:39:36.497211 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:39:36.498329 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:39:36.499354 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:39:36.499393 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:39:36.500616 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:39:36.502852 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:39:36.507746 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:39:36.510764 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:39:36.512780 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:39:36.514505 jq[1466]: false Mar 17 17:39:36.514740 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:39:36.513992 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:39:36.516089 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:39:36.520823 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:39:36.523824 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:39:36.531856 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:39:36.533419 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:39:36.534437 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:39:36.535599 dbus-daemon[1465]: [system] SELinux support is enabled Mar 17 17:39:36.537819 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:39:36.541867 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
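Several units above are skipped by condition checks rather than failures (for example tcsd.service with ConditionPathExists=/dev/tpm0). A rough Python equivalent of that single check, just to illustrate the semantics; systemd itself evaluates many more condition types:

import os

def condition_path_exists(path: str) -> bool:
    # Mirrors systemd's ConditionPathExists=: a leading "!" negates the test.
    if path.startswith("!"):
        return not os.path.exists(path[1:])
    return os.path.exists(path)

# /dev/tpm0 is absent on this VM, so tcsd.service is skipped, as logged above.
print(condition_path_exists("/dev/tpm0"))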
Mar 17 17:39:36.544456 extend-filesystems[1467]: Found loop3 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found loop4 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found loop5 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found sr0 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found vda Mar 17 17:39:36.551745 extend-filesystems[1467]: Found vda1 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found vda2 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found vda3 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found usr Mar 17 17:39:36.551745 extend-filesystems[1467]: Found vda4 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found vda6 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found vda7 Mar 17 17:39:36.551745 extend-filesystems[1467]: Found vda9 Mar 17 17:39:36.551745 extend-filesystems[1467]: Checking size of /dev/vda9 Mar 17 17:39:36.545678 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:39:36.571617 extend-filesystems[1467]: Resized partition /dev/vda9 Mar 17 17:39:36.554729 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:39:36.568444 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:39:36.568711 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:39:36.570317 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:39:36.570520 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:39:36.576579 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:39:36.585005 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:39:36.585037 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1398) Mar 17 17:39:36.585569 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:39:36.585959 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:39:36.588301 jq[1479]: true Mar 17 17:39:36.601003 update_engine[1475]: I20250317 17:39:36.600813 1475 main.cc:92] Flatcar Update Engine starting Mar 17 17:39:36.605849 update_engine[1475]: I20250317 17:39:36.604857 1475 update_check_scheduler.cc:74] Next update check in 2m53s Mar 17 17:39:36.618035 systemd-logind[1473]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 17:39:36.618075 systemd-logind[1473]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:39:36.618519 systemd-logind[1473]: New seat seat0. Mar 17 17:39:36.622511 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:39:36.629078 jq[1491]: true Mar 17 17:39:36.633521 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:39:36.639349 dbus-daemon[1465]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 17:39:36.642992 tar[1490]: linux-amd64/helm Mar 17 17:39:36.650314 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:39:36.658545 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:39:36.668038 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
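The EXT4 resize kicked off above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB. A quick back-of-envelope conversion of those block counts into sizes:

# Block counts taken from the "EXT4-fs (vda9): resizing filesystem" message above.
BLOCK_SIZE = 4096  # 4 KiB blocks
old_blocks, new_blocks = 553_472, 1_864_699

def to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~7.11 GiB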
Mar 17 17:39:36.668201 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:39:36.669967 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:39:36.670088 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:39:36.680938 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:39:36.695066 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:39:36.696740 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:39:36.696740 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:39:36.696740 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:39:36.729804 extend-filesystems[1467]: Resized filesystem in /dev/vda9 Mar 17 17:39:36.701223 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:39:36.701546 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:39:36.734492 bash[1518]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:39:36.737715 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:39:36.742103 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:39:36.746937 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:39:36.752185 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:39:36.764140 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:39:36.772480 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:39:36.772741 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:39:36.777226 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:39:36.810157 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:39:36.836422 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:39:36.841765 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:39:36.843961 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:39:37.152791 containerd[1494]: time="2025-03-17T17:39:37.152537724Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:39:37.180718 containerd[1494]: time="2025-03-17T17:39:37.180664743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:39:37.183051 containerd[1494]: time="2025-03-17T17:39:37.182995591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:39:37.183094 containerd[1494]: time="2025-03-17T17:39:37.183065554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:39:37.183094 containerd[1494]: time="2025-03-17T17:39:37.183082047Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Mar 17 17:39:37.183312 containerd[1494]: time="2025-03-17T17:39:37.183281240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:39:37.183312 containerd[1494]: time="2025-03-17T17:39:37.183300550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:39:37.183584 containerd[1494]: time="2025-03-17T17:39:37.183553844Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:39:37.183632 containerd[1494]: time="2025-03-17T17:39:37.183593779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:39:37.184250 containerd[1494]: time="2025-03-17T17:39:37.183942032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:39:37.184250 containerd[1494]: time="2025-03-17T17:39:37.183962014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:39:37.184250 containerd[1494]: time="2025-03-17T17:39:37.183979309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:39:37.184250 containerd[1494]: time="2025-03-17T17:39:37.183991762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:39:37.184250 containerd[1494]: time="2025-03-17T17:39:37.184100246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:39:37.184409 containerd[1494]: time="2025-03-17T17:39:37.184384912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:39:37.184578 containerd[1494]: time="2025-03-17T17:39:37.184554577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:39:37.184578 containerd[1494]: time="2025-03-17T17:39:37.184575130Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:39:37.184725 containerd[1494]: time="2025-03-17T17:39:37.184705201Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:39:37.185031 containerd[1494]: time="2025-03-17T17:39:37.184772227Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:39:37.190440 containerd[1494]: time="2025-03-17T17:39:37.190407174Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:39:37.190489 containerd[1494]: time="2025-03-17T17:39:37.190464234Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:39:37.190509 containerd[1494]: time="2025-03-17T17:39:37.190486281Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Mar 17 17:39:37.190527 containerd[1494]: time="2025-03-17T17:39:37.190505822Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:39:37.190556 containerd[1494]: time="2025-03-17T17:39:37.190523589Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:39:37.190701 containerd[1494]: time="2025-03-17T17:39:37.190679678Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:39:37.190963 containerd[1494]: time="2025-03-17T17:39:37.190938235Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:39:37.191080 containerd[1494]: time="2025-03-17T17:39:37.191063263Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:39:37.191099 containerd[1494]: time="2025-03-17T17:39:37.191085521Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:39:37.191117 containerd[1494]: time="2025-03-17T17:39:37.191102105Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:39:37.191146 containerd[1494]: time="2025-03-17T17:39:37.191119259Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:39:37.191146 containerd[1494]: time="2025-03-17T17:39:37.191134951Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:39:37.191182 containerd[1494]: time="2025-03-17T17:39:37.191149760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:39:37.191182 containerd[1494]: time="2025-03-17T17:39:37.191165922Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:39:37.191217 containerd[1494]: time="2025-03-17T17:39:37.191187740Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:39:37.191217 containerd[1494]: time="2025-03-17T17:39:37.191204032Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:39:37.191253 containerd[1494]: time="2025-03-17T17:39:37.191218449Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:39:37.191253 containerd[1494]: time="2025-03-17T17:39:37.191231885Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:39:37.191293 containerd[1494]: time="2025-03-17T17:39:37.191254163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191293 containerd[1494]: time="2025-03-17T17:39:37.191269283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191293 containerd[1494]: time="2025-03-17T17:39:37.191282187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191294850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191306561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191318752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191329591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191341442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191353614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191380614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191395574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191411375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191432591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191455711Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191480076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191498063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191540 containerd[1494]: time="2025-03-17T17:39:37.191513924Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:39:37.191787 containerd[1494]: time="2025-03-17T17:39:37.191575876Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:39:37.191787 containerd[1494]: time="2025-03-17T17:39:37.191597112Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:39:37.191787 containerd[1494]: time="2025-03-17T17:39:37.191610537Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:39:37.191787 containerd[1494]: time="2025-03-17T17:39:37.191624754Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:39:37.191787 containerd[1494]: time="2025-03-17T17:39:37.191651294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.191787 containerd[1494]: time="2025-03-17T17:39:37.191667586Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 17 17:39:37.191787 containerd[1494]: time="2025-03-17T17:39:37.191681503Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:39:37.191787 containerd[1494]: time="2025-03-17T17:39:37.191693745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:39:37.192138 containerd[1494]: time="2025-03-17T17:39:37.192075356Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:39:37.192349 containerd[1494]: time="2025-03-17T17:39:37.192141128Z" level=info msg="Connect containerd service" Mar 17 17:39:37.192349 containerd[1494]: time="2025-03-17T17:39:37.192176390Z" level=info msg="using legacy CRI server" Mar 17 17:39:37.192349 containerd[1494]: time="2025-03-17T17:39:37.192184952Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:39:37.192349 containerd[1494]: time="2025-03-17T17:39:37.192297768Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:39:37.193055 
containerd[1494]: time="2025-03-17T17:39:37.193008852Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:39:37.193235 containerd[1494]: time="2025-03-17T17:39:37.193189405Z" level=info msg="Start subscribing containerd event" Mar 17 17:39:37.193261 containerd[1494]: time="2025-03-17T17:39:37.193246394Z" level=info msg="Start recovering state" Mar 17 17:39:37.193353 containerd[1494]: time="2025-03-17T17:39:37.193325592Z" level=info msg="Start event monitor" Mar 17 17:39:37.193409 containerd[1494]: time="2025-03-17T17:39:37.193381999Z" level=info msg="Start snapshots syncer" Mar 17 17:39:37.193433 containerd[1494]: time="2025-03-17T17:39:37.193410986Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:39:37.193433 containerd[1494]: time="2025-03-17T17:39:37.193421704Z" level=info msg="Start streaming server" Mar 17 17:39:37.193580 containerd[1494]: time="2025-03-17T17:39:37.193482843Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:39:37.193580 containerd[1494]: time="2025-03-17T17:39:37.193554582Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:39:37.193730 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:39:37.194571 containerd[1494]: time="2025-03-17T17:39:37.194546592Z" level=info msg="containerd successfully booted in 0.072517s" Mar 17 17:39:37.300512 tar[1490]: linux-amd64/LICENSE Mar 17 17:39:37.301911 tar[1490]: linux-amd64/README.md Mar 17 17:39:37.317941 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:39:37.436972 systemd-networkd[1437]: eth0: Gained IPv6LL Mar 17 17:39:37.440959 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:39:37.442843 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:39:37.460873 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:39:37.463522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:39:37.465913 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:39:37.486161 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:39:37.486429 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:39:37.488266 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:39:37.491078 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:39:38.596106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:38.598173 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:39:38.599716 systemd[1]: Startup finished in 1.451s (kernel) + 8.050s (initrd) + 4.839s (userspace) = 14.341s. Mar 17 17:39:38.602329 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:39:39.269765 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:39:39.274876 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:34850.service - OpenSSH per-connection server daemon (10.0.0.1:34850). 
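containerd comes up above with a CNI error because /etc/cni/net.d (the NetworkPluginConfDir from the config dump) is still empty at this point. An illustrative sketch of that lookup, assuming the same default directory; the real check is done by libcni inside containerd:

import glob, os

# containerd's CRI plugin loads CNI configs (*.conf, *.conflist, *.json)
# from this directory; none exist yet, hence the error above.
cni_conf_dir = "/etc/cni/net.d"
configs = sorted(
    p for p in glob.glob(os.path.join(cni_conf_dir, "*"))
    if p.endswith((".conf", ".conflist", ".json"))
)
if not configs:
    print(f"no network config found in {cni_conf_dir}: CNI not initialized yet")
else:
    print("CNI configs:", configs)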
Mar 17 17:39:39.333811 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 34850 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:39:39.337929 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:39.348727 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:39:39.354896 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:39:39.357271 systemd-logind[1473]: New session 1 of user core. Mar 17 17:39:39.385395 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:39:39.394996 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:39:39.397901 kubelet[1578]: E0317 17:39:39.397802 1578 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:39:39.399796 (systemd)[1596]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:39:39.402853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:39:39.403122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:39:39.403539 systemd[1]: kubelet.service: Consumed 1.777s CPU time. Mar 17 17:39:39.525745 systemd[1596]: Queued start job for default target default.target. Mar 17 17:39:39.536113 systemd[1596]: Created slice app.slice - User Application Slice. Mar 17 17:39:39.536143 systemd[1596]: Reached target paths.target - Paths. Mar 17 17:39:39.536157 systemd[1596]: Reached target timers.target - Timers. Mar 17 17:39:39.537974 systemd[1596]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:39:39.552126 systemd[1596]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:39:39.552263 systemd[1596]: Reached target sockets.target - Sockets. Mar 17 17:39:39.552277 systemd[1596]: Reached target basic.target - Basic System. Mar 17 17:39:39.552316 systemd[1596]: Reached target default.target - Main User Target. Mar 17 17:39:39.552352 systemd[1596]: Startup finished in 143ms. Mar 17 17:39:39.553204 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:39:39.554883 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:39:39.617568 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:34864.service - OpenSSH per-connection server daemon (10.0.0.1:34864). Mar 17 17:39:39.669935 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 34864 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:39:39.671345 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:39.675129 systemd-logind[1473]: New session 2 of user core. Mar 17 17:39:39.690774 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:39:39.744540 sshd[1610]: Connection closed by 10.0.0.1 port 34864 Mar 17 17:39:39.744904 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:39.758190 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:34864.service: Deactivated successfully. Mar 17 17:39:39.759859 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:39:39.761270 systemd-logind[1473]: Session 2 logged out. Waiting for processes to exit. 
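The kubelet exit above is caused by a missing /var/lib/kubelet/config.yaml, which is normally written later by kubeadm during init/join; until then systemd keeps restarting the unit. A trivial pre-check along the same lines (only the path from the error message is assumed, nothing about the file's contents):

from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
if not cfg.is_file():
    # Same condition the kubelet reports above: open ...: no such file or directory.
    print(f"{cfg}: missing -- kubelet will keep exiting until kubeadm writes it")
else:
    print(f"found {cfg} ({cfg.stat().st_size} bytes)")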
Mar 17 17:39:39.768933 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:34880.service - OpenSSH per-connection server daemon (10.0.0.1:34880). Mar 17 17:39:39.769946 systemd-logind[1473]: Removed session 2. Mar 17 17:39:39.809510 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 34880 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:39:39.811197 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:39.815304 systemd-logind[1473]: New session 3 of user core. Mar 17 17:39:39.825911 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:39:39.878248 sshd[1617]: Connection closed by 10.0.0.1 port 34880 Mar 17 17:39:39.878718 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:39.893987 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:34880.service: Deactivated successfully. Mar 17 17:39:39.895866 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:39:39.897428 systemd-logind[1473]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:39:39.898760 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:34886.service - OpenSSH per-connection server daemon (10.0.0.1:34886). Mar 17 17:39:39.899529 systemd-logind[1473]: Removed session 3. Mar 17 17:39:39.942601 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 34886 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:39:39.944379 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:39.950062 systemd-logind[1473]: New session 4 of user core. Mar 17 17:39:39.958948 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:39:40.015901 sshd[1624]: Connection closed by 10.0.0.1 port 34886 Mar 17 17:39:40.016444 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Mar 17 17:39:40.027561 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:34886.service: Deactivated successfully. Mar 17 17:39:40.029522 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:39:40.031128 systemd-logind[1473]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:39:40.032944 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:34896.service - OpenSSH per-connection server daemon (10.0.0.1:34896). Mar 17 17:39:40.033926 systemd-logind[1473]: Removed session 4. Mar 17 17:39:40.089737 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 34896 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:39:40.091586 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:39:40.096949 systemd-logind[1473]: New session 5 of user core. Mar 17 17:39:40.105867 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:39:40.168669 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:39:40.169018 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:39:40.736872 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:39:40.737011 (dockerd)[1652]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:39:41.227115 dockerd[1652]: time="2025-03-17T17:39:41.226929535Z" level=info msg="Starting up" Mar 17 17:39:42.195629 dockerd[1652]: time="2025-03-17T17:39:42.195556469Z" level=info msg="Loading containers: start." 
Mar 17 17:39:42.387675 kernel: Initializing XFRM netlink socket Mar 17 17:39:42.467935 systemd-networkd[1437]: docker0: Link UP Mar 17 17:39:42.516014 dockerd[1652]: time="2025-03-17T17:39:42.515963808Z" level=info msg="Loading containers: done." Mar 17 17:39:42.542919 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck397775247-merged.mount: Deactivated successfully. Mar 17 17:39:42.586146 dockerd[1652]: time="2025-03-17T17:39:42.586068866Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:39:42.586378 dockerd[1652]: time="2025-03-17T17:39:42.586220853Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:39:42.586437 dockerd[1652]: time="2025-03-17T17:39:42.586408877Z" level=info msg="Daemon has completed initialization" Mar 17 17:39:42.772936 dockerd[1652]: time="2025-03-17T17:39:42.772763321Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:39:42.772999 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:39:44.001304 containerd[1494]: time="2025-03-17T17:39:44.001258533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:39:44.787205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110408208.mount: Deactivated successfully. Mar 17 17:39:46.728024 containerd[1494]: time="2025-03-17T17:39:46.727952885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:46.728926 containerd[1494]: time="2025-03-17T17:39:46.728844238Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 17 17:39:46.730280 containerd[1494]: time="2025-03-17T17:39:46.730241492Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:46.733341 containerd[1494]: time="2025-03-17T17:39:46.733302280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:46.734343 containerd[1494]: time="2025-03-17T17:39:46.734295436Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.732995486s" Mar 17 17:39:46.734405 containerd[1494]: time="2025-03-17T17:39:46.734346313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 17:39:46.759699 containerd[1494]: time="2025-03-17T17:39:46.759398220Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:39:49.105442 containerd[1494]: time="2025-03-17T17:39:49.105364911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 
17:39:49.106233 containerd[1494]: time="2025-03-17T17:39:49.106136132Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 17 17:39:49.107408 containerd[1494]: time="2025-03-17T17:39:49.107370842Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:49.110385 containerd[1494]: time="2025-03-17T17:39:49.110347408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:49.111401 containerd[1494]: time="2025-03-17T17:39:49.111356968Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 2.351912053s" Mar 17 17:39:49.111401 containerd[1494]: time="2025-03-17T17:39:49.111395976Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 17:39:49.133679 containerd[1494]: time="2025-03-17T17:39:49.133619677Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:39:49.653302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:39:49.665821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:39:49.835853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:39:49.845068 (kubelet)[1935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:39:50.196891 kubelet[1935]: E0317 17:39:50.196805 1935 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:39:50.204495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:39:50.204715 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
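The kube-apiserver pull above reports 32671373 bytes in 2.732995486s, which works out to roughly 11.4 MiB/s (treating the reported image size as the amount transferred, which ignores registry round-trips and decompression):

# Figures copied from the "Pulled image registry.k8s.io/kube-apiserver:v1.30.11" message above.
size_bytes = 32_671_373
seconds = 2.732995486
print(f"~{size_bytes / seconds / 2**20:.1f} MiB/s")  # ~11.4 MiB/s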
Mar 17 17:39:50.484984 containerd[1494]: time="2025-03-17T17:39:50.484868501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:50.487397 containerd[1494]: time="2025-03-17T17:39:50.485956452Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 17 17:39:50.488905 containerd[1494]: time="2025-03-17T17:39:50.488871970Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:50.491529 containerd[1494]: time="2025-03-17T17:39:50.491503745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:50.492480 containerd[1494]: time="2025-03-17T17:39:50.492451017Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.358779796s" Mar 17 17:39:50.492528 containerd[1494]: time="2025-03-17T17:39:50.492481614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 17:39:50.519411 containerd[1494]: time="2025-03-17T17:39:50.519347073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:39:51.635949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825846116.mount: Deactivated successfully. 
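The mount unit names above (for example var-lib-docker-overlay2-opaque\x2dbug\x2dcheck397775247-merged.mount) use systemd's path escaping: "/" becomes "-" and other special characters are hex-escaped. A rough sketch of that rule follows; the authoritative tool is systemd-escape --path, and edge cases (leading dots, empty paths) are only approximated here:

def systemd_escape_path(path: str) -> str:
    path = path.strip("/")
    out = []
    for i, ch in enumerate(path):
        if ch == "/":
            out.append("-")                      # path separators become dashes
        elif ch.isalnum() or ch == "_" or (ch == "." and i != 0):
            out.append(ch)                       # kept as-is
        else:
            out.append("\\x%02x" % ord(ch))      # e.g. "-" -> \x2d
    return "".join(out)

print(systemd_escape_path("/var/lib/docker/overlay2/opaque-bug-check397775247/merged"))
# -> var-lib-docker-overlay2-opaque\x2dbug\x2dcheck397775247-merged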
Mar 17 17:39:52.756754 containerd[1494]: time="2025-03-17T17:39:52.756678358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:52.757802 containerd[1494]: time="2025-03-17T17:39:52.757719366Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372" Mar 17 17:39:52.759428 containerd[1494]: time="2025-03-17T17:39:52.759391055Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:52.763048 containerd[1494]: time="2025-03-17T17:39:52.763013785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:52.763760 containerd[1494]: time="2025-03-17T17:39:52.763703465Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 2.244320632s" Mar 17 17:39:52.763760 containerd[1494]: time="2025-03-17T17:39:52.763754304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 17:39:52.788054 containerd[1494]: time="2025-03-17T17:39:52.788013969Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:39:53.509464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073555406.mount: Deactivated successfully. 
Mar 17 17:39:54.682087 containerd[1494]: time="2025-03-17T17:39:54.682007623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:54.726446 containerd[1494]: time="2025-03-17T17:39:54.726358177Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 17 17:39:54.730887 containerd[1494]: time="2025-03-17T17:39:54.730850572Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:54.735163 containerd[1494]: time="2025-03-17T17:39:54.735120725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:54.736205 containerd[1494]: time="2025-03-17T17:39:54.736174925Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.948120116s" Mar 17 17:39:54.736246 containerd[1494]: time="2025-03-17T17:39:54.736203696Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 17:39:54.759579 containerd[1494]: time="2025-03-17T17:39:54.759528464Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:39:55.636487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976911443.mount: Deactivated successfully. 
Mar 17 17:39:55.651128 containerd[1494]: time="2025-03-17T17:39:55.651089099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:55.654383 containerd[1494]: time="2025-03-17T17:39:55.654293477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Mar 17 17:39:55.658264 containerd[1494]: time="2025-03-17T17:39:55.658228858Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:55.661825 containerd[1494]: time="2025-03-17T17:39:55.661779467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:55.662509 containerd[1494]: time="2025-03-17T17:39:55.662465865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 902.898839ms" Mar 17 17:39:55.662562 containerd[1494]: time="2025-03-17T17:39:55.662504076Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 17:39:55.683918 containerd[1494]: time="2025-03-17T17:39:55.683870952Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:39:56.308495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935668633.mount: Deactivated successfully. Mar 17 17:39:59.771554 containerd[1494]: time="2025-03-17T17:39:59.771463832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:59.806515 containerd[1494]: time="2025-03-17T17:39:59.806439326Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Mar 17 17:39:59.840483 containerd[1494]: time="2025-03-17T17:39:59.840411068Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:59.903157 containerd[1494]: time="2025-03-17T17:39:59.903110819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:39:59.904490 containerd[1494]: time="2025-03-17T17:39:59.904453871Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.220547715s" Mar 17 17:39:59.904546 containerd[1494]: time="2025-03-17T17:39:59.904493943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 17:40:00.454967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 17 17:40:00.468902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:00.629568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:00.634965 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:40:00.684370 kubelet[2095]: E0317 17:40:00.684295 2095 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:40:00.688763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:40:00.688969 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:40:03.163290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:03.175907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:03.194386 systemd[1]: Reloading requested from client PID 2173 ('systemctl') (unit session-5.scope)... Mar 17 17:40:03.194403 systemd[1]: Reloading... Mar 17 17:40:03.282673 zram_generator::config[2212]: No configuration found. Mar 17 17:40:03.721348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:40:03.801462 systemd[1]: Reloading finished in 606 ms. Mar 17 17:40:03.859890 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:03.862949 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:40:03.863267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:03.874967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:04.032315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:04.038461 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:40:04.085435 kubelet[2263]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:40:04.085435 kubelet[2263]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:40:04.085435 kubelet[2263]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
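The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config. Below is a hypothetical minimal KubeletConfiguration fragment covering those two flags; the field names follow the kubelet config-file documentation linked in the warning and the values are paths already visible in this log, so treat it as an illustration rather than this host's actual configuration:

import textwrap
from pathlib import Path

# Hypothetical config-file equivalent of the two deprecated flags above
# (field names containerRuntimeEndpoint / volumePluginDir are an assumption
# based on the linked kubelet docs, not a transcription of this host's config).
KUBELET_CONFIG = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    """)

Path("/tmp/kubelet-config-example.yaml").write_text(KUBELET_CONFIG)
print(KUBELET_CONFIG, end="")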
Mar 17 17:40:04.085943 kubelet[2263]: I0317 17:40:04.085472 2263 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:40:04.420247 kubelet[2263]: I0317 17:40:04.420140 2263 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:40:04.420247 kubelet[2263]: I0317 17:40:04.420170 2263 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:40:04.420386 kubelet[2263]: I0317 17:40:04.420369 2263 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:40:04.434447 kubelet[2263]: I0317 17:40:04.434400 2263 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:40:04.434997 kubelet[2263]: E0317 17:40:04.434977 2263 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:04.516675 kubelet[2263]: I0317 17:40:04.516608 2263 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:40:04.518497 kubelet[2263]: I0317 17:40:04.518453 2263 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:40:04.518741 kubelet[2263]: I0317 17:40:04.518488 2263 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:40:04.520784 kubelet[2263]: I0317 17:40:04.520757 2263 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:40:04.520784 kubelet[2263]: I0317 17:40:04.520781 2263 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:40:04.520985 kubelet[2263]: I0317 17:40:04.520961 2263 state_mem.go:36] "Initialized new in-memory state store" Mar 17 
17:40:04.521821 kubelet[2263]: I0317 17:40:04.521798 2263 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:40:04.521878 kubelet[2263]: I0317 17:40:04.521851 2263 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:40:04.521908 kubelet[2263]: I0317 17:40:04.521887 2263 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:40:04.521942 kubelet[2263]: I0317 17:40:04.521920 2263 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:40:04.522683 kubelet[2263]: W0317 17:40:04.522582 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:04.522683 kubelet[2263]: E0317 17:40:04.522656 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:04.523086 kubelet[2263]: W0317 17:40:04.523033 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:04.523086 kubelet[2263]: E0317 17:40:04.523084 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:04.526598 kubelet[2263]: I0317 17:40:04.526579 2263 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:40:04.527781 kubelet[2263]: I0317 17:40:04.527760 2263 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:40:04.527847 kubelet[2263]: W0317 17:40:04.527833 2263 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:40:04.528649 kubelet[2263]: I0317 17:40:04.528489 2263 server.go:1264] "Started kubelet" Mar 17 17:40:04.528649 kubelet[2263]: I0317 17:40:04.528569 2263 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:40:04.529378 kubelet[2263]: I0317 17:40:04.528784 2263 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:40:04.529378 kubelet[2263]: I0317 17:40:04.529185 2263 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:40:04.530623 kubelet[2263]: I0317 17:40:04.529976 2263 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:40:04.531817 kubelet[2263]: E0317 17:40:04.531487 2263 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:40:04.531918 kubelet[2263]: I0317 17:40:04.531900 2263 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:40:04.532058 kubelet[2263]: I0317 17:40:04.532033 2263 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:40:04.534661 kubelet[2263]: W0317 17:40:04.532620 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:04.534661 kubelet[2263]: E0317 17:40:04.532687 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:04.534661 kubelet[2263]: E0317 17:40:04.532806 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:40:04.534661 kubelet[2263]: I0317 17:40:04.533868 2263 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:40:04.534661 kubelet[2263]: I0317 17:40:04.533933 2263 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:40:04.535141 kubelet[2263]: E0317 17:40:04.535029 2263 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da7de67cf3bca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:40:04.528462794 +0000 UTC m=+0.485750909,LastTimestamp:2025-03-17 17:40:04.528462794 +0000 UTC m=+0.485750909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:40:04.535233 kubelet[2263]: I0317 17:40:04.535218 2263 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:40:04.535386 kubelet[2263]: I0317 17:40:04.535354 2263 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:40:04.536680 kubelet[2263]: I0317 17:40:04.536660 2263 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:40:04.537843 kubelet[2263]: E0317 17:40:04.537802 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" Mar 17 17:40:04.550892 kubelet[2263]: I0317 17:40:04.550822 2263 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:40:04.553005 kubelet[2263]: I0317 17:40:04.552877 2263 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:40:04.553005 kubelet[2263]: I0317 17:40:04.552925 2263 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:40:04.553005 kubelet[2263]: I0317 17:40:04.552947 2263 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:40:04.553005 kubelet[2263]: E0317 17:40:04.552996 2263 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:40:04.554459 kubelet[2263]: W0317 17:40:04.553734 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:04.554459 kubelet[2263]: E0317 17:40:04.553804 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:04.559465 kubelet[2263]: I0317 17:40:04.559443 2263 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:40:04.559465 kubelet[2263]: I0317 17:40:04.559460 2263 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:40:04.559556 kubelet[2263]: I0317 17:40:04.559479 2263 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:40:04.634139 kubelet[2263]: I0317 17:40:04.634095 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:40:04.634553 kubelet[2263]: E0317 17:40:04.634509 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Mar 17 17:40:04.653812 kubelet[2263]: E0317 17:40:04.653772 2263 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:40:04.738806 kubelet[2263]: E0317 17:40:04.738700 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" Mar 17 17:40:04.836431 kubelet[2263]: I0317 17:40:04.836378 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:40:04.836895 kubelet[2263]: E0317 17:40:04.836851 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Mar 17 17:40:04.853885 kubelet[2263]: E0317 17:40:04.853855 2263 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:40:04.970981 kubelet[2263]: I0317 17:40:04.970927 2263 policy_none.go:49] "None policy: Start" Mar 17 17:40:04.971812 kubelet[2263]: I0317 17:40:04.971780 2263 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:40:04.971890 kubelet[2263]: I0317 17:40:04.971829 2263 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:40:05.140304 kubelet[2263]: E0317 17:40:05.140147 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" Mar 17 17:40:05.241725 kubelet[2263]: I0317 17:40:05.241680 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:40:05.242385 kubelet[2263]: E0317 17:40:05.242338 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Mar 17 17:40:05.254491 kubelet[2263]: E0317 17:40:05.254407 2263 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:40:05.395293 kubelet[2263]: W0317 17:40:05.395072 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:05.395293 kubelet[2263]: E0317 17:40:05.395191 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:05.484096 kubelet[2263]: W0317 17:40:05.484023 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:05.484096 kubelet[2263]: E0317 17:40:05.484081 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:05.670422 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:40:05.688076 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:40:05.691772 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
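
The lease failures above retry with a doubling interval (200ms, 400ms, 800ms, and later 1.6s and 3.2s) because nothing is listening on 10.0.0.43:6443 yet. A minimal stand-alone sketch of that kind of doubling probe, assuming a plain TCP check is all we want; this is not the kubelet's own code:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "10.0.0.43:6443" // control-plane endpoint taken from the log
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("attempt %d: %v (retrying in %v)\n", attempt, err, interval)
                time.Sleep(interval)
                interval *= 2 // mirrors the doubling seen in the kubelet's lease retries
                continue
            }
            conn.Close()
            fmt.Println("apiserver port is accepting connections")
            return
        }
    }
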
Mar 17 17:40:05.702990 kubelet[2263]: I0317 17:40:05.702937 2263 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:40:05.703301 kubelet[2263]: I0317 17:40:05.703233 2263 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:40:05.703563 kubelet[2263]: I0317 17:40:05.703415 2263 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:40:05.704743 kubelet[2263]: E0317 17:40:05.704717 2263 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:40:05.887756 kubelet[2263]: W0317 17:40:05.887615 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:05.887756 kubelet[2263]: E0317 17:40:05.887734 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:05.941868 kubelet[2263]: E0317 17:40:05.941688 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s" Mar 17 17:40:06.044674 kubelet[2263]: I0317 17:40:06.044604 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:40:06.045079 kubelet[2263]: E0317 17:40:06.045035 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Mar 17 17:40:06.049457 kubelet[2263]: W0317 17:40:06.049384 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:06.049457 kubelet[2263]: E0317 17:40:06.049448 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:06.055568 kubelet[2263]: I0317 17:40:06.055509 2263 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 17:40:06.056805 kubelet[2263]: I0317 17:40:06.056767 2263 topology_manager.go:215] "Topology Admit Handler" podUID="352fe1dd1a7c46d9ec04c38129fdbd79" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 17:40:06.057817 kubelet[2263]: I0317 17:40:06.057788 2263 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 17:40:06.063164 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. 
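
The three "Topology Admit Handler" entries above are the control-plane static pods the kubelet picked up from the static pod path logged earlier (/etc/kubernetes/manifests). An illustrative sketch that simply lists whatever manifests are present there; the conventional kubeadm file names in the comment are an assumption, not something this log shows:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/manifests" // the static pod path from the log
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read static pod path:", err)
            return
        }
        for _, e := range entries {
            // On a kubeadm-style control plane these are typically
            // kube-apiserver.yaml, kube-controller-manager.yaml and
            // kube-scheduler.yaml; the log does not list the file names.
            fmt.Println(filepath.Join(dir, e.Name()))
        }
    }
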
Mar 17 17:40:06.074774 systemd[1]: Created slice kubepods-burstable-pod352fe1dd1a7c46d9ec04c38129fdbd79.slice - libcontainer container kubepods-burstable-pod352fe1dd1a7c46d9ec04c38129fdbd79.slice. Mar 17 17:40:06.079168 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. Mar 17 17:40:06.144623 kubelet[2263]: I0317 17:40:06.144555 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:06.144623 kubelet[2263]: I0317 17:40:06.144617 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:06.145107 kubelet[2263]: I0317 17:40:06.144678 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/352fe1dd1a7c46d9ec04c38129fdbd79-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"352fe1dd1a7c46d9ec04c38129fdbd79\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:40:06.145107 kubelet[2263]: I0317 17:40:06.144703 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/352fe1dd1a7c46d9ec04c38129fdbd79-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"352fe1dd1a7c46d9ec04c38129fdbd79\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:40:06.145107 kubelet[2263]: I0317 17:40:06.144729 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/352fe1dd1a7c46d9ec04c38129fdbd79-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"352fe1dd1a7c46d9ec04c38129fdbd79\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:40:06.145107 kubelet[2263]: I0317 17:40:06.144758 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:06.145107 kubelet[2263]: I0317 17:40:06.144782 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:06.145224 kubelet[2263]: I0317 17:40:06.144802 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:06.145224 kubelet[2263]: I0317 17:40:06.144824 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:40:06.371400 kubelet[2263]: E0317 17:40:06.371245 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:06.372207 containerd[1494]: time="2025-03-17T17:40:06.372163199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:06.377571 kubelet[2263]: E0317 17:40:06.377534 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:06.378019 containerd[1494]: time="2025-03-17T17:40:06.377992062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:352fe1dd1a7c46d9ec04c38129fdbd79,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:06.381279 kubelet[2263]: E0317 17:40:06.381248 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:06.381577 containerd[1494]: time="2025-03-17T17:40:06.381537149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:06.599795 kubelet[2263]: E0317 17:40:06.599744 2263 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:07.228544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138529294.mount: Deactivated successfully. 
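
The recurring "Nameserver limits exceeded" errors mean the host resolv.conf lists more than the three nameservers Kubernetes will apply, so the extras are dropped and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive. A rough stand-alone illustration of that cap, not the kubelet's actual resolver code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNameservers = 3 // the limit the kubelet warning refers to
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("dropping %d extra nameserver(s)\n", len(servers)-maxNameservers)
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }
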
Mar 17 17:40:07.240708 containerd[1494]: time="2025-03-17T17:40:07.240655352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:40:07.245133 containerd[1494]: time="2025-03-17T17:40:07.245073224Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:40:07.246138 containerd[1494]: time="2025-03-17T17:40:07.246085642Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:40:07.248267 kubelet[2263]: W0317 17:40:07.248221 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:07.248575 kubelet[2263]: E0317 17:40:07.248270 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused Mar 17 17:40:07.250120 containerd[1494]: time="2025-03-17T17:40:07.250062582Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:40:07.250911 containerd[1494]: time="2025-03-17T17:40:07.250834242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:40:07.251823 containerd[1494]: time="2025-03-17T17:40:07.251779126Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:40:07.252792 containerd[1494]: time="2025-03-17T17:40:07.252725673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:40:07.257944 containerd[1494]: time="2025-03-17T17:40:07.257906961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:40:07.258745 containerd[1494]: time="2025-03-17T17:40:07.258706246Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 880.650447ms" Mar 17 17:40:07.259457 containerd[1494]: time="2025-03-17T17:40:07.259427858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 887.141283ms" Mar 17 17:40:07.263927 containerd[1494]: time="2025-03-17T17:40:07.263897273Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 882.283842ms" Mar 17 17:40:07.512756 containerd[1494]: time="2025-03-17T17:40:07.508905207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:07.512756 containerd[1494]: time="2025-03-17T17:40:07.509483273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:07.512756 containerd[1494]: time="2025-03-17T17:40:07.509540146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:07.512756 containerd[1494]: time="2025-03-17T17:40:07.509735293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:07.512756 containerd[1494]: time="2025-03-17T17:40:07.507882208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:07.512756 containerd[1494]: time="2025-03-17T17:40:07.510140307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:07.512756 containerd[1494]: time="2025-03-17T17:40:07.510155627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:07.512756 containerd[1494]: time="2025-03-17T17:40:07.510299352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:07.542839 kubelet[2263]: E0317 17:40:07.542773 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="3.2s" Mar 17 17:40:07.551447 containerd[1494]: time="2025-03-17T17:40:07.551270060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:07.551447 containerd[1494]: time="2025-03-17T17:40:07.551348715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:07.551727 containerd[1494]: time="2025-03-17T17:40:07.551365839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:07.551727 containerd[1494]: time="2025-03-17T17:40:07.551603700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:07.569245 systemd[1]: Started cri-containerd-066da50caf0a6ff9328dca10d31d1b787bee59b0eda20281462b27d667923d95.scope - libcontainer container 066da50caf0a6ff9328dca10d31d1b787bee59b0eda20281462b27d667923d95. 
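
The pause:3.8 image events above come from containerd's CRI plugin handling the sandbox image in its k8s.io namespace. A sketch of reading that image record back with the containerd v1 Go client (the running containerd is v1.7.23 per the log); the socket path is the stock /run/containerd/containerd.sock and is an assumption here:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.GetImage(ctx, "registry.k8s.io/pause:3.8")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(img.Name(), img.Target().Digest, img.Target().Size)
    }
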
Mar 17 17:40:07.575231 systemd[1]: Started cri-containerd-8bc93605ad8ab698f01b6dd3481c55bdc3c4ee014b3264968797ddd49de48b08.scope - libcontainer container 8bc93605ad8ab698f01b6dd3481c55bdc3c4ee014b3264968797ddd49de48b08. Mar 17 17:40:07.585916 systemd[1]: Started cri-containerd-9a0ba9b488f628118694a6ee9bb9601f9c85b6e55fae153ed28433b66723a9f7.scope - libcontainer container 9a0ba9b488f628118694a6ee9bb9601f9c85b6e55fae153ed28433b66723a9f7. Mar 17 17:40:07.635111 containerd[1494]: time="2025-03-17T17:40:07.634981845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:352fe1dd1a7c46d9ec04c38129fdbd79,Namespace:kube-system,Attempt:0,} returns sandbox id \"066da50caf0a6ff9328dca10d31d1b787bee59b0eda20281462b27d667923d95\"" Mar 17 17:40:07.636050 kubelet[2263]: E0317 17:40:07.636026 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:07.639524 containerd[1494]: time="2025-03-17T17:40:07.639493835Z" level=info msg="CreateContainer within sandbox \"066da50caf0a6ff9328dca10d31d1b787bee59b0eda20281462b27d667923d95\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:40:07.646210 containerd[1494]: time="2025-03-17T17:40:07.646074217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a0ba9b488f628118694a6ee9bb9601f9c85b6e55fae153ed28433b66723a9f7\"" Mar 17 17:40:07.650880 kubelet[2263]: I0317 17:40:07.649828 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:40:07.650880 kubelet[2263]: E0317 17:40:07.650153 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:07.650880 kubelet[2263]: E0317 17:40:07.650363 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Mar 17 17:40:07.652528 containerd[1494]: time="2025-03-17T17:40:07.652501115Z" level=info msg="CreateContainer within sandbox \"9a0ba9b488f628118694a6ee9bb9601f9c85b6e55fae153ed28433b66723a9f7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:40:07.653774 containerd[1494]: time="2025-03-17T17:40:07.653754671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bc93605ad8ab698f01b6dd3481c55bdc3c4ee014b3264968797ddd49de48b08\"" Mar 17 17:40:07.654386 kubelet[2263]: E0317 17:40:07.654361 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:07.655882 containerd[1494]: time="2025-03-17T17:40:07.655857443Z" level=info msg="CreateContainer within sandbox \"8bc93605ad8ab698f01b6dd3481c55bdc3c4ee014b3264968797ddd49de48b08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:40:07.673449 containerd[1494]: time="2025-03-17T17:40:07.673397874Z" level=info msg="CreateContainer within sandbox \"066da50caf0a6ff9328dca10d31d1b787bee59b0eda20281462b27d667923d95\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dfa294112734a1cf5b67512f5deb9473afc71f84a67acf421b9533a94e57944c\"" Mar 17 17:40:07.674226 containerd[1494]: time="2025-03-17T17:40:07.674191509Z" level=info msg="StartContainer for \"dfa294112734a1cf5b67512f5deb9473afc71f84a67acf421b9533a94e57944c\"" Mar 17 17:40:07.693677 containerd[1494]: time="2025-03-17T17:40:07.693623483Z" level=info msg="CreateContainer within sandbox \"9a0ba9b488f628118694a6ee9bb9601f9c85b6e55fae153ed28433b66723a9f7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea604f5c12031e8f759ecea3f228783179a59133b6764541e1817c9eacf64747\"" Mar 17 17:40:07.696075 containerd[1494]: time="2025-03-17T17:40:07.694585652Z" level=info msg="StartContainer for \"ea604f5c12031e8f759ecea3f228783179a59133b6764541e1817c9eacf64747\"" Mar 17 17:40:07.704270 containerd[1494]: time="2025-03-17T17:40:07.704227246Z" level=info msg="CreateContainer within sandbox \"8bc93605ad8ab698f01b6dd3481c55bdc3c4ee014b3264968797ddd49de48b08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9490bb21213c1fe9742e42c43456e156989b3eecb5eb128b789c71ecab33a3c1\"" Mar 17 17:40:07.704743 containerd[1494]: time="2025-03-17T17:40:07.704621157Z" level=info msg="StartContainer for \"9490bb21213c1fe9742e42c43456e156989b3eecb5eb128b789c71ecab33a3c1\"" Mar 17 17:40:07.705907 systemd[1]: Started cri-containerd-dfa294112734a1cf5b67512f5deb9473afc71f84a67acf421b9533a94e57944c.scope - libcontainer container dfa294112734a1cf5b67512f5deb9473afc71f84a67acf421b9533a94e57944c. Mar 17 17:40:07.728869 systemd[1]: Started cri-containerd-ea604f5c12031e8f759ecea3f228783179a59133b6764541e1817c9eacf64747.scope - libcontainer container ea604f5c12031e8f759ecea3f228783179a59133b6764541e1817c9eacf64747. Mar 17 17:40:07.733210 systemd[1]: Started cri-containerd-9490bb21213c1fe9742e42c43456e156989b3eecb5eb128b789c71ecab33a3c1.scope - libcontainer container 9490bb21213c1fe9742e42c43456e156989b3eecb5eb128b789c71ecab33a3c1. 
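
The CreateContainer/StartContainer lines are the kubelet driving containerd over the CRI gRPC API, one sandbox per static pod. A hedged sketch that lists those sandboxes through the same CRI surface, assuming the default containerd socket; it is an external inspection tool, not part of the node's own components:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            fmt.Println(err)
            return
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, sb := range resp.Items {
            // Prints a truncated sandbox id plus namespace/name and state,
            // e.g. the kube-system control-plane sandboxes created above.
            fmt.Printf("%.13s  %s/%s  %s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
        }
    }
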
Mar 17 17:40:07.752575 containerd[1494]: time="2025-03-17T17:40:07.752488524Z" level=info msg="StartContainer for \"dfa294112734a1cf5b67512f5deb9473afc71f84a67acf421b9533a94e57944c\" returns successfully" Mar 17 17:40:07.790185 containerd[1494]: time="2025-03-17T17:40:07.789900774Z" level=info msg="StartContainer for \"9490bb21213c1fe9742e42c43456e156989b3eecb5eb128b789c71ecab33a3c1\" returns successfully" Mar 17 17:40:07.792499 containerd[1494]: time="2025-03-17T17:40:07.789863008Z" level=info msg="StartContainer for \"ea604f5c12031e8f759ecea3f228783179a59133b6764541e1817c9eacf64747\" returns successfully" Mar 17 17:40:08.567021 kubelet[2263]: E0317 17:40:08.566986 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:08.568995 kubelet[2263]: E0317 17:40:08.568968 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:08.571987 kubelet[2263]: E0317 17:40:08.571960 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:09.510854 kubelet[2263]: E0317 17:40:09.510780 2263 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 17 17:40:09.524897 kubelet[2263]: I0317 17:40:09.524826 2263 apiserver.go:52] "Watching apiserver" Mar 17 17:40:09.534283 kubelet[2263]: I0317 17:40:09.534236 2263 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:40:09.575009 kubelet[2263]: E0317 17:40:09.574965 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:09.875960 kubelet[2263]: E0317 17:40:09.875819 2263 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 17 17:40:10.302350 kubelet[2263]: E0317 17:40:10.302163 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:10.320848 kubelet[2263]: E0317 17:40:10.320811 2263 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 17 17:40:10.747109 kubelet[2263]: E0317 17:40:10.746955 2263 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 17:40:10.852049 kubelet[2263]: I0317 17:40:10.851980 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:40:10.897540 kubelet[2263]: I0317 17:40:10.897490 2263 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 17:40:11.524729 systemd[1]: Reloading requested from client PID 2548 ('systemctl') (unit session-5.scope)... Mar 17 17:40:11.524747 systemd[1]: Reloading... Mar 17 17:40:11.616686 zram_generator::config[2590]: No configuration found. 
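
Once the API server container is running, registration finally succeeds ("Successfully registered node" at 17:40:10). A short client-go sketch for confirming that from outside the kubelet; the kubeconfig path is an assumption (kubeadm's admin.conf), not something shown in this log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig location; adjust to whatever credentials exist on the host.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            fmt.Println(err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println(err)
            return
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
        if err != nil {
            fmt.Println("node not registered yet:", err)
            return
        }
        fmt.Println("registered:", node.Name, node.CreationTimestamp)
    }
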
Mar 17 17:40:11.726012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:40:11.815709 systemd[1]: Reloading finished in 290 ms. Mar 17 17:40:11.860102 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:11.880050 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:40:11.880325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:11.880368 systemd[1]: kubelet.service: Consumed 1.021s CPU time, 117.0M memory peak, 0B memory swap peak. Mar 17 17:40:11.892860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:40:12.047076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:40:12.052903 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:40:12.096926 kubelet[2632]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:40:12.098658 kubelet[2632]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:40:12.098658 kubelet[2632]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:40:12.098658 kubelet[2632]: I0317 17:40:12.097397 2632 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:40:12.102780 kubelet[2632]: I0317 17:40:12.102748 2632 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:40:12.102780 kubelet[2632]: I0317 17:40:12.102769 2632 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:40:12.102949 kubelet[2632]: I0317 17:40:12.102934 2632 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:40:12.104450 kubelet[2632]: I0317 17:40:12.104185 2632 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:40:12.105340 kubelet[2632]: I0317 17:40:12.105316 2632 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:40:12.115626 kubelet[2632]: I0317 17:40:12.115545 2632 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:40:12.116133 kubelet[2632]: I0317 17:40:12.115949 2632 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:40:12.116827 kubelet[2632]: I0317 17:40:12.115996 2632 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:40:12.117018 kubelet[2632]: I0317 17:40:12.116884 2632 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:40:12.117018 kubelet[2632]: I0317 17:40:12.116932 2632 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:40:12.117079 kubelet[2632]: I0317 17:40:12.117020 2632 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:40:12.117255 kubelet[2632]: I0317 17:40:12.117223 2632 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:40:12.117255 kubelet[2632]: I0317 17:40:12.117246 2632 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:40:12.117312 kubelet[2632]: I0317 17:40:12.117275 2632 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:40:12.117312 kubelet[2632]: I0317 17:40:12.117301 2632 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:40:12.119652 kubelet[2632]: I0317 17:40:12.118391 2632 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:40:12.119652 kubelet[2632]: I0317 17:40:12.118612 2632 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:40:12.119652 kubelet[2632]: I0317 17:40:12.119145 2632 server.go:1264] "Started kubelet" Mar 17 17:40:12.120025 kubelet[2632]: I0317 17:40:12.119986 2632 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:40:12.120233 kubelet[2632]: I0317 17:40:12.120175 2632 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:40:12.120420 kubelet[2632]: I0317 17:40:12.120397 2632 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:40:12.121985 kubelet[2632]: I0317 17:40:12.121960 2632 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:40:12.122185 kubelet[2632]: I0317 17:40:12.122157 2632 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:40:12.125189 kubelet[2632]: E0317 17:40:12.125167 2632 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:40:12.125324 kubelet[2632]: I0317 17:40:12.125312 2632 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:40:12.125477 kubelet[2632]: I0317 17:40:12.125465 2632 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:40:12.125687 kubelet[2632]: I0317 17:40:12.125674 2632 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:40:12.135423 kubelet[2632]: I0317 17:40:12.135387 2632 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:40:12.135423 kubelet[2632]: I0317 17:40:12.135411 2632 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:40:12.135954 kubelet[2632]: I0317 17:40:12.135539 2632 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:40:12.142544 kubelet[2632]: I0317 17:40:12.142492 2632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:40:12.143983 kubelet[2632]: I0317 17:40:12.143923 2632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:40:12.143983 kubelet[2632]: I0317 17:40:12.143959 2632 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:40:12.143983 kubelet[2632]: I0317 17:40:12.143981 2632 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:40:12.144081 kubelet[2632]: E0317 17:40:12.144031 2632 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:40:12.187478 kubelet[2632]: I0317 17:40:12.187443 2632 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:40:12.187675 kubelet[2632]: I0317 17:40:12.187630 2632 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:40:12.187737 kubelet[2632]: I0317 17:40:12.187727 2632 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:40:12.187983 kubelet[2632]: I0317 17:40:12.187965 2632 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:40:12.188074 kubelet[2632]: I0317 17:40:12.188047 2632 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:40:12.188139 kubelet[2632]: I0317 17:40:12.188130 2632 policy_none.go:49] "None policy: Start" Mar 17 17:40:12.188672 kubelet[2632]: I0317 17:40:12.188648 2632 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:40:12.188672 kubelet[2632]: I0317 17:40:12.188675 2632 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:40:12.188829 kubelet[2632]: I0317 17:40:12.188814 2632 state_mem.go:75] "Updated machine memory state" Mar 17 17:40:12.192670 kubelet[2632]: I0317 17:40:12.192490 2632 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:40:12.192782 
kubelet[2632]: I0317 17:40:12.192716 2632 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:40:12.192836 kubelet[2632]: I0317 17:40:12.192821 2632 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:40:12.229815 kubelet[2632]: I0317 17:40:12.229781 2632 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:40:12.240658 kubelet[2632]: I0317 17:40:12.240584 2632 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 17 17:40:12.240831 kubelet[2632]: I0317 17:40:12.240706 2632 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 17:40:12.244746 kubelet[2632]: I0317 17:40:12.244690 2632 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 17:40:12.244869 kubelet[2632]: I0317 17:40:12.244802 2632 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 17:40:12.244912 kubelet[2632]: I0317 17:40:12.244872 2632 topology_manager.go:215] "Topology Admit Handler" podUID="352fe1dd1a7c46d9ec04c38129fdbd79" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 17:40:12.426815 kubelet[2632]: I0317 17:40:12.426784 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:12.426933 kubelet[2632]: I0317 17:40:12.426817 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:12.426933 kubelet[2632]: I0317 17:40:12.426837 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:12.426933 kubelet[2632]: I0317 17:40:12.426856 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/352fe1dd1a7c46d9ec04c38129fdbd79-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"352fe1dd1a7c46d9ec04c38129fdbd79\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:40:12.426933 kubelet[2632]: I0317 17:40:12.426876 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:12.426933 kubelet[2632]: I0317 17:40:12.426913 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:12.427047 kubelet[2632]: I0317 17:40:12.426953 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:40:12.427047 kubelet[2632]: I0317 17:40:12.426987 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/352fe1dd1a7c46d9ec04c38129fdbd79-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"352fe1dd1a7c46d9ec04c38129fdbd79\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:40:12.427139 kubelet[2632]: I0317 17:40:12.427084 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/352fe1dd1a7c46d9ec04c38129fdbd79-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"352fe1dd1a7c46d9ec04c38129fdbd79\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:40:12.558298 kubelet[2632]: E0317 17:40:12.558258 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:12.561062 kubelet[2632]: E0317 17:40:12.560978 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:12.561333 kubelet[2632]: E0317 17:40:12.561310 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:13.118282 kubelet[2632]: I0317 17:40:13.118227 2632 apiserver.go:52] "Watching apiserver" Mar 17 17:40:13.126024 kubelet[2632]: I0317 17:40:13.126003 2632 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:40:13.174757 kubelet[2632]: E0317 17:40:13.174313 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:13.174757 kubelet[2632]: E0317 17:40:13.174610 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:13.315079 kubelet[2632]: E0317 17:40:13.314873 2632 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:40:13.315739 kubelet[2632]: E0317 17:40:13.315539 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:13.365966 kubelet[2632]: I0317 17:40:13.365479 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.365452555 
podStartE2EDuration="1.365452555s" podCreationTimestamp="2025-03-17 17:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:13.314708112 +0000 UTC m=+1.257174165" watchObservedRunningTime="2025-03-17 17:40:13.365452555 +0000 UTC m=+1.307918608" Mar 17 17:40:13.414584 kubelet[2632]: I0317 17:40:13.414513 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.414485803 podStartE2EDuration="1.414485803s" podCreationTimestamp="2025-03-17 17:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:13.414408302 +0000 UTC m=+1.356874365" watchObservedRunningTime="2025-03-17 17:40:13.414485803 +0000 UTC m=+1.356951857" Mar 17 17:40:13.414840 kubelet[2632]: I0317 17:40:13.414606 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.414600587 podStartE2EDuration="1.414600587s" podCreationTimestamp="2025-03-17 17:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:13.365756037 +0000 UTC m=+1.308222090" watchObservedRunningTime="2025-03-17 17:40:13.414600587 +0000 UTC m=+1.357066650" Mar 17 17:40:14.177287 kubelet[2632]: E0317 17:40:14.177221 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:14.177287 kubelet[2632]: E0317 17:40:14.177252 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:14.284612 sudo[1632]: pam_unix(sudo:session): session closed for user root Mar 17 17:40:14.286646 sshd[1631]: Connection closed by 10.0.0.1 port 34896 Mar 17 17:40:14.287326 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:14.293397 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:34896.service: Deactivated successfully. Mar 17 17:40:14.295476 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:40:14.295675 systemd[1]: session-5.scope: Consumed 5.058s CPU time, 192.9M memory peak, 0B memory swap peak. Mar 17 17:40:14.296200 systemd-logind[1473]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:40:14.297417 systemd-logind[1473]: Removed session 5. 
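
For kube-apiserver-localhost the reported podStartE2EDuration of 1.365452555s is exactly watchObservedRunningTime (17:40:13.365452555) minus podCreationTimestamp (17:40:12). The same subtraction, with the timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-03-17 17:40:12 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-03-17 17:40:13.365452555 +0000 UTC")
        // Prints 1.365452555s, matching the reported podStartE2EDuration.
        fmt.Println(observed.Sub(created))
    }
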
Mar 17 17:40:15.176754 kubelet[2632]: E0317 17:40:15.176722 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:18.318832 kubelet[2632]: E0317 17:40:18.318798 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:19.041438 kubelet[2632]: E0317 17:40:19.041378 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:19.182715 kubelet[2632]: E0317 17:40:19.182678 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:19.182715 kubelet[2632]: E0317 17:40:19.182691 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:22.265789 update_engine[1475]: I20250317 17:40:22.265722 1475 update_attempter.cc:509] Updating boot flags... Mar 17 17:40:22.295783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2706) Mar 17 17:40:22.325723 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2710) Mar 17 17:40:22.353673 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2710) Mar 17 17:40:24.347053 kubelet[2632]: E0317 17:40:24.347018 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:25.191810 kubelet[2632]: E0317 17:40:25.191772 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:27.912436 kubelet[2632]: I0317 17:40:27.912391 2632 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:40:27.913035 kubelet[2632]: I0317 17:40:27.912988 2632 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:40:27.913072 containerd[1494]: time="2025-03-17T17:40:27.912784897Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:40:28.191260 kubelet[2632]: I0317 17:40:28.190175 2632 topology_manager.go:215] "Topology Admit Handler" podUID="8affb01b-d5ae-42ed-83da-2d0d3b83e5d5" podNamespace="kube-system" podName="kube-proxy-jk7vl" Mar 17 17:40:28.198675 systemd[1]: Created slice kubepods-besteffort-pod8affb01b_d5ae_42ed_83da_2d0d3b83e5d5.slice - libcontainer container kubepods-besteffort-pod8affb01b_d5ae_42ed_83da_2d0d3b83e5d5.slice. Mar 17 17:40:28.223132 kubelet[2632]: I0317 17:40:28.223071 2632 topology_manager.go:215] "Topology Admit Handler" podUID="bd0299c7-8c66-4830-87bb-acfcc3a13fc2" podNamespace="kube-flannel" podName="kube-flannel-ds-4gwsf" Mar 17 17:40:28.234052 systemd[1]: Created slice kubepods-burstable-podbd0299c7_8c66_4830_87bb_acfcc3a13fc2.slice - libcontainer container kubepods-burstable-podbd0299c7_8c66_4830_87bb_acfcc3a13fc2.slice. 
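
The runtime config update hands containerd this node's pod CIDR (192.168.0.0/24); containerd then waits for a CNI config, which the flannel DaemonSet pod admitted just above is presumably meant to provide. A trivial sketch of what that /24 allotment means in addresses:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        prefix, err := netip.ParsePrefix("192.168.0.0/24") // node pod CIDR from the log
        if err != nil {
            fmt.Println(err)
            return
        }
        // A /24 leaves room for up to 256 pod addresses on this node.
        fmt.Println("pod CIDR:", prefix, "addresses:", 1<<(32-prefix.Bits()))
    }
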
Mar 17 17:40:28.315375 kubelet[2632]: I0317 17:40:28.315330 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8affb01b-d5ae-42ed-83da-2d0d3b83e5d5-kube-proxy\") pod \"kube-proxy-jk7vl\" (UID: \"8affb01b-d5ae-42ed-83da-2d0d3b83e5d5\") " pod="kube-system/kube-proxy-jk7vl" Mar 17 17:40:28.315375 kubelet[2632]: I0317 17:40:28.315372 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8affb01b-d5ae-42ed-83da-2d0d3b83e5d5-xtables-lock\") pod \"kube-proxy-jk7vl\" (UID: \"8affb01b-d5ae-42ed-83da-2d0d3b83e5d5\") " pod="kube-system/kube-proxy-jk7vl" Mar 17 17:40:28.315577 kubelet[2632]: I0317 17:40:28.315404 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8affb01b-d5ae-42ed-83da-2d0d3b83e5d5-lib-modules\") pod \"kube-proxy-jk7vl\" (UID: \"8affb01b-d5ae-42ed-83da-2d0d3b83e5d5\") " pod="kube-system/kube-proxy-jk7vl" Mar 17 17:40:28.315577 kubelet[2632]: I0317 17:40:28.315489 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cmg7\" (UniqueName: \"kubernetes.io/projected/8affb01b-d5ae-42ed-83da-2d0d3b83e5d5-kube-api-access-7cmg7\") pod \"kube-proxy-jk7vl\" (UID: \"8affb01b-d5ae-42ed-83da-2d0d3b83e5d5\") " pod="kube-system/kube-proxy-jk7vl" Mar 17 17:40:28.416601 kubelet[2632]: I0317 17:40:28.416553 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/bd0299c7-8c66-4830-87bb-acfcc3a13fc2-cni-plugin\") pod \"kube-flannel-ds-4gwsf\" (UID: \"bd0299c7-8c66-4830-87bb-acfcc3a13fc2\") " pod="kube-flannel/kube-flannel-ds-4gwsf" Mar 17 17:40:28.416601 kubelet[2632]: I0317 17:40:28.416596 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bd0299c7-8c66-4830-87bb-acfcc3a13fc2-run\") pod \"kube-flannel-ds-4gwsf\" (UID: \"bd0299c7-8c66-4830-87bb-acfcc3a13fc2\") " pod="kube-flannel/kube-flannel-ds-4gwsf" Mar 17 17:40:28.416601 kubelet[2632]: I0317 17:40:28.416611 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/bd0299c7-8c66-4830-87bb-acfcc3a13fc2-cni\") pod \"kube-flannel-ds-4gwsf\" (UID: \"bd0299c7-8c66-4830-87bb-acfcc3a13fc2\") " pod="kube-flannel/kube-flannel-ds-4gwsf" Mar 17 17:40:28.416789 kubelet[2632]: I0317 17:40:28.416647 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lcqm\" (UniqueName: \"kubernetes.io/projected/bd0299c7-8c66-4830-87bb-acfcc3a13fc2-kube-api-access-5lcqm\") pod \"kube-flannel-ds-4gwsf\" (UID: \"bd0299c7-8c66-4830-87bb-acfcc3a13fc2\") " pod="kube-flannel/kube-flannel-ds-4gwsf" Mar 17 17:40:28.416789 kubelet[2632]: I0317 17:40:28.416663 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/bd0299c7-8c66-4830-87bb-acfcc3a13fc2-flannel-cfg\") pod \"kube-flannel-ds-4gwsf\" (UID: \"bd0299c7-8c66-4830-87bb-acfcc3a13fc2\") " pod="kube-flannel/kube-flannel-ds-4gwsf" Mar 17 17:40:28.416789 kubelet[2632]: I0317 17:40:28.416697 2632 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd0299c7-8c66-4830-87bb-acfcc3a13fc2-xtables-lock\") pod \"kube-flannel-ds-4gwsf\" (UID: \"bd0299c7-8c66-4830-87bb-acfcc3a13fc2\") " pod="kube-flannel/kube-flannel-ds-4gwsf" Mar 17 17:40:28.513261 kubelet[2632]: E0317 17:40:28.513163 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:28.513767 containerd[1494]: time="2025-03-17T17:40:28.513721004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jk7vl,Uid:8affb01b-d5ae-42ed-83da-2d0d3b83e5d5,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:28.536443 kubelet[2632]: E0317 17:40:28.536412 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:28.536886 containerd[1494]: time="2025-03-17T17:40:28.536844376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4gwsf,Uid:bd0299c7-8c66-4830-87bb-acfcc3a13fc2,Namespace:kube-flannel,Attempt:0,}" Mar 17 17:40:28.893393 containerd[1494]: time="2025-03-17T17:40:28.893304139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:28.893551 containerd[1494]: time="2025-03-17T17:40:28.893379423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:28.893551 containerd[1494]: time="2025-03-17T17:40:28.893409009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:28.893596 containerd[1494]: time="2025-03-17T17:40:28.893525311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:28.896673 containerd[1494]: time="2025-03-17T17:40:28.895874914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:28.896673 containerd[1494]: time="2025-03-17T17:40:28.895932282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:28.896673 containerd[1494]: time="2025-03-17T17:40:28.895947211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:28.896673 containerd[1494]: time="2025-03-17T17:40:28.896021723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:28.915785 systemd[1]: Started cri-containerd-93d690ba6be20b96bf3f50554c374b64dac05966dd7f428638dcacc97cfa8192.scope - libcontainer container 93d690ba6be20b96bf3f50554c374b64dac05966dd7f428638dcacc97cfa8192. Mar 17 17:40:28.918978 systemd[1]: Started cri-containerd-585205a81720405ad696f8be5a9dc233cf9900924b4d37682f70b1caff8c78b8.scope - libcontainer container 585205a81720405ad696f8be5a9dc233cf9900924b4d37682f70b1caff8c78b8. 
Mar 17 17:40:28.943073 containerd[1494]: time="2025-03-17T17:40:28.942992415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jk7vl,Uid:8affb01b-d5ae-42ed-83da-2d0d3b83e5d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"93d690ba6be20b96bf3f50554c374b64dac05966dd7f428638dcacc97cfa8192\"" Mar 17 17:40:28.944454 kubelet[2632]: E0317 17:40:28.944404 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:28.946885 containerd[1494]: time="2025-03-17T17:40:28.946848867Z" level=info msg="CreateContainer within sandbox \"93d690ba6be20b96bf3f50554c374b64dac05966dd7f428638dcacc97cfa8192\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:40:28.962609 containerd[1494]: time="2025-03-17T17:40:28.962566410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-4gwsf,Uid:bd0299c7-8c66-4830-87bb-acfcc3a13fc2,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"585205a81720405ad696f8be5a9dc233cf9900924b4d37682f70b1caff8c78b8\"" Mar 17 17:40:28.963509 kubelet[2632]: E0317 17:40:28.963462 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:28.964951 containerd[1494]: time="2025-03-17T17:40:28.964894682Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Mar 17 17:40:28.983996 containerd[1494]: time="2025-03-17T17:40:28.983948848Z" level=info msg="CreateContainer within sandbox \"93d690ba6be20b96bf3f50554c374b64dac05966dd7f428638dcacc97cfa8192\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02cbdf854947ccfbbec54209886f33de8e35537022c953878b6ddbf8c35e4179\"" Mar 17 17:40:28.984624 containerd[1494]: time="2025-03-17T17:40:28.984589727Z" level=info msg="StartContainer for \"02cbdf854947ccfbbec54209886f33de8e35537022c953878b6ddbf8c35e4179\"" Mar 17 17:40:29.014813 systemd[1]: Started cri-containerd-02cbdf854947ccfbbec54209886f33de8e35537022c953878b6ddbf8c35e4179.scope - libcontainer container 02cbdf854947ccfbbec54209886f33de8e35537022c953878b6ddbf8c35e4179. Mar 17 17:40:29.050309 containerd[1494]: time="2025-03-17T17:40:29.050255897Z" level=info msg="StartContainer for \"02cbdf854947ccfbbec54209886f33de8e35537022c953878b6ddbf8c35e4179\" returns successfully" Mar 17 17:40:29.202986 kubelet[2632]: E0317 17:40:29.202870 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:29.212021 kubelet[2632]: I0317 17:40:29.211958 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jk7vl" podStartSLOduration=2.211934989 podStartE2EDuration="2.211934989s" podCreationTimestamp="2025-03-17 17:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:29.211361789 +0000 UTC m=+17.153827852" watchObservedRunningTime="2025-03-17 17:40:29.211934989 +0000 UTC m=+17.154401042" Mar 17 17:40:30.764968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42814784.mount: Deactivated successfully. 
Mar 17 17:40:30.804689 containerd[1494]: time="2025-03-17T17:40:30.804605301Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:30.805267 containerd[1494]: time="2025-03-17T17:40:30.805210742Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Mar 17 17:40:30.806487 containerd[1494]: time="2025-03-17T17:40:30.806452401Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:30.808732 containerd[1494]: time="2025-03-17T17:40:30.808694262Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:30.809679 containerd[1494]: time="2025-03-17T17:40:30.809629019Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.844693388s" Mar 17 17:40:30.809720 containerd[1494]: time="2025-03-17T17:40:30.809680737Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Mar 17 17:40:30.811979 containerd[1494]: time="2025-03-17T17:40:30.811949147Z" level=info msg="CreateContainer within sandbox \"585205a81720405ad696f8be5a9dc233cf9900924b4d37682f70b1caff8c78b8\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 17 17:40:30.825552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493937010.mount: Deactivated successfully. Mar 17 17:40:30.826741 containerd[1494]: time="2025-03-17T17:40:30.826696474Z" level=info msg="CreateContainer within sandbox \"585205a81720405ad696f8be5a9dc233cf9900924b4d37682f70b1caff8c78b8\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"8b263c3b523caea7402b8203d359ea30e8707d6de3489a201b2c95fab28ccbd6\"" Mar 17 17:40:30.827340 containerd[1494]: time="2025-03-17T17:40:30.827252741Z" level=info msg="StartContainer for \"8b263c3b523caea7402b8203d359ea30e8707d6de3489a201b2c95fab28ccbd6\"" Mar 17 17:40:30.857845 systemd[1]: Started cri-containerd-8b263c3b523caea7402b8203d359ea30e8707d6de3489a201b2c95fab28ccbd6.scope - libcontainer container 8b263c3b523caea7402b8203d359ea30e8707d6de3489a201b2c95fab28ccbd6. Mar 17 17:40:30.885684 systemd[1]: cri-containerd-8b263c3b523caea7402b8203d359ea30e8707d6de3489a201b2c95fab28ccbd6.scope: Deactivated successfully. 
Mar 17 17:40:30.886941 containerd[1494]: time="2025-03-17T17:40:30.886891590Z" level=info msg="StartContainer for \"8b263c3b523caea7402b8203d359ea30e8707d6de3489a201b2c95fab28ccbd6\" returns successfully" Mar 17 17:40:30.947078 containerd[1494]: time="2025-03-17T17:40:30.947001715Z" level=info msg="shim disconnected" id=8b263c3b523caea7402b8203d359ea30e8707d6de3489a201b2c95fab28ccbd6 namespace=k8s.io Mar 17 17:40:30.947078 containerd[1494]: time="2025-03-17T17:40:30.947067651Z" level=warning msg="cleaning up after shim disconnected" id=8b263c3b523caea7402b8203d359ea30e8707d6de3489a201b2c95fab28ccbd6 namespace=k8s.io Mar 17 17:40:30.947078 containerd[1494]: time="2025-03-17T17:40:30.947076207Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:40:31.209241 kubelet[2632]: E0317 17:40:31.209193 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:31.210078 containerd[1494]: time="2025-03-17T17:40:31.210050043Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Mar 17 17:40:31.691896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b263c3b523caea7402b8203d359ea30e8707d6de3489a201b2c95fab28ccbd6-rootfs.mount: Deactivated successfully. Mar 17 17:40:33.029503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746772082.mount: Deactivated successfully. Mar 17 17:40:33.556914 containerd[1494]: time="2025-03-17T17:40:33.556859324Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:33.557563 containerd[1494]: time="2025-03-17T17:40:33.557481283Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Mar 17 17:40:33.558741 containerd[1494]: time="2025-03-17T17:40:33.558707508Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:33.562453 containerd[1494]: time="2025-03-17T17:40:33.562418515Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:40:33.563484 containerd[1494]: time="2025-03-17T17:40:33.563438779Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.353342399s" Mar 17 17:40:33.563553 containerd[1494]: time="2025-03-17T17:40:33.563482011Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Mar 17 17:40:33.566056 containerd[1494]: time="2025-03-17T17:40:33.566005326Z" level=info msg="CreateContainer within sandbox \"585205a81720405ad696f8be5a9dc233cf9900924b4d37682f70b1caff8c78b8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:40:33.579613 containerd[1494]: time="2025-03-17T17:40:33.579574305Z" level=info msg="CreateContainer within sandbox \"585205a81720405ad696f8be5a9dc233cf9900924b4d37682f70b1caff8c78b8\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3cbaa826ea1d0a31acc7ca938d6e42fac723b40678291c1e73f964ac41e12910\"" Mar 17 17:40:33.580136 containerd[1494]: time="2025-03-17T17:40:33.579993019Z" level=info msg="StartContainer for \"3cbaa826ea1d0a31acc7ca938d6e42fac723b40678291c1e73f964ac41e12910\"" Mar 17 17:40:33.608772 systemd[1]: Started cri-containerd-3cbaa826ea1d0a31acc7ca938d6e42fac723b40678291c1e73f964ac41e12910.scope - libcontainer container 3cbaa826ea1d0a31acc7ca938d6e42fac723b40678291c1e73f964ac41e12910. Mar 17 17:40:33.632962 systemd[1]: cri-containerd-3cbaa826ea1d0a31acc7ca938d6e42fac723b40678291c1e73f964ac41e12910.scope: Deactivated successfully. Mar 17 17:40:33.635577 containerd[1494]: time="2025-03-17T17:40:33.635520908Z" level=info msg="StartContainer for \"3cbaa826ea1d0a31acc7ca938d6e42fac723b40678291c1e73f964ac41e12910\" returns successfully" Mar 17 17:40:33.652794 kubelet[2632]: I0317 17:40:33.652754 2632 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:40:33.673195 kubelet[2632]: I0317 17:40:33.673132 2632 topology_manager.go:215] "Topology Admit Handler" podUID="b99931ef-e00c-4534-82b7-c24604d6ac07" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zx98k" Mar 17 17:40:33.675252 kubelet[2632]: I0317 17:40:33.675196 2632 topology_manager.go:215] "Topology Admit Handler" podUID="2dbad3c2-a888-44a8-9f76-2ed86f3073f7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bfg4q" Mar 17 17:40:33.683157 systemd[1]: Created slice kubepods-burstable-podb99931ef_e00c_4534_82b7_c24604d6ac07.slice - libcontainer container kubepods-burstable-podb99931ef_e00c_4534_82b7_c24604d6ac07.slice. Mar 17 17:40:33.691619 systemd[1]: Created slice kubepods-burstable-pod2dbad3c2_a888_44a8_9f76_2ed86f3073f7.slice - libcontainer container kubepods-burstable-pod2dbad3c2_a888_44a8_9f76_2ed86f3073f7.slice. 
Mar 17 17:40:33.851164 kubelet[2632]: I0317 17:40:33.850998 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b99931ef-e00c-4534-82b7-c24604d6ac07-config-volume\") pod \"coredns-7db6d8ff4d-zx98k\" (UID: \"b99931ef-e00c-4534-82b7-c24604d6ac07\") " pod="kube-system/coredns-7db6d8ff4d-zx98k" Mar 17 17:40:33.851164 kubelet[2632]: I0317 17:40:33.851050 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dbad3c2-a888-44a8-9f76-2ed86f3073f7-config-volume\") pod \"coredns-7db6d8ff4d-bfg4q\" (UID: \"2dbad3c2-a888-44a8-9f76-2ed86f3073f7\") " pod="kube-system/coredns-7db6d8ff4d-bfg4q" Mar 17 17:40:33.851164 kubelet[2632]: I0317 17:40:33.851069 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngj4v\" (UniqueName: \"kubernetes.io/projected/2dbad3c2-a888-44a8-9f76-2ed86f3073f7-kube-api-access-ngj4v\") pod \"coredns-7db6d8ff4d-bfg4q\" (UID: \"2dbad3c2-a888-44a8-9f76-2ed86f3073f7\") " pod="kube-system/coredns-7db6d8ff4d-bfg4q" Mar 17 17:40:33.851164 kubelet[2632]: I0317 17:40:33.851097 2632 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdhxl\" (UniqueName: \"kubernetes.io/projected/b99931ef-e00c-4534-82b7-c24604d6ac07-kube-api-access-tdhxl\") pod \"coredns-7db6d8ff4d-zx98k\" (UID: \"b99931ef-e00c-4534-82b7-c24604d6ac07\") " pod="kube-system/coredns-7db6d8ff4d-zx98k" Mar 17 17:40:33.914382 containerd[1494]: time="2025-03-17T17:40:33.914292117Z" level=info msg="shim disconnected" id=3cbaa826ea1d0a31acc7ca938d6e42fac723b40678291c1e73f964ac41e12910 namespace=k8s.io Mar 17 17:40:33.914558 containerd[1494]: time="2025-03-17T17:40:33.914392417Z" level=warning msg="cleaning up after shim disconnected" id=3cbaa826ea1d0a31acc7ca938d6e42fac723b40678291c1e73f964ac41e12910 namespace=k8s.io Mar 17 17:40:33.914558 containerd[1494]: time="2025-03-17T17:40:33.914416853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:40:33.940807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cbaa826ea1d0a31acc7ca938d6e42fac723b40678291c1e73f964ac41e12910-rootfs.mount: Deactivated successfully. Mar 17 17:40:33.987389 kubelet[2632]: E0317 17:40:33.987345 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:33.987884 containerd[1494]: time="2025-03-17T17:40:33.987835205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zx98k,Uid:b99931ef-e00c-4534-82b7-c24604d6ac07,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:34.014072 kubelet[2632]: E0317 17:40:34.014033 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:34.014527 containerd[1494]: time="2025-03-17T17:40:34.014494558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bfg4q,Uid:2dbad3c2-a888-44a8-9f76-2ed86f3073f7,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:34.018662 systemd[1]: run-netns-cni\x2d7fee4293\x2dd5c3\x2de8b6\x2d7076\x2d0ba4f7e583fa.mount: Deactivated successfully. 
Mar 17 17:40:34.018769 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78da233c3798122294d32e56cec6d4d096a6411b6e39aa7f8f8a09292a281613-shm.mount: Deactivated successfully. Mar 17 17:40:34.019805 containerd[1494]: time="2025-03-17T17:40:34.019761959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zx98k,Uid:b99931ef-e00c-4534-82b7-c24604d6ac07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78da233c3798122294d32e56cec6d4d096a6411b6e39aa7f8f8a09292a281613\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:40:34.020053 kubelet[2632]: E0317 17:40:34.020006 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78da233c3798122294d32e56cec6d4d096a6411b6e39aa7f8f8a09292a281613\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:40:34.020118 kubelet[2632]: E0317 17:40:34.020080 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78da233c3798122294d32e56cec6d4d096a6411b6e39aa7f8f8a09292a281613\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-zx98k" Mar 17 17:40:34.020118 kubelet[2632]: E0317 17:40:34.020104 2632 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78da233c3798122294d32e56cec6d4d096a6411b6e39aa7f8f8a09292a281613\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-zx98k" Mar 17 17:40:34.020189 kubelet[2632]: E0317 17:40:34.020160 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zx98k_kube-system(b99931ef-e00c-4534-82b7-c24604d6ac07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zx98k_kube-system(b99931ef-e00c-4534-82b7-c24604d6ac07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78da233c3798122294d32e56cec6d4d096a6411b6e39aa7f8f8a09292a281613\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-zx98k" podUID="b99931ef-e00c-4534-82b7-c24604d6ac07" Mar 17 17:40:34.036192 containerd[1494]: time="2025-03-17T17:40:34.036128421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bfg4q,Uid:2dbad3c2-a888-44a8-9f76-2ed86f3073f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62ae6a435e0ac918faed0b75bf2272f93ec98d089a62d8fea55d5c40f6426ac3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 17 17:40:34.036419 kubelet[2632]: E0317 17:40:34.036386 2632 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62ae6a435e0ac918faed0b75bf2272f93ec98d089a62d8fea55d5c40f6426ac3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" Mar 17 17:40:34.036477 kubelet[2632]: E0317 17:40:34.036449 2632 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62ae6a435e0ac918faed0b75bf2272f93ec98d089a62d8fea55d5c40f6426ac3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-bfg4q" Mar 17 17:40:34.036477 kubelet[2632]: E0317 17:40:34.036466 2632 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62ae6a435e0ac918faed0b75bf2272f93ec98d089a62d8fea55d5c40f6426ac3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-bfg4q" Mar 17 17:40:34.036553 kubelet[2632]: E0317 17:40:34.036511 2632 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bfg4q_kube-system(2dbad3c2-a888-44a8-9f76-2ed86f3073f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-bfg4q_kube-system(2dbad3c2-a888-44a8-9f76-2ed86f3073f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62ae6a435e0ac918faed0b75bf2272f93ec98d089a62d8fea55d5c40f6426ac3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-bfg4q" podUID="2dbad3c2-a888-44a8-9f76-2ed86f3073f7" Mar 17 17:40:34.214760 kubelet[2632]: E0317 17:40:34.214725 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:34.216419 containerd[1494]: time="2025-03-17T17:40:34.216382824Z" level=info msg="CreateContainer within sandbox \"585205a81720405ad696f8be5a9dc233cf9900924b4d37682f70b1caff8c78b8\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 17 17:40:34.238206 containerd[1494]: time="2025-03-17T17:40:34.238143868Z" level=info msg="CreateContainer within sandbox \"585205a81720405ad696f8be5a9dc233cf9900924b4d37682f70b1caff8c78b8\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"2ead7c9e55fca3c9c04e0d2fc37276aee3a306d61762e35a0ed17cc2aa3e9d36\"" Mar 17 17:40:34.239454 containerd[1494]: time="2025-03-17T17:40:34.239387955Z" level=info msg="StartContainer for \"2ead7c9e55fca3c9c04e0d2fc37276aee3a306d61762e35a0ed17cc2aa3e9d36\"" Mar 17 17:40:34.271783 systemd[1]: Started cri-containerd-2ead7c9e55fca3c9c04e0d2fc37276aee3a306d61762e35a0ed17cc2aa3e9d36.scope - libcontainer container 2ead7c9e55fca3c9c04e0d2fc37276aee3a306d61762e35a0ed17cc2aa3e9d36. 
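Both coredns sandboxes fail above because the flannel CNI plugin could not read /run/flannel/subnet.env, the small KEY=VALUE file that the kube-flannel daemon writes once it is running; the kube-flannel container only starts a few records later, after which the retried sandboxes succeed. A minimal sketch of that lookup follows. The helper name is ours (the real check lives in the flannel CNI plugin's Go code), and the example values are assumptions consistent with this node's 192.168.0.0/24 pod CIDR and the MTU 1450 seen in the delegate config later in the log, not a copy of the real file.

example_subnet_env = [                # assumed contents, for illustration only
    "FLANNEL_NETWORK=192.168.0.0/17",
    "FLANNEL_SUBNET=192.168.0.1/24",
    "FLANNEL_MTU=1450",
    "FLANNEL_IPMASQ=true",
]

def load_flannel_subnet_env(path="/run/flannel/subnet.env"):
    # Raises FileNotFoundError until flanneld has written the file, which is
    # the condition behind the "no such file or directory" failures above.
    with open(path) as f:
        return dict(line.strip().split("=", 1) for line in f if "=" in line)

print(dict(line.split("=", 1) for line in example_subnet_env))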
Mar 17 17:40:34.299322 containerd[1494]: time="2025-03-17T17:40:34.299276728Z" level=info msg="StartContainer for \"2ead7c9e55fca3c9c04e0d2fc37276aee3a306d61762e35a0ed17cc2aa3e9d36\" returns successfully" Mar 17 17:40:35.217518 kubelet[2632]: E0317 17:40:35.217470 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:35.225604 kubelet[2632]: I0317 17:40:35.225560 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-4gwsf" podStartSLOduration=2.6256197500000003 podStartE2EDuration="7.225542676s" podCreationTimestamp="2025-03-17 17:40:28 +0000 UTC" firstStartedPulling="2025-03-17 17:40:28.964386284 +0000 UTC m=+16.906852337" lastFinishedPulling="2025-03-17 17:40:33.56430921 +0000 UTC m=+21.506775263" observedRunningTime="2025-03-17 17:40:35.225225736 +0000 UTC m=+23.167691789" watchObservedRunningTime="2025-03-17 17:40:35.225542676 +0000 UTC m=+23.168008739" Mar 17 17:40:35.339776 systemd-networkd[1437]: flannel.1: Link UP Mar 17 17:40:35.339784 systemd-networkd[1437]: flannel.1: Gained carrier Mar 17 17:40:36.218400 kubelet[2632]: E0317 17:40:36.218362 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:36.956830 systemd-networkd[1437]: flannel.1: Gained IPv6LL Mar 17 17:40:39.090588 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:38198.service - OpenSSH per-connection server daemon (10.0.0.1:38198). Mar 17 17:40:39.131685 sshd[3286]: Accepted publickey for core from 10.0.0.1 port 38198 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:39.133056 sshd-session[3286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:39.136864 systemd-logind[1473]: New session 6 of user core. Mar 17 17:40:39.145739 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:40:39.260767 sshd[3288]: Connection closed by 10.0.0.1 port 38198 Mar 17 17:40:39.261116 sshd-session[3286]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:39.265393 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:38198.service: Deactivated successfully. Mar 17 17:40:39.267324 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:40:39.267957 systemd-logind[1473]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:40:39.268797 systemd-logind[1473]: Removed session 6. Mar 17 17:40:44.277039 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:41662.service - OpenSSH per-connection server daemon (10.0.0.1:41662). Mar 17 17:40:44.316935 sshd[3322]: Accepted publickey for core from 10.0.0.1 port 41662 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:44.318520 sshd-session[3322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:44.322516 systemd-logind[1473]: New session 7 of user core. Mar 17 17:40:44.337787 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:40:44.457159 sshd[3324]: Connection closed by 10.0.0.1 port 41662 Mar 17 17:40:44.457672 sshd-session[3322]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:44.462461 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:41662.service: Deactivated successfully. Mar 17 17:40:44.464995 systemd[1]: session-7.scope: Deactivated successfully. 
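The startup-latency record above for kube-flannel-ds-4gwsf is internally consistent: podStartE2EDuration is the gap between podCreationTimestamp and the observed running time, and podStartSLOduration is that gap minus the image-pull window (for kube-proxy-jk7vl earlier, with zero-value pull timestamps, the two durations are identical). A quick arithmetic check using the timestamps from this record, trimmed to microseconds for strptime's %f; this only re-derives the logged numbers and is not kubelet's actual bookkeeping code:

from datetime import datetime, timezone

def ts(s):
    # Timestamps copied from the record above, truncated to microseconds.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created   = datetime(2025, 3, 17, 17, 40, 28, tzinfo=timezone.utc)  # podCreationTimestamp
pull_from = ts("2025-03-17 17:40:28.964386")   # firstStartedPulling
pull_to   = ts("2025-03-17 17:40:33.564309")   # lastFinishedPulling
observed  = ts("2025-03-17 17:40:35.225542")   # watchObservedRunningTime

e2e = observed - created                  # ~7.2255s, the reported podStartE2EDuration
slo = e2e - (pull_to - pull_from)         # ~2.6256s, the reported podStartSLOduration
print(round(e2e.total_seconds(), 4), round(slo.total_seconds(), 4))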
Mar 17 17:40:44.465703 systemd-logind[1473]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:40:44.466574 systemd-logind[1473]: Removed session 7. Mar 17 17:40:45.144882 kubelet[2632]: E0317 17:40:45.144831 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:45.145872 containerd[1494]: time="2025-03-17T17:40:45.145831610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zx98k,Uid:b99931ef-e00c-4534-82b7-c24604d6ac07,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:45.168792 systemd-networkd[1437]: cni0: Link UP Mar 17 17:40:45.168878 systemd-networkd[1437]: cni0: Gained carrier Mar 17 17:40:45.172124 systemd-networkd[1437]: cni0: Lost carrier Mar 17 17:40:45.174784 systemd-networkd[1437]: vethd95cb04c: Link UP Mar 17 17:40:45.179003 kernel: cni0: port 1(vethd95cb04c) entered blocking state Mar 17 17:40:45.179163 kernel: cni0: port 1(vethd95cb04c) entered disabled state Mar 17 17:40:45.179181 kernel: vethd95cb04c: entered allmulticast mode Mar 17 17:40:45.179193 kernel: vethd95cb04c: entered promiscuous mode Mar 17 17:40:45.180088 kernel: cni0: port 1(vethd95cb04c) entered blocking state Mar 17 17:40:45.180125 kernel: cni0: port 1(vethd95cb04c) entered forwarding state Mar 17 17:40:45.182312 kernel: cni0: port 1(vethd95cb04c) entered disabled state Mar 17 17:40:45.188788 kernel: cni0: port 1(vethd95cb04c) entered blocking state Mar 17 17:40:45.188851 kernel: cni0: port 1(vethd95cb04c) entered forwarding state Mar 17 17:40:45.188812 systemd-networkd[1437]: vethd95cb04c: Gained carrier Mar 17 17:40:45.189557 systemd-networkd[1437]: cni0: Gained carrier Mar 17 17:40:45.192120 containerd[1494]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0", "type":"bridge"} Mar 17 17:40:45.192120 containerd[1494]: delegateAdd: netconf sent to delegate plugin: Mar 17 17:40:45.210305 containerd[1494]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-17T17:40:45.210217894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:45.210305 containerd[1494]: time="2025-03-17T17:40:45.210271758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:45.210305 containerd[1494]: time="2025-03-17T17:40:45.210283140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.210471 containerd[1494]: time="2025-03-17T17:40:45.210353365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:45.230773 systemd[1]: Started cri-containerd-262de25775084e047551bbf0c3025cb33e3f6d3d145dbe739dd389330833cccd.scope - libcontainer container 262de25775084e047551bbf0c3025cb33e3f6d3d145dbe739dd389330833cccd. Mar 17 17:40:45.242630 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:40:45.265770 containerd[1494]: time="2025-03-17T17:40:45.265714793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zx98k,Uid:b99931ef-e00c-4534-82b7-c24604d6ac07,Namespace:kube-system,Attempt:0,} returns sandbox id \"262de25775084e047551bbf0c3025cb33e3f6d3d145dbe739dd389330833cccd\"" Mar 17 17:40:45.266627 kubelet[2632]: E0317 17:40:45.266594 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:45.268895 containerd[1494]: time="2025-03-17T17:40:45.268870792Z" level=info msg="CreateContainer within sandbox \"262de25775084e047551bbf0c3025cb33e3f6d3d145dbe739dd389330833cccd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:40:45.282219 containerd[1494]: time="2025-03-17T17:40:45.282168361Z" level=info msg="CreateContainer within sandbox \"262de25775084e047551bbf0c3025cb33e3f6d3d145dbe739dd389330833cccd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2788407e14ee21ef7aaef25955730fbc48a0026bcdf9d7dc0bf8bb03ad8c99f6\"" Mar 17 17:40:45.282670 containerd[1494]: time="2025-03-17T17:40:45.282619220Z" level=info msg="StartContainer for \"2788407e14ee21ef7aaef25955730fbc48a0026bcdf9d7dc0bf8bb03ad8c99f6\"" Mar 17 17:40:45.311765 systemd[1]: Started cri-containerd-2788407e14ee21ef7aaef25955730fbc48a0026bcdf9d7dc0bf8bb03ad8c99f6.scope - libcontainer container 2788407e14ee21ef7aaef25955730fbc48a0026bcdf9d7dc0bf8bb03ad8c99f6. 
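The containerd lines above dump the netconf that the flannel CNI plugin hands to the delegate bridge plugin, first as a Go map and then as JSON, while the kernel messages show the resulting cni0 bridge port and veth pair coming up. Parsing that JSON (copied verbatim from the log, with nothing added) makes the effective settings easier to read:

import json

# Delegate netconf copied from the containerd log record above.
netconf = json.loads('{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,'
                     '"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],'
                     '"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},'
                     '"isDefaultGateway":true,"isGateway":true,"mtu":1450,'
                     '"name":"cbr0","type":"bridge"}')

print(netconf["type"], netconf["name"], netconf["mtu"])      # bridge cbr0 1450
print(netconf["ipam"]["ranges"][0][0]["subnet"])             # this node's pod subnet
print([r["dst"] for r in netconf["ipam"]["routes"]])         # wider route via the overlay

The /24 range matches the pod CIDR from the earlier kubelet_network line, and the 192.168.0.0/17 route is presumably the cluster-wide flannel network reachable through the flannel.1 interface.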
Mar 17 17:40:45.341891 containerd[1494]: time="2025-03-17T17:40:45.341840964Z" level=info msg="StartContainer for \"2788407e14ee21ef7aaef25955730fbc48a0026bcdf9d7dc0bf8bb03ad8c99f6\" returns successfully" Mar 17 17:40:46.238050 kubelet[2632]: E0317 17:40:46.237827 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:46.245836 kubelet[2632]: I0317 17:40:46.245777 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zx98k" podStartSLOduration=18.245763658 podStartE2EDuration="18.245763658s" podCreationTimestamp="2025-03-17 17:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:46.245670047 +0000 UTC m=+34.188136100" watchObservedRunningTime="2025-03-17 17:40:46.245763658 +0000 UTC m=+34.188229711" Mar 17 17:40:46.684804 systemd-networkd[1437]: vethd95cb04c: Gained IPv6LL Mar 17 17:40:47.068906 systemd-networkd[1437]: cni0: Gained IPv6LL Mar 17 17:40:47.145057 kubelet[2632]: E0317 17:40:47.145006 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:47.145495 containerd[1494]: time="2025-03-17T17:40:47.145388533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bfg4q,Uid:2dbad3c2-a888-44a8-9f76-2ed86f3073f7,Namespace:kube-system,Attempt:0,}" Mar 17 17:40:47.174529 systemd-networkd[1437]: vethc1530803: Link UP Mar 17 17:40:47.176832 kernel: cni0: port 2(vethc1530803) entered blocking state Mar 17 17:40:47.176907 kernel: cni0: port 2(vethc1530803) entered disabled state Mar 17 17:40:47.177656 kernel: vethc1530803: entered allmulticast mode Mar 17 17:40:47.177784 kernel: vethc1530803: entered promiscuous mode Mar 17 17:40:47.184001 kernel: cni0: port 2(vethc1530803) entered blocking state Mar 17 17:40:47.184080 kernel: cni0: port 2(vethc1530803) entered forwarding state Mar 17 17:40:47.184944 systemd-networkd[1437]: vethc1530803: Gained carrier Mar 17 17:40:47.187448 containerd[1494]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e938), "name":"cbr0", "type":"bridge"} Mar 17 17:40:47.187448 containerd[1494]: delegateAdd: netconf sent to delegate plugin: Mar 17 17:40:47.208528 containerd[1494]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-17T17:40:47.208402482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:40:47.208528 containerd[1494]: time="2025-03-17T17:40:47.208468418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:40:47.208528 containerd[1494]: time="2025-03-17T17:40:47.208479190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:47.208786 containerd[1494]: time="2025-03-17T17:40:47.208580264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:40:47.238939 systemd[1]: Started cri-containerd-6474dd32c9092d296257da58a4595c8f77fab74ef5d7491616aff408f347493e.scope - libcontainer container 6474dd32c9092d296257da58a4595c8f77fab74ef5d7491616aff408f347493e. Mar 17 17:40:47.239796 kubelet[2632]: E0317 17:40:47.239759 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:47.253505 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:40:47.280252 containerd[1494]: time="2025-03-17T17:40:47.280214246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bfg4q,Uid:2dbad3c2-a888-44a8-9f76-2ed86f3073f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"6474dd32c9092d296257da58a4595c8f77fab74ef5d7491616aff408f347493e\"" Mar 17 17:40:47.281033 kubelet[2632]: E0317 17:40:47.280995 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:47.283163 containerd[1494]: time="2025-03-17T17:40:47.283121720Z" level=info msg="CreateContainer within sandbox \"6474dd32c9092d296257da58a4595c8f77fab74ef5d7491616aff408f347493e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:40:47.330344 containerd[1494]: time="2025-03-17T17:40:47.329538779Z" level=info msg="CreateContainer within sandbox \"6474dd32c9092d296257da58a4595c8f77fab74ef5d7491616aff408f347493e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0181f20557c25172efebac3d3e96342819b4258b4e54c17a9a3016be50da5853\"" Mar 17 17:40:47.330344 containerd[1494]: time="2025-03-17T17:40:47.330200523Z" level=info msg="StartContainer for \"0181f20557c25172efebac3d3e96342819b4258b4e54c17a9a3016be50da5853\"" Mar 17 17:40:47.356774 systemd[1]: Started cri-containerd-0181f20557c25172efebac3d3e96342819b4258b4e54c17a9a3016be50da5853.scope - libcontainer container 0181f20557c25172efebac3d3e96342819b4258b4e54c17a9a3016be50da5853. Mar 17 17:40:47.386733 containerd[1494]: time="2025-03-17T17:40:47.386599325Z" level=info msg="StartContainer for \"0181f20557c25172efebac3d3e96342819b4258b4e54c17a9a3016be50da5853\" returns successfully" Mar 17 17:40:48.165827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149611670.mount: Deactivated successfully. 
Mar 17 17:40:48.242020 kubelet[2632]: E0317 17:40:48.241974 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:48.243380 kubelet[2632]: E0317 17:40:48.243086 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:48.251404 kubelet[2632]: I0317 17:40:48.251321 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bfg4q" podStartSLOduration=20.251304225 podStartE2EDuration="20.251304225s" podCreationTimestamp="2025-03-17 17:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:40:48.250290555 +0000 UTC m=+36.192756608" watchObservedRunningTime="2025-03-17 17:40:48.251304225 +0000 UTC m=+36.193770278" Mar 17 17:40:48.924836 systemd-networkd[1437]: vethc1530803: Gained IPv6LL Mar 17 17:40:49.245692 kubelet[2632]: E0317 17:40:49.245531 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:49.470744 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:41664.service - OpenSSH per-connection server daemon (10.0.0.1:41664). Mar 17 17:40:49.511393 sshd[3592]: Accepted publickey for core from 10.0.0.1 port 41664 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:49.512781 sshd-session[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:49.516887 systemd-logind[1473]: New session 8 of user core. Mar 17 17:40:49.527809 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:40:49.635488 sshd[3594]: Connection closed by 10.0.0.1 port 41664 Mar 17 17:40:49.635854 sshd-session[3592]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:49.647189 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:41664.service: Deactivated successfully. Mar 17 17:40:49.648893 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:40:49.650351 systemd-logind[1473]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:40:49.651871 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:41668.service - OpenSSH per-connection server daemon (10.0.0.1:41668). Mar 17 17:40:49.652697 systemd-logind[1473]: Removed session 8. Mar 17 17:40:49.708779 sshd[3608]: Accepted publickey for core from 10.0.0.1 port 41668 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:49.710405 sshd-session[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:49.714599 systemd-logind[1473]: New session 9 of user core. Mar 17 17:40:49.722788 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:40:49.881503 sshd[3610]: Connection closed by 10.0.0.1 port 41668 Mar 17 17:40:49.883153 sshd-session[3608]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:49.895365 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:41668.service: Deactivated successfully. Mar 17 17:40:49.897840 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:40:49.900171 systemd-logind[1473]: Session 9 logged out. Waiting for processes to exit. 
Mar 17 17:40:49.910092 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:41672.service - OpenSSH per-connection server daemon (10.0.0.1:41672). Mar 17 17:40:49.911288 systemd-logind[1473]: Removed session 9. Mar 17 17:40:49.945127 sshd[3621]: Accepted publickey for core from 10.0.0.1 port 41672 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:49.946437 sshd-session[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:49.950418 systemd-logind[1473]: New session 10 of user core. Mar 17 17:40:49.966757 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:40:50.075753 sshd[3623]: Connection closed by 10.0.0.1 port 41672 Mar 17 17:40:50.076126 sshd-session[3621]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:50.080066 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:41672.service: Deactivated successfully. Mar 17 17:40:50.081858 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:40:50.082400 systemd-logind[1473]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:40:50.083325 systemd-logind[1473]: Removed session 10. Mar 17 17:40:50.247342 kubelet[2632]: E0317 17:40:50.247304 2632 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:40:55.087452 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:45006.service - OpenSSH per-connection server daemon (10.0.0.1:45006). Mar 17 17:40:55.127448 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 45006 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:55.129112 sshd-session[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:55.133114 systemd-logind[1473]: New session 11 of user core. Mar 17 17:40:55.142941 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:40:55.250878 sshd[3660]: Connection closed by 10.0.0.1 port 45006 Mar 17 17:40:55.251369 sshd-session[3658]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:55.260545 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:45006.service: Deactivated successfully. Mar 17 17:40:55.262364 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:40:55.263992 systemd-logind[1473]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:40:55.268866 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:45012.service - OpenSSH per-connection server daemon (10.0.0.1:45012). Mar 17 17:40:55.269711 systemd-logind[1473]: Removed session 11. Mar 17 17:40:55.305472 sshd[3673]: Accepted publickey for core from 10.0.0.1 port 45012 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:55.306928 sshd-session[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:55.310740 systemd-logind[1473]: New session 12 of user core. Mar 17 17:40:55.323750 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:40:55.511090 sshd[3675]: Connection closed by 10.0.0.1 port 45012 Mar 17 17:40:55.511483 sshd-session[3673]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:55.521867 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:45012.service: Deactivated successfully. Mar 17 17:40:55.523890 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:40:55.525310 systemd-logind[1473]: Session 12 logged out. Waiting for processes to exit. 
Mar 17 17:40:55.532109 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:45014.service - OpenSSH per-connection server daemon (10.0.0.1:45014). Mar 17 17:40:55.533147 systemd-logind[1473]: Removed session 12. Mar 17 17:40:55.570795 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 45014 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:55.572506 sshd-session[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:55.577214 systemd-logind[1473]: New session 13 of user core. Mar 17 17:40:55.591852 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:40:56.879032 sshd[3708]: Connection closed by 10.0.0.1 port 45014 Mar 17 17:40:56.879752 sshd-session[3706]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:56.893335 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:45014.service: Deactivated successfully. Mar 17 17:40:56.896888 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:40:56.898803 systemd-logind[1473]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:40:56.906104 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:45020.service - OpenSSH per-connection server daemon (10.0.0.1:45020). Mar 17 17:40:56.907186 systemd-logind[1473]: Removed session 13. Mar 17 17:40:56.964046 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 45020 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:56.966144 sshd-session[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:56.973601 systemd-logind[1473]: New session 14 of user core. Mar 17 17:40:56.987362 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:40:57.213725 sshd[3730]: Connection closed by 10.0.0.1 port 45020 Mar 17 17:40:57.214156 sshd-session[3728]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:57.227627 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:45020.service: Deactivated successfully. Mar 17 17:40:57.229800 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:40:57.231560 systemd-logind[1473]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:40:57.239061 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:45024.service - OpenSSH per-connection server daemon (10.0.0.1:45024). Mar 17 17:40:57.240533 systemd-logind[1473]: Removed session 14. Mar 17 17:40:57.278308 sshd[3741]: Accepted publickey for core from 10.0.0.1 port 45024 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:40:57.280484 sshd-session[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:40:57.285524 systemd-logind[1473]: New session 15 of user core. Mar 17 17:40:57.293945 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:40:57.411795 sshd[3743]: Connection closed by 10.0.0.1 port 45024 Mar 17 17:40:57.412183 sshd-session[3741]: pam_unix(sshd:session): session closed for user core Mar 17 17:40:57.416971 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:45024.service: Deactivated successfully. Mar 17 17:40:57.419272 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:40:57.419985 systemd-logind[1473]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:40:57.421132 systemd-logind[1473]: Removed session 15. Mar 17 17:41:02.423547 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:33928.service - OpenSSH per-connection server daemon (10.0.0.1:33928). 
Mar 17 17:41:02.464212 sshd[3778]: Accepted publickey for core from 10.0.0.1 port 33928 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:41:02.465550 sshd-session[3778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:02.469514 systemd-logind[1473]: New session 16 of user core. Mar 17 17:41:02.476759 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:41:02.584808 sshd[3780]: Connection closed by 10.0.0.1 port 33928 Mar 17 17:41:02.585208 sshd-session[3778]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:02.588914 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:33928.service: Deactivated successfully. Mar 17 17:41:02.590715 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:41:02.591296 systemd-logind[1473]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:41:02.592209 systemd-logind[1473]: Removed session 16. Mar 17 17:41:07.607952 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:33932.service - OpenSSH per-connection server daemon (10.0.0.1:33932). Mar 17 17:41:07.645898 sshd[3816]: Accepted publickey for core from 10.0.0.1 port 33932 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:41:07.647753 sshd-session[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:07.652133 systemd-logind[1473]: New session 17 of user core. Mar 17 17:41:07.659770 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:41:07.773025 sshd[3818]: Connection closed by 10.0.0.1 port 33932 Mar 17 17:41:07.773411 sshd-session[3816]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:07.777456 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:33932.service: Deactivated successfully. Mar 17 17:41:07.779525 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:41:07.780247 systemd-logind[1473]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:41:07.781275 systemd-logind[1473]: Removed session 17. Mar 17 17:41:12.786727 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:60300.service - OpenSSH per-connection server daemon (10.0.0.1:60300). Mar 17 17:41:12.825791 sshd[3854]: Accepted publickey for core from 10.0.0.1 port 60300 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:41:12.827150 sshd-session[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:12.831409 systemd-logind[1473]: New session 18 of user core. Mar 17 17:41:12.841821 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:41:12.946078 sshd[3856]: Connection closed by 10.0.0.1 port 60300 Mar 17 17:41:12.946460 sshd-session[3854]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:12.950315 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:60300.service: Deactivated successfully. Mar 17 17:41:12.952444 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:41:12.953123 systemd-logind[1473]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:41:12.954049 systemd-logind[1473]: Removed session 18. Mar 17 17:41:17.958960 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:60302.service - OpenSSH per-connection server daemon (10.0.0.1:60302). 
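Each Accepted publickey record in these sshd lines identifies the key for user core by its OpenSSH SHA256 fingerprint (SHA256:j201F9FRK1q3...). That string is the unpadded base64 of the SHA-256 digest of the raw public-key blob; a small illustration with a placeholder blob, since the actual key never appears in the log:

import base64, hashlib

# Placeholder public-key blob (ssh-rsa wire-format header plus dummy bytes);
# the real key for user "core" is not part of the log.
pubkey_blob = b"\x00\x00\x00\x07ssh-rsa" + bytes(64)

digest = hashlib.sha256(pubkey_blob).digest()
fingerprint = "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
print(fingerprint)   # same format as the sshd lines above, different value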
Mar 17 17:41:17.998680 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 60302 ssh2: RSA SHA256:j201F9FRK1q3ChnxQf0adNdYppDp+g37vmaXPvsVhek Mar 17 17:41:18.000243 sshd-session[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:18.003822 systemd-logind[1473]: New session 19 of user core. Mar 17 17:41:18.013750 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:41:18.118177 sshd[3891]: Connection closed by 10.0.0.1 port 60302 Mar 17 17:41:18.118558 sshd-session[3889]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:18.123069 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:60302.service: Deactivated successfully. Mar 17 17:41:18.125236 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:41:18.126036 systemd-logind[1473]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:41:18.126915 systemd-logind[1473]: Removed session 19.