Mar 17 17:55:21.901549 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:55:21.901570 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:55:21.901581 kernel: BIOS-provided physical RAM map:
Mar 17 17:55:21.901588 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 17:55:21.901595 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 17:55:21.901601 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 17:55:21.901608 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 17:55:21.901615 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 17:55:21.901621 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 17 17:55:21.901638 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 17 17:55:21.901644 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 17 17:55:21.901653 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 17 17:55:21.901659 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 17 17:55:21.901666 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 17 17:55:21.901674 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 17 17:55:21.901681 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 17:55:21.901690 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 17 17:55:21.901697 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 17 17:55:21.901704 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 17 17:55:21.901710 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 17 17:55:21.901717 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 17 17:55:21.901724 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 17:55:21.901731 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 17 17:55:21.901738 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:55:21.901744 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 17 17:55:21.901751 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:55:21.901758 kernel: NX (Execute Disable) protection: active
Mar 17 17:55:21.901767 kernel: APIC: Static calls initialized
Mar 17 17:55:21.901774 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 17 17:55:21.901781 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 17 17:55:21.901788 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 17 17:55:21.901794 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 17 17:55:21.901801 kernel: extended physical RAM map:
Mar 17 17:55:21.901808 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 17:55:21.901815 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 17:55:21.901822 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 17:55:21.901828 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 17:55:21.901835 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 17:55:21.901842 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 17 17:55:21.901851 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 17 17:55:21.901862 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Mar 17 17:55:21.901869 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Mar 17 17:55:21.901876 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Mar 17 17:55:21.901883 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Mar 17 17:55:21.901890 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Mar 17 17:55:21.901899 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 17 17:55:21.901906 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 17 17:55:21.901913 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 17 17:55:21.901920 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 17 17:55:21.901927 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 17:55:21.901934 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 17 17:55:21.901942 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 17 17:55:21.901949 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 17 17:55:21.901956 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 17 17:55:21.901965 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 17 17:55:21.901972 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 17:55:21.901979 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 17 17:55:21.901986 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:55:21.901993 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 17 17:55:21.902000 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:55:21.902007 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:55:21.902014 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Mar 17 17:55:21.902021 kernel: random: crng init done
Mar 17 17:55:21.902029 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 17 17:55:21.902036 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 17 17:55:21.902043 kernel: secureboot: Secure boot disabled
Mar 17 17:55:21.902052 kernel: SMBIOS 2.8 present.
Mar 17 17:55:21.902059 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 17 17:55:21.902066 kernel: Hypervisor detected: KVM
Mar 17 17:55:21.902073 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:55:21.902080 kernel: kvm-clock: using sched offset of 2719326271 cycles
Mar 17 17:55:21.902088 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:55:21.902096 kernel: tsc: Detected 2794.748 MHz processor
Mar 17 17:55:21.902103 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:55:21.902111 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:55:21.902118 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 17 17:55:21.902127 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 17 17:55:21.902135 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:55:21.902142 kernel: Using GB pages for direct mapping
Mar 17 17:55:21.902149 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:55:21.902156 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 17 17:55:21.902164 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:55:21.902171 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:21.902178 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:21.902186 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 17 17:55:21.902195 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:21.902202 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:21.902210 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:21.902217 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:55:21.902224 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 17:55:21.902231 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 17 17:55:21.902239 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Mar 17 17:55:21.902246 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 17 17:55:21.902253 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 17 17:55:21.902262 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 17 17:55:21.902270 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 17 17:55:21.902277 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 17 17:55:21.902284 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 17 17:55:21.902291 kernel: No NUMA configuration found
Mar 17 17:55:21.902298 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 17 17:55:21.902305 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Mar 17 17:55:21.902313 kernel: Zone ranges:
Mar 17 17:55:21.902320 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:55:21.902329 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 17 17:55:21.902336 kernel: Normal empty
Mar 17 17:55:21.902344 kernel: Movable zone start for each node
Mar 17 17:55:21.902351 kernel: Early memory node ranges
Mar 17 17:55:21.902358 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 17 17:55:21.902365 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 17 17:55:21.902372 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 17 17:55:21.902379 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 17 17:55:21.902386 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 17 17:55:21.902394 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 17 17:55:21.902403 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Mar 17 17:55:21.902410 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Mar 17 17:55:21.902417 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 17 17:55:21.902424 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:55:21.902431 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 17 17:55:21.902446 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 17 17:55:21.902456 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:55:21.902463 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 17 17:55:21.902470 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 17 17:55:21.902485 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 17 17:55:21.902492 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 17 17:55:21.902500 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 17 17:55:21.902510 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:55:21.902518 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:55:21.902526 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:55:21.902533 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:55:21.902541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:55:21.902551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:55:21.902558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:55:21.902566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:55:21.902573 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:55:21.902580 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:55:21.902588 kernel: TSC deadline timer available
Mar 17 17:55:21.902595 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 17:55:21.902603 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:55:21.902610 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 17:55:21.902620 kernel: kvm-guest: setup PV sched yield
Mar 17 17:55:21.902637 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 17 17:55:21.902644 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:55:21.902652 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:55:21.902660 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 17 17:55:21.902668 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 17 17:55:21.902675 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 17 17:55:21.902683 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 17:55:21.902690 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:55:21.902701 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:55:21.902710 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:55:21.902718 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:55:21.902725 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:55:21.902733 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:55:21.902741 kernel: Fallback order for Node 0: 0
Mar 17 17:55:21.902748 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Mar 17 17:55:21.902756 kernel: Policy zone: DMA32
Mar 17 17:55:21.902765 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:55:21.902773 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 177824K reserved, 0K cma-reserved)
Mar 17 17:55:21.902781 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:55:21.902788 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:55:21.902796 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:55:21.902804 kernel: Dynamic Preempt: voluntary
Mar 17 17:55:21.902811 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:55:21.902819 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:55:21.902827 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:55:21.902837 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:55:21.902844 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:55:21.902852 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:55:21.902860 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:55:21.902867 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:55:21.902875 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 17:55:21.902882 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:55:21.902890 kernel: Console: colour dummy device 80x25
Mar 17 17:55:21.902897 kernel: printk: console [ttyS0] enabled
Mar 17 17:55:21.902907 kernel: ACPI: Core revision 20230628
Mar 17 17:55:21.902915 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:55:21.902922 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:55:21.902930 kernel: x2apic enabled
Mar 17 17:55:21.902937 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:55:21.902945 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 17 17:55:21.902952 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 17 17:55:21.902960 kernel: kvm-guest: setup PV IPIs
Mar 17 17:55:21.902967 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:55:21.902977 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 17:55:21.902985 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Mar 17 17:55:21.902992 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 17:55:21.903000 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 17:55:21.903007 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 17:55:21.903015 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:55:21.903022 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:55:21.903030 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:55:21.903037 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:55:21.903047 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 17:55:21.903055 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 17:55:21.903062 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:55:21.903070 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:55:21.903077 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 17:55:21.903086 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 17:55:21.903093 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 17:55:21.903101 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:55:21.903108 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:55:21.903118 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:55:21.903126 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:55:21.903133 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 17:55:21.903141 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:55:21.903148 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:55:21.903156 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:55:21.903163 kernel: landlock: Up and running.
Mar 17 17:55:21.903171 kernel: SELinux: Initializing.
Mar 17 17:55:21.903178 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:55:21.903188 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:55:21.903196 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 17:55:21.903203 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:55:21.903211 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:55:21.903219 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:55:21.903226 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 17:55:21.903234 kernel: ... version: 0
Mar 17 17:55:21.903241 kernel: ... bit width: 48
Mar 17 17:55:21.903251 kernel: ... generic registers: 6
Mar 17 17:55:21.903258 kernel: ... value mask: 0000ffffffffffff
Mar 17 17:55:21.903266 kernel: ... max period: 00007fffffffffff
Mar 17 17:55:21.903273 kernel: ... fixed-purpose events: 0
Mar 17 17:55:21.903280 kernel: ... event mask: 000000000000003f
Mar 17 17:55:21.903288 kernel: signal: max sigframe size: 1776
Mar 17 17:55:21.903295 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:55:21.903303 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:55:21.903310 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:55:21.903318 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:55:21.903327 kernel: .... node #0, CPUs: #1 #2 #3
Mar 17 17:55:21.903335 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:55:21.903342 kernel: smpboot: Max logical packages: 1
Mar 17 17:55:21.903350 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Mar 17 17:55:21.903357 kernel: devtmpfs: initialized
Mar 17 17:55:21.903365 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:55:21.903372 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 17 17:55:21.903380 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 17 17:55:21.903387 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 17 17:55:21.903397 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 17 17:55:21.903405 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Mar 17 17:55:21.903412 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 17 17:55:21.903420 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:55:21.903427 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:55:21.903435 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:55:21.903442 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:55:21.903450 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:55:21.903460 kernel: audit: type=2000 audit(1742234121.829:1): state=initialized audit_enabled=0 res=1
Mar 17 17:55:21.903467 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:55:21.903482 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:55:21.903489 kernel: cpuidle: using governor menu
Mar 17 17:55:21.903497 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:55:21.903505 kernel: dca service started, version 1.12.1
Mar 17 17:55:21.903512 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 17 17:55:21.903520 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:55:21.903527 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:55:21.903537 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:55:21.903545 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:55:21.903552 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:55:21.903560 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:55:21.903567 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:55:21.903575 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:55:21.903582 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:55:21.903590 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:55:21.903597 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:55:21.903607 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:55:21.903614 kernel: ACPI: Interpreter enabled
Mar 17 17:55:21.903622 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 17:55:21.903642 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:55:21.903650 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:55:21.903657 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:55:21.903665 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 17:55:21.903673 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:55:21.903849 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:55:21.903983 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 17:55:21.904108 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 17:55:21.904118 kernel: PCI host bridge to bus 0000:00
Mar 17 17:55:21.904243 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:55:21.904357 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:55:21.904485 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:55:21.904606 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 17 17:55:21.904734 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 17 17:55:21.904849 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 17 17:55:21.904962 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:55:21.905109 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 17:55:21.905244 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 17:55:21.905368 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 17 17:55:21.905505 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 17 17:55:21.905642 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 17 17:55:21.905769 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 17 17:55:21.905893 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:55:21.906028 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:55:21.906153 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 17 17:55:21.906278 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 17 17:55:21.906407 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 17 17:55:21.906549 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:55:21.906691 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 17 17:55:21.906817 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 17 17:55:21.906955 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 17 17:55:21.907092 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:55:21.907222 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 17 17:55:21.907370 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 17 17:55:21.907527 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 17 17:55:21.907697 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 17 17:55:21.907831 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 17:55:21.907956 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 17:55:21.908087 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 17:55:21.908216 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 17 17:55:21.908341 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 17 17:55:21.908491 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 17:55:21.908618 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 17 17:55:21.908640 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:55:21.908648 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:55:21.908656 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:55:21.908664 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:55:21.908675 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 17:55:21.908683 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 17:55:21.908690 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 17:55:21.908698 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 17:55:21.908706 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 17:55:21.908713 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 17:55:21.908721 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 17:55:21.908728 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 17:55:21.908736 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 17:55:21.908746 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 17:55:21.908754 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 17:55:21.908761 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 17:55:21.908769 kernel: iommu: Default domain type: Translated
Mar 17 17:55:21.908776 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:55:21.908784 kernel: efivars: Registered efivars operations
Mar 17 17:55:21.908791 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:55:21.908799 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:55:21.908807 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 17 17:55:21.908817 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 17 17:55:21.908824 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Mar 17 17:55:21.908832 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Mar 17 17:55:21.908839 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 17 17:55:21.908847 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 17 17:55:21.908854 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Mar 17 17:55:21.908862 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 17 17:55:21.908988 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 17:55:21.909116 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 17:55:21.909240 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:55:21.909251 kernel: vgaarb: loaded
Mar 17 17:55:21.909259 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:55:21.909267 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:55:21.909274 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:55:21.909282 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:55:21.909290 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:55:21.909297 kernel: pnp: PnP ACPI init
Mar 17 17:55:21.909437 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 17 17:55:21.909449 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 17:55:21.909459 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:55:21.909467 kernel: NET: Registered PF_INET protocol family
Mar 17 17:55:21.909501 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:55:21.909512 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:55:21.909520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:55:21.909528 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:55:21.909539 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:55:21.909547 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:55:21.909555 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:55:21.909563 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:55:21.909571 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:55:21.909578 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:55:21.909722 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 17 17:55:21.909849 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 17 17:55:21.909974 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:55:21.910106 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:55:21.910272 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:55:21.910386 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 17 17:55:21.910551 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 17 17:55:21.910706 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 17 17:55:21.910722 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:55:21.910730 kernel: Initialise system trusted keyrings
Mar 17 17:55:21.910740 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:55:21.910748 kernel: Key type asymmetric registered
Mar 17 17:55:21.910756 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:55:21.910764 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:55:21.910772 kernel: io scheduler mq-deadline registered
Mar 17 17:55:21.910780 kernel: io scheduler kyber registered
Mar 17 17:55:21.910787 kernel: io scheduler bfq registered
Mar 17 17:55:21.910795 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:55:21.910804 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 17:55:21.910812 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 17:55:21.910822 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 17:55:21.910830 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:55:21.910838 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:55:21.910846 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:55:21.910854 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:55:21.910866 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:55:21.910992 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 17:55:21.911109 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 17:55:21.911120 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 17 17:55:21.911233 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:55:21 UTC (1742234121)
Mar 17 17:55:21.911349 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 17 17:55:21.911360 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 17 17:55:21.911368 kernel: efifb: probing for efifb
Mar 17 17:55:21.911379 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 17 17:55:21.911387 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 17 17:55:21.911394 kernel: efifb: scrolling: redraw
Mar 17 17:55:21.911402 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 17:55:21.911410 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:55:21.911418 kernel: fb0: EFI VGA frame buffer device
Mar 17 17:55:21.911426 kernel: pstore: Using crash dump compression: deflate
Mar 17 17:55:21.911434 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 17 17:55:21.911444 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:55:21.911455 kernel: Segment Routing with IPv6
Mar 17 17:55:21.911465 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:55:21.911473 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:55:21.911488 kernel: Key type dns_resolver registered
Mar 17 17:55:21.911496 kernel: IPI shorthand broadcast: enabled
Mar 17 17:55:21.911504 kernel: sched_clock: Marking stable (656003859, 167438469)->(841681764, -18239436)
Mar 17 17:55:21.911513 kernel: registered taskstats version 1
Mar 17 17:55:21.911520 kernel: Loading compiled-in X.509 certificates
Mar 17 17:55:21.911528 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:55:21.911539 kernel: Key type .fscrypt registered
Mar 17 17:55:21.911546 kernel: Key type fscrypt-provisioning registered
Mar 17 17:55:21.911554 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:55:21.911562 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:55:21.911570 kernel: ima: No architecture policies found
Mar 17 17:55:21.911578 kernel: clk: Disabling unused clocks
Mar 17 17:55:21.911586 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 17:55:21.911593 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 17:55:21.911601 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 17:55:21.911612 kernel: Run /init as init process
Mar 17 17:55:21.911619 kernel: with arguments:
Mar 17 17:55:21.911639 kernel: /init
Mar 17 17:55:21.911647 kernel: with environment:
Mar 17 17:55:21.911654 kernel: HOME=/
Mar 17 17:55:21.911662 kernel: TERM=linux
Mar 17 17:55:21.911670 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:55:21.911682 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:55:21.911694 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:55:21.911705 systemd[1]: Detected virtualization kvm.
Mar 17 17:55:21.911714 systemd[1]: Detected architecture x86-64.
Mar 17 17:55:21.911722 systemd[1]: Running in initrd.
Mar 17 17:55:21.911730 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:55:21.911739 systemd[1]: Hostname set to <localhost>.
Mar 17 17:55:21.911747 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:55:21.911755 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:55:21.911766 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:55:21.911775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:55:21.911784 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:55:21.911793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:55:21.911801 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:55:21.911811 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:55:21.911821 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:55:21.911832 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:55:21.911840 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:55:21.911849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:55:21.911857 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:55:21.911865 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:55:21.911874 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:55:21.911882 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:55:21.911890 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:55:21.911902 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:55:21.911910 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:55:21.911919 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:55:21.911927 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:55:21.911936 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:55:21.911944 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:55:21.911953 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:55:21.911961 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:55:21.911969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:55:21.912000 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:55:21.912015 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:55:21.912031 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:55:21.912040 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:55:21.912048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:21.912057 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:55:21.912065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:55:21.912077 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:55:21.912108 systemd-journald[193]: Collecting audit messages is disabled.
Mar 17 17:55:21.912134 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:55:21.912144 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:21.912153 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:55:21.912161 systemd-journald[193]: Journal started
Mar 17 17:55:21.912180 systemd-journald[193]: Runtime Journal (/run/log/journal/8bfa87a769804691b335df6707fb04bb) is 6M, max 48.2M, 42.2M free.
Mar 17 17:55:21.910588 systemd-modules-load[194]: Inserted module 'overlay'
Mar 17 17:55:21.916180 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:55:21.919529 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:55:21.924142 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:55:21.926776 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:55:21.940293 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:55:21.940197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:55:21.941758 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:55:21.945721 kernel: Bridge firewalling registered
Mar 17 17:55:21.944532 systemd-modules-load[194]: Inserted module 'br_netfilter'
Mar 17 17:55:21.949773 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:55:21.950039 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:55:21.952942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:55:21.956450 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:55:21.966199 dracut-cmdline[223]: dracut-dracut-053
Mar 17 17:55:21.967036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:55:21.971682 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:55:21.976751 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:55:22.011658 systemd-resolved[239]: Positive Trust Anchors:
Mar 17 17:55:22.011671 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:55:22.011700 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:55:22.014795 systemd-resolved[239]: Defaulting to hostname 'linux'.
Mar 17 17:55:22.016015 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:55:22.021908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:55:22.052668 kernel: SCSI subsystem initialized
Mar 17 17:55:22.061669 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:55:22.072668 kernel: iscsi: registered transport (tcp)
Mar 17 17:55:22.096662 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:55:22.096718 kernel: QLogic iSCSI HBA Driver
Mar 17 17:55:22.147839 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:55:22.161753 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:55:22.189605 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:55:22.189726 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:55:22.189743 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:55:22.235668 kernel: raid6: avx2x4 gen() 25467 MB/s
Mar 17 17:55:22.252664 kernel: raid6: avx2x2 gen() 28914 MB/s
Mar 17 17:55:22.269733 kernel: raid6: avx2x1 gen() 25797 MB/s
Mar 17 17:55:22.269758 kernel: raid6: using algorithm avx2x2 gen() 28914 MB/s
Mar 17 17:55:22.293650 kernel: raid6: .... xor() 19914 MB/s, rmw enabled
Mar 17 17:55:22.293662 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:55:22.314654 kernel: xor: automatically using best checksumming function avx
Mar 17 17:55:22.464667 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:55:22.479281 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:55:22.490893 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:55:22.508528 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 17 17:55:22.514993 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:55:22.526792 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:55:22.539979 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Mar 17 17:55:22.572962 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:55:22.586759 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:55:22.659038 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:55:22.669859 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:55:22.682773 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:55:22.685812 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:55:22.688535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:55:22.689835 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:55:22.706892 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:55:22.706955 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 17 17:55:22.738012 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:55:22.738166 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:55:22.738178 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:55:22.738188 kernel: libata version 3.00 loaded.
Mar 17 17:55:22.738199 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:55:22.738209 kernel: GPT:9289727 != 19775487
Mar 17 17:55:22.738220 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:55:22.738236 kernel: GPT:9289727 != 19775487
Mar 17 17:55:22.738246 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:55:22.738256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:55:22.705523 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:55:22.715977 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:55:22.741200 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 17:55:22.756468 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 17:55:22.756492 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 17:55:22.756677 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 17:55:22.756823 kernel: scsi host0: ahci
Mar 17 17:55:22.756980 kernel: scsi host1: ahci
Mar 17 17:55:22.757126 kernel: scsi host2: ahci
Mar 17 17:55:22.757272 kernel: scsi host3: ahci
Mar 17 17:55:22.757427 kernel: scsi host4: ahci
Mar 17 17:55:22.757589 kernel: scsi host5: ahci
Mar 17 17:55:22.757756 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 17 17:55:22.757767 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 17 17:55:22.757778 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 17 17:55:22.757788 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 17 17:55:22.757799 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 17 17:55:22.757809 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 17 17:55:22.742268 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:55:22.742415 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:55:22.745061 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:55:22.746813 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:55:22.746942 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:22.748794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:22.760904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:22.774423 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (466)
Mar 17 17:55:22.777645 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (468)
Mar 17 17:55:22.779100 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:22.797852 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:55:22.809677 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:55:22.830960 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:55:22.840931 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:55:22.842315 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:55:22.852839 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:55:22.853993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:55:22.854051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:22.856789 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:22.859913 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:55:22.864165 disk-uuid[558]: Primary Header is updated.
Mar 17 17:55:22.864165 disk-uuid[558]: Secondary Entries is updated.
Mar 17 17:55:22.864165 disk-uuid[558]: Secondary Header is updated.
Mar 17 17:55:22.868664 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:55:22.872645 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:55:22.876709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:55:22.890965 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:55:22.919305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:55:23.067654 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 17:55:23.067726 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 17:55:23.068650 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 17:55:23.069668 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 17:55:23.069765 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 17:55:23.070658 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 17:55:23.071659 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 17:55:23.072845 kernel: ata3.00: applying bridge limits
Mar 17 17:55:23.072857 kernel: ata3.00: configured for UDMA/100
Mar 17 17:55:23.073663 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 17:55:23.122664 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 17:55:23.139416 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:55:23.139457 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 17:55:23.874674 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:55:23.875202 disk-uuid[561]: The operation has completed successfully.
Mar 17 17:55:23.907442 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:55:23.907577 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:55:23.949811 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:55:23.953193 sh[600]: Success
Mar 17 17:55:23.964652 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 17:55:23.998030 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:55:24.015129 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:55:24.018786 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:55:24.028215 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 17:55:24.028244 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:55:24.028260 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:55:24.029226 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:55:24.029964 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:55:24.034391 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:55:24.036591 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:55:24.051766 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:55:24.053680 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:55:24.064349 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:55:24.064382 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:55:24.064393 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:55:24.067762 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:55:24.075915 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:55:24.078053 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:55:24.086820 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:55:24.094752 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:55:24.148786 ignition[701]: Ignition 2.20.0 Mar 17 17:55:24.148797 ignition[701]: Stage: fetch-offline Mar 17 17:55:24.148836 ignition[701]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:55:24.148846 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:55:24.148930 ignition[701]: parsed url from cmdline: "" Mar 17 17:55:24.148934 ignition[701]: no config URL provided Mar 17 17:55:24.148939 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:55:24.148948 ignition[701]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:55:24.156903 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:55:24.148972 ignition[701]: op(1): [started] loading QEMU firmware config module Mar 17 17:55:24.148977 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 17:55:24.155182 ignition[701]: op(1): [finished] loading QEMU firmware config module Mar 17 17:55:24.163809 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:55:24.190989 systemd-networkd[791]: lo: Link UP Mar 17 17:55:24.190996 systemd-networkd[791]: lo: Gained carrier Mar 17 17:55:24.194713 systemd-networkd[791]: Enumeration completed Mar 17 17:55:24.194792 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:55:24.196003 systemd[1]: Reached target network.target - Network. Mar 17 17:55:24.199024 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:55:24.199028 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:55:24.199756 systemd-networkd[791]: eth0: Link UP Mar 17 17:55:24.199760 systemd-networkd[791]: eth0: Gained carrier Mar 17 17:55:24.199766 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:55:24.208295 ignition[701]: parsing config with SHA512: a8128e0181f05facf1853dc2b9fe255c07bf3473758f38cef350fde1bee6d2fbdb0dc1ce7dd74c169818d0d031e559d36fd996839a3155f25f348abe9f57e82c Mar 17 17:55:24.214242 unknown[701]: fetched base config from "system" Mar 17 17:55:24.214368 unknown[701]: fetched user config from "qemu" Mar 17 17:55:24.214890 ignition[701]: fetch-offline: fetch-offline passed Mar 17 17:55:24.214978 ignition[701]: Ignition finished successfully Mar 17 17:55:24.217132 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:55:24.218774 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 17:55:24.224670 systemd-networkd[791]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:55:24.227807 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:55:24.242913 ignition[795]: Ignition 2.20.0 Mar 17 17:55:24.242922 ignition[795]: Stage: kargs Mar 17 17:55:24.243074 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:55:24.243085 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:55:24.243864 ignition[795]: kargs: kargs passed Mar 17 17:55:24.247205 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
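The fetch-offline stage finds no config URL on the command line, so it loads qemu_fw_cfg and pulls the user config from QEMU's firmware config device before parsing it (the SHA512 line) and merging it with the base config. A hedged example of how such a VM is typically started so Ignition finds the config at the well-known fw_cfg key; the key name follows Flatcar's documented convention and the rest of the QEMU invocation is elided:

  $ qemu-system-x86_64 ... \
        -fw_cfg name=opt/org.flatcar-linux/config,file=./config.ign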
Mar 17 17:55:24.243908 ignition[795]: Ignition finished successfully Mar 17 17:55:24.259748 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:55:24.270324 ignition[806]: Ignition 2.20.0 Mar 17 17:55:24.270334 ignition[806]: Stage: disks Mar 17 17:55:24.270487 ignition[806]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:55:24.270497 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:55:24.273222 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:55:24.271273 ignition[806]: disks: disks passed Mar 17 17:55:24.275201 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:55:24.271320 ignition[806]: Ignition finished successfully Mar 17 17:55:24.277142 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:55:24.278980 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:55:24.281023 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:55:24.282045 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:55:24.293782 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:55:24.309213 systemd-fsck[817]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:55:24.449905 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:55:25.038708 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:55:25.119655 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none. Mar 17 17:55:25.120373 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:55:25.120944 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:55:25.132716 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:55:25.134587 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:55:25.136456 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 17:55:25.136497 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:55:25.152696 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (825) Mar 17 17:55:25.152724 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:55:25.152736 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:55:25.152746 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:55:25.152756 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:55:25.136520 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:55:25.140731 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:55:25.145064 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:55:25.153823 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
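With the disks stage passed, the initrd fscks the root filesystem by label and mounts it at /sysroot, then stacks /sysroot/usr and /sysroot/oem on top. The device names seen so far are consistent with Flatcar's standard partition layout; the summary below is drawn from that convention rather than from this log alone, and the commands approximate what systemd-fsck-root and sysroot.mount perform:

  # vda1 EFI-SYSTEM (vfat)   vda3/vda4 USR-A/USR-B   vda6 OEM (btrfs)
  # vda7 OEM-CONFIG          vda9 ROOT (ext4)
  $ fsck -a /dev/disk/by-label/ROOT     # dispatches to e2fsck here
  $ mount /dev/disk/by-label/ROOT /sysroot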
Mar 17 17:55:25.182128 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:55:25.186557 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:55:25.190886 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:55:25.195005 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:55:25.269341 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:55:25.281737 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:55:25.283411 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:55:25.289654 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:55:25.306180 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:55:25.308355 ignition[937]: INFO : Ignition 2.20.0 Mar 17 17:55:25.308355 ignition[937]: INFO : Stage: mount Mar 17 17:55:25.308355 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:55:25.308355 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:55:25.308355 ignition[937]: INFO : mount: mount passed Mar 17 17:55:25.308355 ignition[937]: INFO : Ignition finished successfully Mar 17 17:55:25.309201 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:55:25.315745 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:55:25.484732 systemd-networkd[791]: eth0: Gained IPv6LL Mar 17 17:55:26.027558 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:55:26.039792 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:55:26.049378 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (951) Mar 17 17:55:26.049429 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:55:26.049445 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:55:26.050374 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:55:26.053661 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:55:26.055190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
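The four "cut: ... No such file or directory" lines are expected on a first boot: /sysroot/etc starts out empty, and initrd-setup-root probes the account databases before seeding them. The probe is harmless; something along these lines (the exact invocation is an assumption, shown only to illustrate why the message appears):

  $ cut -d: -f1 /sysroot/etc/passwd
  cut: /sysroot/etc/passwd: No such file or directory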
Mar 17 17:55:26.078431 ignition[968]: INFO : Ignition 2.20.0 Mar 17 17:55:26.078431 ignition[968]: INFO : Stage: files Mar 17 17:55:26.080671 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:55:26.080671 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:55:26.080671 ignition[968]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:55:26.080671 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:55:26.080671 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:55:26.087574 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:55:26.087574 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:55:26.087574 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:55:26.087574 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 17:55:26.087574 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 17:55:26.083007 unknown[968]: wrote ssh authorized keys file for user: core Mar 17 17:55:26.124436 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 17:55:26.376848 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 17:55:26.376848 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 17:55:26.380831 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 17:55:26.846049 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 17:55:27.155500 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 17:55:27.155500 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:55:27.159667 ignition[968]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 17:55:27.159667 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Mar 17 17:55:27.451158 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 17:55:28.108123 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 17 17:55:28.108123 ignition[968]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 17 17:55:28.388028 ignition[968]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:55:28.411581 ignition[968]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:55:28.411581 ignition[968]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 17 17:55:28.411581 ignition[968]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 17 17:55:28.411581 ignition[968]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:55:28.411581 ignition[968]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 17:55:28.411581 ignition[968]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 17 17:55:28.411581 ignition[968]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 17:55:28.428037 ignition[968]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:55:28.432730 ignition[968]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 17:55:28.434670 ignition[968]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 17:55:28.434670 ignition[968]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 17 17:55:28.434670 ignition[968]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 17:55:28.434670 ignition[968]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:55:28.434670 ignition[968]: INFO : files: createResultFile: createFiles: op(13): 
[finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:55:28.434670 ignition[968]: INFO : files: files passed Mar 17 17:55:28.434670 ignition[968]: INFO : Ignition finished successfully Mar 17 17:55:28.435869 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:55:28.445861 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:55:28.447956 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:55:28.449830 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:55:28.449947 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:55:28.461391 initrd-setup-root-after-ignition[997]: grep: /sysroot/oem/oem-release: No such file or directory Mar 17 17:55:28.464047 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:55:28.464047 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:55:28.468348 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:55:28.470623 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:55:28.474422 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:55:28.484889 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:55:28.511041 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:55:28.523528 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:55:28.526447 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:55:28.544909 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:55:28.547073 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:55:28.558924 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:55:28.576047 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:55:28.597789 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:55:28.609577 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:55:28.611993 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:55:28.614376 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:55:28.617705 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:55:28.618765 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:55:28.621346 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:55:28.623469 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:55:28.625366 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:55:28.627646 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:55:28.630283 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:55:28.632645 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:55:28.634917 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
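Every operation in the files stage above maps to an entry in the merged Ignition config. A hedged reconstruction of the kind of config that would produce this sequence; the spec version, the ssh key, and the unit contents are placeholders, while the paths and URLs are the ones recorded in the log:

  $ cat > config.ign <<'EOF'
  {
    "ignition": { "version": "3.4.0" },
    "passwd": { "users": [ { "name": "core",
        "sshAuthorizedKeys": [ "ssh-ed25519 AAAA...placeholder" ] } ] },
    "storage": {
      "files": [
        { "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
          "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" } },
        { "path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
          "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw" } }
      ],
      "links": [
        { "path": "/etc/extensions/kubernetes.raw",
          "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" } ]
    },
    "systemd": { "units": [
      { "name": "prepare-helm.service", "enabled": true,
        "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n...placeholder..." },
      { "name": "coreos-metadata.service", "enabled": false } ] }
  }
  EOF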
Mar 17 17:55:28.637422 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:55:28.639529 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:55:28.641582 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:55:28.643261 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:55:28.644316 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:55:28.646719 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:55:28.648915 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:55:28.651288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:55:28.652273 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:55:28.654916 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:55:28.655970 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:55:28.658270 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:55:28.659375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:55:28.661788 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:55:28.663592 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:55:28.669700 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:55:28.672863 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:55:28.675073 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:55:28.677069 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:55:28.677974 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:55:28.679957 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:55:28.680865 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:55:28.682959 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:55:28.684348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:55:28.687087 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:55:28.688079 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:55:28.700827 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:55:28.709078 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:55:28.710067 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:55:28.710193 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:55:28.714805 ignition[1025]: INFO : Ignition 2.20.0 Mar 17 17:55:28.714805 ignition[1025]: INFO : Stage: umount Mar 17 17:55:28.714805 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:55:28.714805 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:55:28.714805 ignition[1025]: INFO : umount: umount passed Mar 17 17:55:28.714805 ignition[1025]: INFO : Ignition finished successfully Mar 17 17:55:28.714438 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:55:28.714591 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:55:28.724787 systemd[1]: ignition-mount.service: Deactivated successfully. 
Mar 17 17:55:28.724928 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:55:28.731208 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:55:28.732327 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:55:28.736101 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:55:28.738015 systemd[1]: Stopped target network.target - Network. Mar 17 17:55:28.739809 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:55:28.739874 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:55:28.742786 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:55:28.743712 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:55:28.745654 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:55:28.745708 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:55:28.748513 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:55:28.749475 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:55:28.751848 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:55:28.754067 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:55:28.756533 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:55:28.757833 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:55:28.759965 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:55:28.760983 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:55:28.765958 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:55:28.767667 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:55:28.769010 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:55:28.773246 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:55:28.776592 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:55:28.777605 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:55:28.779986 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:55:28.780057 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:55:28.792860 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:55:28.793007 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:55:28.793093 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:55:28.796131 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:55:28.796181 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:55:28.799572 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:55:28.799622 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:55:28.800553 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:55:28.800601 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:55:28.804671 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:55:28.806892 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Mar 17 17:55:28.806961 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:55:28.815748 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:55:28.815883 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:55:28.830426 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:55:28.830611 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:55:28.834461 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:55:28.835544 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:55:28.837719 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:55:28.837763 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:55:28.840713 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:55:28.840774 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:55:28.843832 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:55:28.874406 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:55:28.876693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:55:28.877748 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:55:28.889875 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:55:28.891017 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:55:28.891096 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:55:28.894475 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:55:28.894539 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:55:28.895766 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:55:28.895825 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:55:28.898120 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:55:28.898173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:55:28.903376 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 17 17:55:28.903441 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:55:28.903828 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:55:28.903941 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:55:28.906840 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:55:28.921805 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:55:28.960333 systemd[1]: Switching root. Mar 17 17:55:29.013411 systemd-journald[193]: Journal stopped Mar 17 17:55:30.402676 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
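With cleanup done, PID 1 pivots out of the initrd: switch root moves the mount tree to /sysroot, re-executes systemd from the real /usr, and journald is terminated so it can restart in the new root (the SIGTERM line). The service behind this is roughly equivalent to running:

  $ systemctl --no-block switch-root /sysroot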
Mar 17 17:55:30.402744 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:55:30.402758 kernel: SELinux: policy capability open_perms=1 Mar 17 17:55:30.402775 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:55:30.402786 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:55:30.402798 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:55:30.402809 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:55:30.402820 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:55:30.402833 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:55:30.402854 kernel: audit: type=1403 audit(1742234129.515:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:55:30.402870 systemd[1]: Successfully loaded SELinux policy in 42.532ms. Mar 17 17:55:30.402898 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.329ms. Mar 17 17:55:30.402912 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:55:30.402925 systemd[1]: Detected virtualization kvm. Mar 17 17:55:30.402943 systemd[1]: Detected architecture x86-64. Mar 17 17:55:30.402955 systemd[1]: Detected first boot. Mar 17 17:55:30.402967 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:55:30.402979 zram_generator::config[1070]: No configuration found. Mar 17 17:55:30.402995 kernel: Guest personality initialized and is inactive Mar 17 17:55:30.403006 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 17 17:55:30.403018 kernel: Initialized host personality Mar 17 17:55:30.403029 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:55:30.403041 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:55:30.403054 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:55:30.403066 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:55:30.403078 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:55:30.403090 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:55:30.403120 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:55:30.403139 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:55:30.403152 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:55:30.403171 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:55:30.403192 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:55:30.403214 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:55:30.403231 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:55:30.403243 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:55:30.403281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:55:30.403303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
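This is the first systemd invocation from the real root: it loads the SELinux policy, prints its compile-time feature set, and, because this is a first boot on KVM, derives the machine ID from the hypervisor-supplied VM UUID. The usual sources for those two detections, as a sketch:

  $ systemd-detect-virt
  kvm
  $ cat /sys/class/dmi/id/product_uuid   # basis for "Initializing machine ID from VM UUID"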
Mar 17 17:55:30.403327 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:55:30.403341 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:55:30.403353 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:55:30.403378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:55:30.403390 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:55:30.403402 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:55:30.403429 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:55:30.403448 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:55:30.403460 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:55:30.403472 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:55:30.403484 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:55:30.403507 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:55:30.403523 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:55:30.403543 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:55:30.403559 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:55:30.403586 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:55:30.403598 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:55:30.403610 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:55:30.403622 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:55:30.403648 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:55:30.403660 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:55:30.403675 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:55:30.403707 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:55:30.403723 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:55:30.403755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:55:30.403782 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:55:30.403795 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:55:30.403808 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:55:30.403820 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:55:30.403840 systemd[1]: Reached target machines.target - Containers. Mar 17 17:55:30.403867 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:55:30.403891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:55:30.403906 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 17 17:55:30.403924 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:55:30.403938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:55:30.403950 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:55:30.403962 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:55:30.403975 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:55:30.403987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:55:30.404000 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:55:30.404012 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:55:30.404028 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:55:30.404046 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:55:30.404059 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:55:30.404072 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:55:30.404084 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:55:30.404096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:55:30.404109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:55:30.404120 kernel: fuse: init (API version 7.39) Mar 17 17:55:30.404132 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:55:30.404147 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:55:30.404159 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:55:30.404171 kernel: loop: module loaded Mar 17 17:55:30.404200 systemd-journald[1134]: Collecting audit messages is disabled. Mar 17 17:55:30.404226 systemd-journald[1134]: Journal started Mar 17 17:55:30.404250 systemd-journald[1134]: Runtime Journal (/run/log/journal/8bfa87a769804691b335df6707fb04bb) is 6M, max 48.2M, 42.2M free. Mar 17 17:55:30.102564 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:55:30.117603 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:55:30.118178 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:55:30.428404 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:55:30.428434 systemd[1]: Stopped verity-setup.service. Mar 17 17:55:30.431680 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:55:30.435679 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:55:30.436864 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:55:30.439068 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:55:30.440322 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:55:30.442871 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
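The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs are instances of systemd's template unit, which simply runs modprobe on the instance name; the "fuse: init" and "loop: module loaded" kernel lines are their results. Simplified from the unit systemd ships:

  # modprobe@.service (simplified)
  [Unit]
  Description=Load Kernel Module %i
  [Service]
  Type=oneshot
  ExecStart=-/sbin/modprobe -abq %i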
Mar 17 17:55:30.444235 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:55:30.445550 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:55:30.446917 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:55:30.448570 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:55:30.448870 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:55:30.451641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:55:30.452126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:55:30.454085 kernel: ACPI: bus type drm_connector registered Mar 17 17:55:30.455443 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:55:30.455731 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:55:30.477379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:55:30.477608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:55:30.479161 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:55:30.479381 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:55:30.489791 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:55:30.490019 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:55:30.491500 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:55:30.492984 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:55:30.494672 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:55:30.496362 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:55:30.510801 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:55:30.527721 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:55:30.544524 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:55:30.546106 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:55:30.546154 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:55:30.548393 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 17:55:30.550928 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:55:30.553586 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:55:30.555015 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:55:30.557317 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:55:30.560724 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:55:30.562156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:55:30.564172 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Mar 17 17:55:30.565721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:55:30.568190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:55:30.576263 systemd-journald[1134]: Time spent on flushing to /var/log/journal/8bfa87a769804691b335df6707fb04bb is 27.135ms for 1061 entries. Mar 17 17:55:30.576263 systemd-journald[1134]: System Journal (/var/log/journal/8bfa87a769804691b335df6707fb04bb) is 8M, max 195.6M, 187.6M free. Mar 17 17:55:30.802234 systemd-journald[1134]: Received client request to flush runtime journal. Mar 17 17:55:30.802297 kernel: loop0: detected capacity change from 0 to 147912 Mar 17 17:55:30.802323 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:55:30.574775 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:55:30.577941 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:55:30.581816 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:55:30.587652 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:55:30.590040 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:55:30.606727 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:55:30.609218 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:55:30.613467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:55:30.630857 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Mar 17 17:55:30.630874 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Mar 17 17:55:30.644776 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:55:30.647923 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:55:30.671761 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:55:30.758937 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:55:30.762188 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:55:30.766927 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 17:55:30.785980 udevadm[1199]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:55:30.805729 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:55:30.808649 kernel: loop1: detected capacity change from 0 to 205544 Mar 17 17:55:30.845999 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:55:30.855805 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:55:30.871341 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Mar 17 17:55:30.871360 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Mar 17 17:55:30.877117 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 17 17:55:30.884652 kernel: loop2: detected capacity change from 0 to 138176 Mar 17 17:55:30.976695 kernel: loop3: detected capacity change from 0 to 147912 Mar 17 17:55:30.996658 kernel: loop4: detected capacity change from 0 to 205544 Mar 17 17:55:31.013646 kernel: loop5: detected capacity change from 0 to 138176 Mar 17 17:55:31.041821 (sd-merge)[1216]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:55:31.042445 (sd-merge)[1216]: Merged extensions into '/usr'. Mar 17 17:55:31.054593 systemd[1]: Reload requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:55:31.054609 systemd[1]: Reloading... Mar 17 17:55:31.410046 ldconfig[1184]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:55:31.411647 zram_generator::config[1242]: No configuration found. Mar 17 17:55:31.928605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:55:31.994190 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:55:31.994379 systemd[1]: Reloading finished in 939 ms. Mar 17 17:55:32.018109 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:55:32.026965 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:55:32.028639 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:55:32.030393 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:55:32.050063 systemd[1]: Starting ensure-sysext.service... Mar 17 17:55:32.052337 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:55:32.055257 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:55:32.068021 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:55:32.068040 systemd[1]: Reloading... Mar 17 17:55:32.075559 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:55:32.075865 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:55:32.076792 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:55:32.077069 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Mar 17 17:55:32.077149 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Mar 17 17:55:32.081447 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:55:32.081458 systemd-tmpfiles[1285]: Skipping /boot Mar 17 17:55:32.088990 systemd-udevd[1286]: Using default interface naming scheme 'v255'. Mar 17 17:55:32.094942 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:55:32.094958 systemd-tmpfiles[1285]: Skipping /boot Mar 17 17:55:32.139399 zram_generator::config[1327]: No configuration found. 
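The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr and /opt, after which the manager reloads its unit database (the 939 ms reload). The merged state can be inspected and re-applied with the stock tooling:

  $ systemd-sysext status
  $ systemd-sysext refresh   # re-merge after adding or removing images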
Mar 17 17:55:32.166661 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1320) Mar 17 17:55:32.273658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:55:32.364952 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 17:55:32.365272 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:55:32.369300 systemd[1]: Reloading finished in 300 ms. Mar 17 17:55:32.379492 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:55:32.389675 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 17 17:55:32.400822 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:55:32.405654 kernel: ACPI: button: Power Button [PWRF] Mar 17 17:55:32.410795 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 17 17:55:32.411084 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 17:55:32.411280 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 17:55:32.411504 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 17:55:32.443654 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 17 17:55:32.477385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:55:32.508454 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:55:32.514978 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:55:32.516710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:55:32.519531 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:55:32.524750 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:55:32.528875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:55:32.534085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:55:32.555321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:55:32.558438 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:55:32.559911 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:55:32.563047 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:55:32.570426 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:55:32.570780 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:55:32.575890 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:55:32.579868 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Mar 17 17:55:32.581216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:55:32.585615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:55:32.586393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:55:32.588863 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:55:32.590781 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:55:32.597098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:55:32.597960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:55:32.601460 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:55:32.601722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:55:32.603623 kernel: kvm_amd: TSC scaling supported Mar 17 17:55:32.603723 kernel: kvm_amd: Nested Virtualization enabled Mar 17 17:55:32.603747 kernel: kvm_amd: Nested Paging enabled Mar 17 17:55:32.603763 kernel: kvm_amd: LBR virtualization supported Mar 17 17:55:32.603775 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 17 17:55:32.604079 kernel: kvm_amd: Virtual GIF supported Mar 17 17:55:32.607388 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:55:32.614416 augenrules[1414]: No rules Mar 17 17:55:32.615031 systemd[1]: Finished ensure-sysext.service. Mar 17 17:55:32.616779 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:55:32.617294 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:55:32.625561 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:55:32.634658 kernel: EDAC MC: Ver: 3.0.0 Mar 17 17:55:32.640931 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:55:32.643057 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:55:32.654569 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:55:32.654703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:55:32.665816 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:55:32.668227 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:55:32.671099 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:55:32.673271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:55:32.674534 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:55:32.675025 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:55:32.679768 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:55:32.682982 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:55:32.701062 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
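The kvm_amd lines mean the kvm module loaded inside this guest with nested virtualization, nested paging, TSC scaling and LBR virtualization available, so this VM could itself host guests. A quick check of that capability via the module parameter:

  $ cat /sys/module/kvm_amd/parameters/nested
  1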
Mar 17 17:55:32.720494 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:55:32.750983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:55:32.752830 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:55:32.758332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:55:32.761744 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:55:32.773710 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:55:32.809052 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:55:32.825775 systemd-networkd[1404]: lo: Link UP Mar 17 17:55:32.825785 systemd-networkd[1404]: lo: Gained carrier Mar 17 17:55:32.827505 systemd-networkd[1404]: Enumeration completed Mar 17 17:55:32.827617 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:55:32.828917 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:55:32.829238 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:55:32.829248 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:55:32.830326 systemd-networkd[1404]: eth0: Link UP Mar 17 17:55:32.830334 systemd-networkd[1404]: eth0: Gained carrier Mar 17 17:55:32.830347 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:55:32.830396 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:55:32.832847 systemd-resolved[1408]: Positive Trust Anchors: Mar 17 17:55:32.832862 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:55:32.832895 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:55:32.836470 systemd-resolved[1408]: Defaulting to hostname 'linux'. Mar 17 17:55:32.840801 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 17 17:55:32.843853 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:55:32.845067 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:55:32.845275 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:55:32.846919 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. Mar 17 17:55:32.847292 systemd[1]: Reached target network.target - Network. Mar 17 17:55:33.870702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:55:33.870821 systemd-resolved[1408]: Clock change detected. Flushing caches. 
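Note on the eth0 entries above: networkd matched the catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" remark) and requested DHCP, which is how eth0 obtained 10.0.0.132/16 from 10.0.0.1. A rough sketch of the shape such a catch-all unit takes; the exact Flatcar-shipped contents are an assumption, not quoted from this image:

    [Match]
    # match any interface not claimed by a more specific .network file
    Name=*

    [Network]
    DHCP=yes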
Mar 17 17:55:33.870903 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:55:33.870949 systemd-timesyncd[1430]: Initial clock synchronization to Mon 2025-03-17 17:55:33.870687 UTC. Mar 17 17:55:33.871976 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:55:33.873193 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:55:33.874565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:55:33.876096 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:55:33.877378 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:55:33.878672 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:55:33.879945 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:55:33.879974 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:55:33.880949 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:55:33.882536 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:55:33.885461 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:55:33.889057 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:55:33.890504 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:55:33.891814 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:55:33.895872 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:55:33.897328 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:55:33.899552 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:55:33.901047 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:55:33.903317 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:55:33.904337 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:55:33.905376 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:55:33.905410 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:55:33.906540 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:55:33.908712 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:55:33.910697 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:55:33.913009 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:55:33.914404 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:55:33.915956 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:55:33.918434 jq[1461]: false Mar 17 17:55:33.920826 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:55:33.924151 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 17 17:55:33.927137 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:55:33.934922 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:55:33.937244 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:55:33.937756 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:55:33.941028 extend-filesystems[1462]: Found loop3 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found loop4 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found loop5 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found sr0 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found vda Mar 17 17:55:33.941028 extend-filesystems[1462]: Found vda1 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found vda2 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found vda3 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found usr Mar 17 17:55:33.941028 extend-filesystems[1462]: Found vda4 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found vda6 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found vda7 Mar 17 17:55:33.941028 extend-filesystems[1462]: Found vda9 Mar 17 17:55:33.941028 extend-filesystems[1462]: Checking size of /dev/vda9 Mar 17 17:55:33.972971 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1343) Mar 17 17:55:33.947954 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:55:33.941534 dbus-daemon[1460]: [system] SELinux support is enabled Mar 17 17:55:33.976316 extend-filesystems[1462]: Resized partition /dev/vda9 Mar 17 17:55:33.950870 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:55:33.979457 update_engine[1474]: I20250317 17:55:33.964427 1474 main.cc:92] Flatcar Update Engine starting Mar 17 17:55:33.979457 update_engine[1474]: I20250317 17:55:33.965704 1474 update_check_scheduler.cc:74] Next update check in 9m18s Mar 17 17:55:33.979736 extend-filesystems[1484]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:55:33.955400 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:55:33.969584 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:55:33.969858 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:55:33.985291 jq[1480]: true Mar 17 17:55:33.970217 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:55:33.970471 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:55:33.974185 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:55:33.974438 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:55:33.992798 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:55:34.007175 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:55:34.009100 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:55:34.014806 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 17 17:55:34.014839 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:55:34.015621 jq[1488]: true Mar 17 17:55:34.016152 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:55:34.016173 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:55:34.024239 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:55:34.064978 tar[1485]: linux-amd64/helm Mar 17 17:55:34.085869 systemd-logind[1470]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 17:55:34.085899 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:55:34.086647 systemd-logind[1470]: New seat seat0. Mar 17 17:55:34.088064 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:55:34.091147 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:55:34.146108 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:55:34.168806 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:55:34.173172 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:55:34.184992 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:55:34.195353 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:55:34.195636 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:55:34.200801 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:55:34.231174 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:55:34.253167 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:55:34.255663 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:55:34.267980 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:55:34.545961 extend-filesystems[1484]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:55:34.545961 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:55:34.545961 extend-filesystems[1484]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:55:34.550213 containerd[1486]: time="2025-03-17T17:55:34.545662013Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:55:34.550411 extend-filesystems[1462]: Resized filesystem in /dev/vda9 Mar 17 17:55:34.550760 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:55:34.551076 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:55:34.574961 containerd[1486]: time="2025-03-17T17:55:34.574795349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:34.576977 containerd[1486]: time="2025-03-17T17:55:34.576941704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:55:34.576977 containerd[1486]: time="2025-03-17T17:55:34.576973994Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:55:34.577049 containerd[1486]: time="2025-03-17T17:55:34.576993882Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:55:34.577213 containerd[1486]: time="2025-03-17T17:55:34.577182866Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:55:34.577213 containerd[1486]: time="2025-03-17T17:55:34.577205388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:34.577363 containerd[1486]: time="2025-03-17T17:55:34.577276181Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:55:34.577363 containerd[1486]: time="2025-03-17T17:55:34.577288344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:34.577573 containerd[1486]: time="2025-03-17T17:55:34.577552209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:55:34.577601 containerd[1486]: time="2025-03-17T17:55:34.577571224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:34.577601 containerd[1486]: time="2025-03-17T17:55:34.577584950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:55:34.577601 containerd[1486]: time="2025-03-17T17:55:34.577594468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:34.577710 containerd[1486]: time="2025-03-17T17:55:34.577692903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:34.577979 containerd[1486]: time="2025-03-17T17:55:34.577951979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:55:34.578136 containerd[1486]: time="2025-03-17T17:55:34.578118431Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:55:34.578158 containerd[1486]: time="2025-03-17T17:55:34.578134611Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:55:34.578243 containerd[1486]: time="2025-03-17T17:55:34.578227615Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:55:34.578298 containerd[1486]: time="2025-03-17T17:55:34.578284212Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:55:34.793030 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:55:34.794956 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:55:34.797086 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
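Note on the extend-filesystems entries recorded a moment earlier: they describe an on-line ext4 grow of the mounted root filesystem on /dev/vda9, from 553472 to 1864699 4k blocks. Sketched as a manual invocation, with the device name taken from the log, the equivalent step would be:

    # grow the mounted ext4 filesystem to fill its (already enlarged) partition
    resize2fs /dev/vda9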
Mar 17 17:55:34.840972 tar[1485]: linux-amd64/LICENSE Mar 17 17:55:34.841063 tar[1485]: linux-amd64/README.md Mar 17 17:55:34.857631 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:55:34.890831 containerd[1486]: time="2025-03-17T17:55:34.890762340Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:55:34.890883 containerd[1486]: time="2025-03-17T17:55:34.890849283Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:55:34.890883 containerd[1486]: time="2025-03-17T17:55:34.890867557Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:55:34.890922 containerd[1486]: time="2025-03-17T17:55:34.890902262Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:55:34.890922 containerd[1486]: time="2025-03-17T17:55:34.890917180Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:55:34.891108 containerd[1486]: time="2025-03-17T17:55:34.891077941Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:55:34.891425 containerd[1486]: time="2025-03-17T17:55:34.891368066Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:55:34.891592 containerd[1486]: time="2025-03-17T17:55:34.891563923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:55:34.891592 containerd[1486]: time="2025-03-17T17:55:34.891586064Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:55:34.891635 containerd[1486]: time="2025-03-17T17:55:34.891602575Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:55:34.891635 containerd[1486]: time="2025-03-17T17:55:34.891618004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:55:34.891688 containerd[1486]: time="2025-03-17T17:55:34.891633654Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:55:34.891688 containerd[1486]: time="2025-03-17T17:55:34.891646838Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:55:34.891688 containerd[1486]: time="2025-03-17T17:55:34.891661005Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:55:34.891688 containerd[1486]: time="2025-03-17T17:55:34.891674170Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:55:34.891688 containerd[1486]: time="2025-03-17T17:55:34.891687064Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:55:34.891796 containerd[1486]: time="2025-03-17T17:55:34.891698676Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:55:34.891796 containerd[1486]: time="2025-03-17T17:55:34.891710878Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:55:34.891796 containerd[1486]: time="2025-03-17T17:55:34.891730786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891796 containerd[1486]: time="2025-03-17T17:55:34.891743560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891796 containerd[1486]: time="2025-03-17T17:55:34.891755452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891796 containerd[1486]: time="2025-03-17T17:55:34.891766783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891796 containerd[1486]: time="2025-03-17T17:55:34.891794275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891807069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891819883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891832667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891844379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891859066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891870378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891882140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891894603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891908239Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891926793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891938585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.891950 containerd[1486]: time="2025-03-17T17:55:34.891949416Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:55:34.892158 containerd[1486]: time="2025-03-17T17:55:34.892008256Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:55:34.892158 containerd[1486]: time="2025-03-17T17:55:34.892026110Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:55:34.892158 containerd[1486]: time="2025-03-17T17:55:34.892036920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:55:34.892158 containerd[1486]: time="2025-03-17T17:55:34.892047991Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:55:34.892158 containerd[1486]: time="2025-03-17T17:55:34.892057408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.892158 containerd[1486]: time="2025-03-17T17:55:34.892068699Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:55:34.892158 containerd[1486]: time="2025-03-17T17:55:34.892078688Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:55:34.892158 containerd[1486]: time="2025-03-17T17:55:34.892089679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:55:34.892421 containerd[1486]: time="2025-03-17T17:55:34.892367089Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:55:34.892421 containerd[1486]: time="2025-03-17T17:55:34.892412224Z" level=info msg="Connect containerd service" Mar 17 17:55:34.892570 containerd[1486]: time="2025-03-17T17:55:34.892438613Z" level=info msg="using legacy CRI server" Mar 17 17:55:34.892570 containerd[1486]: time="2025-03-17T17:55:34.892445576Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:55:34.892570 containerd[1486]: time="2025-03-17T17:55:34.892552607Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:55:34.893141 containerd[1486]: time="2025-03-17T17:55:34.893097629Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:55:34.893303 containerd[1486]: time="2025-03-17T17:55:34.893251578Z" level=info msg="Start subscribing containerd event" Mar 17 17:55:34.893353 containerd[1486]: time="2025-03-17T17:55:34.893337048Z" level=info msg="Start recovering state" Mar 17 17:55:34.893381 containerd[1486]: time="2025-03-17T17:55:34.893358779Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:55:34.893617 containerd[1486]: time="2025-03-17T17:55:34.893407911Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:55:34.893617 containerd[1486]: time="2025-03-17T17:55:34.893427418Z" level=info msg="Start event monitor" Mar 17 17:55:34.893617 containerd[1486]: time="2025-03-17T17:55:34.893463295Z" level=info msg="Start snapshots syncer" Mar 17 17:55:34.893617 containerd[1486]: time="2025-03-17T17:55:34.893476620Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:55:34.893617 containerd[1486]: time="2025-03-17T17:55:34.893484354Z" level=info msg="Start streaming server" Mar 17 17:55:34.893617 containerd[1486]: time="2025-03-17T17:55:34.893564685Z" level=info msg="containerd successfully booted in 0.349262s" Mar 17 17:55:34.893670 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:55:35.850962 systemd-networkd[1404]: eth0: Gained IPv6LL Mar 17 17:55:35.854269 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:55:35.856128 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:55:35.870992 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:55:35.873362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:35.875636 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:55:35.896156 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:55:35.896511 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:55:35.898174 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:55:35.901640 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:55:36.733391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
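Note on the CRI config dump above: the runc runtime is registered as Type:io.containerd.runc.v2 with Options:map[SystemdCgroup:true]. A minimal /etc/containerd/config.toml sketch that yields that runtime entry is below; this is an illustrative reconstruction, not the file actually shipped on this image:

    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      # delegate container cgroup management to systemd, matching SystemdCgroup:true above
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true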
Mar 17 17:55:36.735344 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:55:36.737008 systemd[1]: Startup finished in 791ms (kernel) + 7.804s (initrd) + 6.240s (userspace) = 14.837s. Mar 17 17:55:36.740368 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:55:37.422161 kubelet[1572]: E0317 17:55:37.422108 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:55:37.426336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:55:37.426544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:55:37.427028 systemd[1]: kubelet.service: Consumed 1.377s CPU time, 236.5M memory peak. Mar 17 17:55:38.932643 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:55:38.942031 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:46610.service - OpenSSH per-connection server daemon (10.0.0.1:46610). Mar 17 17:55:38.993702 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 46610 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:55:38.995631 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:39.007186 systemd-logind[1470]: New session 1 of user core. Mar 17 17:55:39.008686 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:55:39.022034 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:55:39.033649 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:55:39.043027 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:55:39.045959 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:55:39.048297 systemd-logind[1470]: New session c1 of user core. Mar 17 17:55:39.190716 systemd[1589]: Queued start job for default target default.target. Mar 17 17:55:39.203113 systemd[1589]: Created slice app.slice - User Application Slice. Mar 17 17:55:39.203138 systemd[1589]: Reached target paths.target - Paths. Mar 17 17:55:39.203179 systemd[1589]: Reached target timers.target - Timers. Mar 17 17:55:39.204922 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:55:39.217617 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:55:39.217880 systemd[1589]: Reached target sockets.target - Sockets. Mar 17 17:55:39.217928 systemd[1589]: Reached target basic.target - Basic System. Mar 17 17:55:39.218003 systemd[1589]: Reached target default.target - Main User Target. Mar 17 17:55:39.218626 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:55:39.218727 systemd[1589]: Startup finished in 163ms. Mar 17 17:55:39.226883 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:55:39.299133 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:46626.service - OpenSSH per-connection server daemon (10.0.0.1:46626). 
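Note on the kubelet failure at 17:55:37.422 above: this is the expected pre-bootstrap state, since /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join and the node has not been joined yet. For orientation, a minimal KubeletConfiguration of the kind kubeadm generates might look like the sketch below; the field values are illustrative assumptions, not recovered from this host:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # match the SystemdCgroup = true runc configuration shown earlier
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock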
Mar 17 17:55:39.340108 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 46626 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:55:39.341869 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:39.347386 systemd-logind[1470]: New session 2 of user core. Mar 17 17:55:39.360942 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:55:39.413700 sshd[1602]: Connection closed by 10.0.0.1 port 46626 Mar 17 17:55:39.414009 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:39.428675 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:46626.service: Deactivated successfully. Mar 17 17:55:39.430605 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:55:39.432349 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:55:39.433722 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:46642.service - OpenSSH per-connection server daemon (10.0.0.1:46642). Mar 17 17:55:39.434612 systemd-logind[1470]: Removed session 2. Mar 17 17:55:39.477386 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 46642 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:55:39.478768 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:39.483352 systemd-logind[1470]: New session 3 of user core. Mar 17 17:55:39.493061 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:55:39.541794 sshd[1610]: Connection closed by 10.0.0.1 port 46642 Mar 17 17:55:39.542136 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:39.557947 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:46642.service: Deactivated successfully. Mar 17 17:55:39.560090 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:55:39.561923 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:55:39.573213 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:46648.service - OpenSSH per-connection server daemon (10.0.0.1:46648). Mar 17 17:55:39.574275 systemd-logind[1470]: Removed session 3. Mar 17 17:55:39.611247 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 46648 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:55:39.612607 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:39.617216 systemd-logind[1470]: New session 4 of user core. Mar 17 17:55:39.630966 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:55:39.684848 sshd[1618]: Connection closed by 10.0.0.1 port 46648 Mar 17 17:55:39.685222 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:39.697560 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:46648.service: Deactivated successfully. Mar 17 17:55:39.700072 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:55:39.701709 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:55:39.709071 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:46650.service - OpenSSH per-connection server daemon (10.0.0.1:46650). Mar 17 17:55:39.710498 systemd-logind[1470]: Removed session 4. 
Mar 17 17:55:39.775451 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 46650 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:55:39.777018 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:39.781023 systemd-logind[1470]: New session 5 of user core. Mar 17 17:55:39.795921 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:55:40.076155 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:55:40.076519 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:55:40.095449 sudo[1627]: pam_unix(sudo:session): session closed for user root Mar 17 17:55:40.096955 sshd[1626]: Connection closed by 10.0.0.1 port 46650 Mar 17 17:55:40.097395 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:40.124016 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:46650.service: Deactivated successfully. Mar 17 17:55:40.126278 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:55:40.128272 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:55:40.145123 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:46662.service - OpenSSH per-connection server daemon (10.0.0.1:46662). Mar 17 17:55:40.146187 systemd-logind[1470]: Removed session 5. Mar 17 17:55:40.183840 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 46662 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:55:40.185378 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:40.189635 systemd-logind[1470]: New session 6 of user core. Mar 17 17:55:40.198902 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:55:40.253935 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:55:40.254329 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:55:40.258954 sudo[1637]: pam_unix(sudo:session): session closed for user root Mar 17 17:55:40.266211 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:55:40.266550 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:55:40.286056 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:55:40.315934 augenrules[1659]: No rules Mar 17 17:55:40.316888 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:55:40.317183 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:55:40.318500 sudo[1636]: pam_unix(sudo:session): session closed for user root Mar 17 17:55:40.320480 sshd[1635]: Connection closed by 10.0.0.1 port 46662 Mar 17 17:55:40.320830 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Mar 17 17:55:40.333864 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:46662.service: Deactivated successfully. Mar 17 17:55:40.336000 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:55:40.337482 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:55:40.350020 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:46668.service - OpenSSH per-connection server daemon (10.0.0.1:46668). Mar 17 17:55:40.351744 systemd-logind[1470]: Removed session 6. 
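Note on the augenrules "No rules" message above: the preceding sudo session removed the only rule files under /etc/audit/rules.d/ (80-selinux.rules and 99-default.rules) before restarting audit-rules.service, leaving nothing for augenrules to compile. For reference, rules.d fragments use standard auditctl syntax; the file name and watch target below are hypothetical examples, not files present on this system:

    # /etc/audit/rules.d/10-example.rules (hypothetical)
    -D
    -b 8192
    # watch sshd_config for writes and attribute changes, tagged with key "sshd_config"
    -w /etc/ssh/sshd_config -p wa -k sshd_config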
Mar 17 17:55:40.390215 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 46668 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:55:40.391623 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:55:40.396137 systemd-logind[1470]: New session 7 of user core. Mar 17 17:55:40.405900 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:55:40.460648 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:55:40.461050 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:55:41.158058 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:55:41.158510 (dockerd)[1692]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:55:41.811276 dockerd[1692]: time="2025-03-17T17:55:41.811169513Z" level=info msg="Starting up" Mar 17 17:55:42.228621 dockerd[1692]: time="2025-03-17T17:55:42.228481202Z" level=info msg="Loading containers: start." Mar 17 17:55:42.401814 kernel: Initializing XFRM netlink socket Mar 17 17:55:42.489050 systemd-networkd[1404]: docker0: Link UP Mar 17 17:55:42.542548 dockerd[1692]: time="2025-03-17T17:55:42.542493236Z" level=info msg="Loading containers: done." Mar 17 17:55:42.557199 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck451060449-merged.mount: Deactivated successfully. Mar 17 17:55:42.559588 dockerd[1692]: time="2025-03-17T17:55:42.559543271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:55:42.559686 dockerd[1692]: time="2025-03-17T17:55:42.559666592Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:55:42.559845 dockerd[1692]: time="2025-03-17T17:55:42.559820201Z" level=info msg="Daemon has completed initialization" Mar 17 17:55:42.598447 dockerd[1692]: time="2025-03-17T17:55:42.598338576Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:55:42.598562 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:55:43.291765 containerd[1486]: time="2025-03-17T17:55:43.291678276Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 17:55:43.934430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447953209.mount: Deactivated successfully. 
Mar 17 17:55:45.110318 containerd[1486]: time="2025-03-17T17:55:45.110241010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:45.111879 containerd[1486]: time="2025-03-17T17:55:45.111798020Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959268" Mar 17 17:55:45.113019 containerd[1486]: time="2025-03-17T17:55:45.112983373Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:45.116515 containerd[1486]: time="2025-03-17T17:55:45.116478528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:45.117547 containerd[1486]: time="2025-03-17T17:55:45.117512457Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 1.825779608s" Mar 17 17:55:45.117607 containerd[1486]: time="2025-03-17T17:55:45.117549766Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 17 17:55:45.119040 containerd[1486]: time="2025-03-17T17:55:45.119017289Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 17:55:46.489864 containerd[1486]: time="2025-03-17T17:55:46.489804614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:46.490741 containerd[1486]: time="2025-03-17T17:55:46.490664827Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713776" Mar 17 17:55:46.492039 containerd[1486]: time="2025-03-17T17:55:46.491959185Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:46.495224 containerd[1486]: time="2025-03-17T17:55:46.495167752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:46.496303 containerd[1486]: time="2025-03-17T17:55:46.496263968Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 1.377216212s" Mar 17 17:55:46.496303 containerd[1486]: time="2025-03-17T17:55:46.496293203Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\""
Mar 17 17:55:46.496753 containerd[1486]: time="2025-03-17T17:55:46.496732466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 17:55:47.677022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:55:47.684007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:47.869690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:47.874736 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:55:48.852386 kubelet[1961]: E0317 17:55:48.852270 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:55:48.859024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:55:48.859231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:55:48.859651 systemd[1]: kubelet.service: Consumed 299ms CPU time, 98.2M memory peak. Mar 17 17:55:48.868291 containerd[1486]: time="2025-03-17T17:55:48.868221675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:48.881733 containerd[1486]: time="2025-03-17T17:55:48.881660558Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780368" Mar 17 17:55:48.908321 containerd[1486]: time="2025-03-17T17:55:48.908201592Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:48.927728 containerd[1486]: time="2025-03-17T17:55:48.927658961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:48.929339 containerd[1486]: time="2025-03-17T17:55:48.929282807Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 2.432405208s" Mar 17 17:55:48.929339 containerd[1486]: time="2025-03-17T17:55:48.929325907Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\"" Mar 17 17:55:48.929978 containerd[1486]: time="2025-03-17T17:55:48.929900896Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 17:55:50.106808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2052388264.mount: Deactivated successfully.
Mar 17 17:55:50.712516 containerd[1486]: time="2025-03-17T17:55:50.712441966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:50.713292 containerd[1486]: time="2025-03-17T17:55:50.713255212Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354630" Mar 17 17:55:50.714500 containerd[1486]: time="2025-03-17T17:55:50.714468317Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:50.716680 containerd[1486]: time="2025-03-17T17:55:50.716644388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:50.717428 containerd[1486]: time="2025-03-17T17:55:50.717369267Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 1.787422936s" Mar 17 17:55:50.717428 containerd[1486]: time="2025-03-17T17:55:50.717422747Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 17 17:55:50.718050 containerd[1486]: time="2025-03-17T17:55:50.718019366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:55:51.475822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088316515.mount: Deactivated successfully. 
Mar 17 17:55:52.157332 containerd[1486]: time="2025-03-17T17:55:52.157268171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:52.157977 containerd[1486]: time="2025-03-17T17:55:52.157914413Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 17 17:55:52.159081 containerd[1486]: time="2025-03-17T17:55:52.159056535Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:52.162061 containerd[1486]: time="2025-03-17T17:55:52.162027537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:52.163293 containerd[1486]: time="2025-03-17T17:55:52.163264847Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.445206318s" Mar 17 17:55:52.163293 containerd[1486]: time="2025-03-17T17:55:52.163291227Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 17:55:52.163810 containerd[1486]: time="2025-03-17T17:55:52.163770546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:55:52.666507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22867752.mount: Deactivated successfully. 
Mar 17 17:55:52.672173 containerd[1486]: time="2025-03-17T17:55:52.672113991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:52.672930 containerd[1486]: time="2025-03-17T17:55:52.672858748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 17 17:55:52.674146 containerd[1486]: time="2025-03-17T17:55:52.674117218Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:52.676405 containerd[1486]: time="2025-03-17T17:55:52.676366516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:52.677236 containerd[1486]: time="2025-03-17T17:55:52.677192986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 513.376584ms" Mar 17 17:55:52.677295 containerd[1486]: time="2025-03-17T17:55:52.677236818Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 17:55:52.677740 containerd[1486]: time="2025-03-17T17:55:52.677718752Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 17:55:53.190069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834789457.mount: Deactivated successfully. Mar 17 17:55:55.415826 containerd[1486]: time="2025-03-17T17:55:55.415738246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:55.416540 containerd[1486]: time="2025-03-17T17:55:55.416509091Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Mar 17 17:55:55.417754 containerd[1486]: time="2025-03-17T17:55:55.417727296Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:55.420765 containerd[1486]: time="2025-03-17T17:55:55.420714488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:55:55.421879 containerd[1486]: time="2025-03-17T17:55:55.421822506Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.744076203s" Mar 17 17:55:55.421879 containerd[1486]: time="2025-03-17T17:55:55.421852041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 17 17:55:57.689799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:55:57.690068 systemd[1]: kubelet.service: Consumed 299ms CPU time, 98.2M memory peak. Mar 17 17:55:57.702044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:57.725640 systemd[1]: Reload requested from client PID 2108 ('systemctl') (unit session-7.scope)... Mar 17 17:55:57.725662 systemd[1]: Reloading... Mar 17 17:55:57.821836 zram_generator::config[2155]: No configuration found. Mar 17 17:55:58.093537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:55:58.197631 systemd[1]: Reloading finished in 471 ms. Mar 17 17:55:58.251535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:58.255439 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:55:58.256576 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:58.256962 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:55:58.257220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:58.257255 systemd[1]: kubelet.service: Consumed 131ms CPU time, 83.5M memory peak. Mar 17 17:55:58.259930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:55:58.402997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:55:58.407844 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:55:58.445575 kubelet[2203]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:55:58.445575 kubelet[2203]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:55:58.445575 kubelet[2203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
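[Annotation] Every kubelet start above prints the same three deprecation warnings: --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are meant to move into the kubelet config file. That file is a versioned API object which the kubelet accepts as JSON as well as YAML, so a stdlib-only sketch can emit a minimal one. Field names below are from the kubelet.config.k8s.io/v1beta1 schema as an assumption to verify against your kubelet release; containerRuntimeEndpoint in particular only exists as a config field in newer versions.

    // Sketch: emit a minimal kubelet config file replacing two of the
    // deprecated flags warned about above. Field names assumed from the
    // kubelet.config.k8s.io/v1beta1 schema; verify against your version.
    package main

    import (
    	"encoding/json"
    	"os"
    )

    type kubeletConfig struct {
    	APIVersion               string `json:"apiVersion"`
    	Kind                     string `json:"kind"`
    	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint,omitempty"`
    	StaticPodPath            string `json:"staticPodPath,omitempty"`
    }

    func main() {
    	cfg := kubeletConfig{
    		APIVersion:               "kubelet.config.k8s.io/v1beta1",
    		Kind:                     "KubeletConfiguration",
    		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
    		StaticPodPath:            "/etc/kubernetes/manifests", // matches "Adding static pod path" below
    	}
    	enc := json.NewEncoder(os.Stdout)
    	enc.SetIndent("", "  ")
    	enc.Encode(cfg)
    }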
Mar 17 17:55:58.446740 kubelet[2203]: I0317 17:55:58.446664 2203 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:55:58.781516 kubelet[2203]: I0317 17:55:58.781390 2203 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:55:58.781516 kubelet[2203]: I0317 17:55:58.781427 2203 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:55:58.781810 kubelet[2203]: I0317 17:55:58.781665 2203 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:55:58.800464 kubelet[2203]: I0317 17:55:58.800416 2203 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:55:58.800631 kubelet[2203]: E0317 17:55:58.800535 2203 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:55:58.807622 kubelet[2203]: E0317 17:55:58.807568 2203 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:55:58.807622 kubelet[2203]: I0317 17:55:58.807617 2203 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:55:58.813995 kubelet[2203]: I0317 17:55:58.813965 2203 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:55:58.814931 kubelet[2203]: I0317 17:55:58.814905 2203 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:55:58.815111 kubelet[2203]: I0317 17:55:58.815069 2203 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:55:58.815294 kubelet[2203]: I0317 17:55:58.815101 2203 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:55:58.815294 kubelet[2203]: I0317 17:55:58.815294 2203 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:55:58.815424 kubelet[2203]: I0317 17:55:58.815304 2203 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:55:58.815424 kubelet[2203]: I0317 17:55:58.815423 2203 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:55:58.816810 kubelet[2203]: I0317 17:55:58.816770 2203 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:55:58.816810 kubelet[2203]: I0317 17:55:58.816803 2203 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:55:58.816862 kubelet[2203]: I0317 17:55:58.816840 2203 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:55:58.816862 kubelet[2203]: I0317 17:55:58.816856 2203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:55:58.820603 kubelet[2203]: I0317 17:55:58.820568 2203 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:55:58.822506 kubelet[2203]: I0317 17:55:58.822460 2203 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:55:58.822862 kubelet[2203]: W0317 17:55:58.822802 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.132:6443: connect: connection refused Mar 17 17:55:58.822901 kubelet[2203]: E0317 17:55:58.822864 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:55:58.823577 kubelet[2203]: W0317 17:55:58.823548 2203 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:55:58.824376 kubelet[2203]: I0317 17:55:58.824212 2203 server.go:1269] "Started kubelet" Mar 17 17:55:58.824376 kubelet[2203]: W0317 17:55:58.824204 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Mar 17 17:55:58.824376 kubelet[2203]: E0317 17:55:58.824260 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:55:58.824376 kubelet[2203]: I0317 17:55:58.824303 2203 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:55:58.824945 kubelet[2203]: I0317 17:55:58.824894 2203 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:55:58.825220 kubelet[2203]: I0317 17:55:58.825195 2203 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:55:58.825220 kubelet[2203]: I0317 17:55:58.825199 2203 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:55:58.828208 kubelet[2203]: I0317 17:55:58.828179 2203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:55:58.829592 kubelet[2203]: I0317 17:55:58.828748 2203 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:55:58.829592 kubelet[2203]: I0317 17:55:58.829541 2203 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:55:58.829681 kubelet[2203]: E0317 17:55:58.829655 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:55:58.830345 kubelet[2203]: I0317 17:55:58.830324 2203 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:55:58.830411 kubelet[2203]: I0317 17:55:58.830386 2203 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:55:58.830411 kubelet[2203]: I0317 17:55:58.830401 2203 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:55:58.830513 kubelet[2203]: I0317 17:55:58.830489 2203 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:55:58.830800 kubelet[2203]: W0317 17:55:58.830748 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Mar 17 17:55:58.830838 kubelet[2203]: E0317 17:55:58.830814 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:55:58.830894 kubelet[2203]: E0317 17:55:58.830862 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" Mar 17 17:55:58.832002 kubelet[2203]: E0317 17:55:58.831982 2203 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:55:58.832343 kubelet[2203]: I0317 17:55:58.832328 2203 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:55:58.833810 kubelet[2203]: E0317 17:55:58.831421 2203 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da8bc98447dec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:55:58.824193516 +0000 UTC m=+0.412286535,LastTimestamp:2025-03-17 17:55:58.824193516 +0000 UTC m=+0.412286535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:55:58.848118 kubelet[2203]: I0317 17:55:58.848077 2203 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:55:58.848118 kubelet[2203]: I0317 17:55:58.848104 2203 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:55:58.848118 kubelet[2203]: I0317 17:55:58.848126 2203 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:55:58.850770 kubelet[2203]: I0317 17:55:58.850725 2203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:55:58.852273 kubelet[2203]: I0317 17:55:58.852236 2203 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:55:58.852320 kubelet[2203]: I0317 17:55:58.852293 2203 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:55:58.852320 kubelet[2203]: I0317 17:55:58.852319 2203 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:55:58.852531 kubelet[2203]: E0317 17:55:58.852377 2203 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:55:58.853269 kubelet[2203]: W0317 17:55:58.853210 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Mar 17 17:55:58.853324 kubelet[2203]: E0317 17:55:58.853278 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:55:58.930764 kubelet[2203]: E0317 17:55:58.930717 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:55:58.953053 kubelet[2203]: E0317 17:55:58.952988 2203 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:55:59.031331 kubelet[2203]: E0317 17:55:59.031272 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:55:59.031608 kubelet[2203]: E0317 17:55:59.031519 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" Mar 17 17:55:59.131817 kubelet[2203]: E0317 17:55:59.131764 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:55:59.153936 kubelet[2203]: E0317 17:55:59.153907 2203 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:55:59.172575 kubelet[2203]: I0317 17:55:59.172544 2203 policy_none.go:49] "None policy: Start" Mar 17 17:55:59.173369 kubelet[2203]: I0317 17:55:59.173338 2203 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:55:59.173369 kubelet[2203]: I0317 17:55:59.173360 2203 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:55:59.180657 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:55:59.198740 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:55:59.216432 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 17 17:55:59.217515 kubelet[2203]: I0317 17:55:59.217469 2203 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:55:59.217744 kubelet[2203]: I0317 17:55:59.217716 2203 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:55:59.217808 kubelet[2203]: I0317 17:55:59.217738 2203 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:55:59.218110 kubelet[2203]: I0317 17:55:59.218080 2203 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:55:59.219402 kubelet[2203]: E0317 17:55:59.219360 2203 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:55:59.320944 kubelet[2203]: I0317 17:55:59.320768 2203 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:55:59.321224 kubelet[2203]: E0317 17:55:59.321188 2203 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Mar 17 17:55:59.432334 kubelet[2203]: E0317 17:55:59.432274 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" Mar 17 17:55:59.522895 kubelet[2203]: I0317 17:55:59.522839 2203 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:55:59.523326 kubelet[2203]: E0317 17:55:59.523142 2203 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Mar 17 17:55:59.562597 systemd[1]: Created slice kubepods-burstable-pod46b2690e42578bfcda20d1a51d4057e4.slice - libcontainer container kubepods-burstable-pod46b2690e42578bfcda20d1a51d4057e4.slice. Mar 17 17:55:59.596101 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 17 17:55:59.600654 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. 
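[Annotation] The kubelet has now read the three static control-plane pod manifests from /etc/kubernetes/manifests (the "Adding static pod path" record earlier); each static pod's UID is the 32-hex hash visible in the slice names, and with CgroupDriver "systemd" and CgroupVersion 2 from the nodeConfig dump above, each pod gets a nested slice. A sketch composing the conventional cgroupfs path for such a pod; the layout is the usual kubepods hierarchy, stated here as an assumption rather than an API guarantee.

    // Sketch: expected cgroup v2 path for a burstable static pod, given
    // the kubepods slice hierarchy created above. The layout is
    // conventional, not a stable API.
    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    func burstablePodCgroup(podUID string) string {
    	slice := "kubepods-burstable-pod" + podUID + ".slice"
    	return filepath.Join("/sys/fs/cgroup",
    		"kubepods.slice", "kubepods-burstable.slice", slice)
    }

    func main() {
    	// UID of kube-apiserver-localhost from the records above.
    	fmt.Println(burstablePodCgroup("46b2690e42578bfcda20d1a51d4057e4"))
    }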
Mar 17 17:55:59.634638 kubelet[2203]: I0317 17:55:59.634589 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46b2690e42578bfcda20d1a51d4057e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46b2690e42578bfcda20d1a51d4057e4\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:55:59.634638 kubelet[2203]: I0317 17:55:59.634641 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:55:59.634813 kubelet[2203]: I0317 17:55:59.634662 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:55:59.634813 kubelet[2203]: I0317 17:55:59.634689 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:55:59.634813 kubelet[2203]: I0317 17:55:59.634713 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46b2690e42578bfcda20d1a51d4057e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46b2690e42578bfcda20d1a51d4057e4\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:55:59.634813 kubelet[2203]: I0317 17:55:59.634730 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46b2690e42578bfcda20d1a51d4057e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46b2690e42578bfcda20d1a51d4057e4\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:55:59.634813 kubelet[2203]: I0317 17:55:59.634747 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:55:59.634930 kubelet[2203]: I0317 17:55:59.634827 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:55:59.634930 kubelet[2203]: I0317 17:55:59.634862 2203 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 17 17:55:59.720179 kubelet[2203]: W0317 17:55:59.720111 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Mar 17 17:55:59.720179 kubelet[2203]: E0317 17:55:59.720178 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:55:59.894005 kubelet[2203]: E0317 17:55:59.893567 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:55:59.894242 containerd[1486]: time="2025-03-17T17:55:59.894136641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46b2690e42578bfcda20d1a51d4057e4,Namespace:kube-system,Attempt:0,}" Mar 17 17:55:59.899305 kubelet[2203]: E0317 17:55:59.899281 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:55:59.899569 containerd[1486]: time="2025-03-17T17:55:59.899537931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 17 17:55:59.902883 kubelet[2203]: E0317 17:55:59.902859 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:55:59.903367 containerd[1486]: time="2025-03-17T17:55:59.903326816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 17 17:55:59.924559 kubelet[2203]: I0317 17:55:59.924523 2203 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:55:59.924856 kubelet[2203]: E0317 17:55:59.924804 2203 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Mar 17 17:56:00.078211 kubelet[2203]: W0317 17:56:00.078163 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Mar 17 17:56:00.078211 kubelet[2203]: E0317 17:56:00.078207 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:56:00.233752 kubelet[2203]: E0317 17:56:00.233623 2203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.132:6443: connect: connection refused" interval="1.6s" Mar 17 17:56:00.290243 kubelet[2203]: W0317 17:56:00.290192 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Mar 17 17:56:00.290313 kubelet[2203]: E0317 17:56:00.290251 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:56:00.372957 kubelet[2203]: W0317 17:56:00.372857 2203 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused Mar 17 17:56:00.372957 kubelet[2203]: E0317 17:56:00.372926 2203 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:56:00.587414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584647897.mount: Deactivated successfully. Mar 17 17:56:00.593496 containerd[1486]: time="2025-03-17T17:56:00.593458409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:56:00.596621 containerd[1486]: time="2025-03-17T17:56:00.596519119Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:56:00.597691 containerd[1486]: time="2025-03-17T17:56:00.597607190Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:56:00.599936 containerd[1486]: time="2025-03-17T17:56:00.599900571Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:56:00.600729 containerd[1486]: time="2025-03-17T17:56:00.600688699Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:56:00.601852 containerd[1486]: time="2025-03-17T17:56:00.601813769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:56:00.602982 containerd[1486]: time="2025-03-17T17:56:00.602862696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:56:00.603909 containerd[1486]: time="2025-03-17T17:56:00.603868332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Mar 17 17:56:00.604555 containerd[1486]: time="2025-03-17T17:56:00.604515696Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 710.279758ms" Mar 17 17:56:00.607558 containerd[1486]: time="2025-03-17T17:56:00.607531902Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 707.935262ms" Mar 17 17:56:00.608449 containerd[1486]: time="2025-03-17T17:56:00.608414878Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 704.981532ms" Mar 17 17:56:00.727245 kubelet[2203]: I0317 17:56:00.726904 2203 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:56:00.727902 kubelet[2203]: E0317 17:56:00.727865 2203 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" Mar 17 17:56:00.757860 containerd[1486]: time="2025-03-17T17:56:00.757744965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757829854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757844292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757921787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757875350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757924332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757934421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757554468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757615783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757633266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:00.758105 containerd[1486]: time="2025-03-17T17:56:00.757736590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:00.759950 containerd[1486]: time="2025-03-17T17:56:00.758601892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:00.780919 systemd[1]: Started cri-containerd-3f4fe215e30246d779f4ef0a34a911784272c4779ebd62e3a75a10532e467f13.scope - libcontainer container 3f4fe215e30246d779f4ef0a34a911784272c4779ebd62e3a75a10532e467f13. Mar 17 17:56:00.786356 systemd[1]: Started cri-containerd-113b6be5fb5c380800745fbc3ff3549d713c6949c77089bd3ff3c9e87cb714ea.scope - libcontainer container 113b6be5fb5c380800745fbc3ff3549d713c6949c77089bd3ff3c9e87cb714ea. Mar 17 17:56:00.788138 systemd[1]: Started cri-containerd-b92fda9f3831353ae2d9613bf1af5fcdc673a164c185d268dde039f34d95a18e.scope - libcontainer container b92fda9f3831353ae2d9613bf1af5fcdc673a164c185d268dde039f34d95a18e. Mar 17 17:56:00.824169 containerd[1486]: time="2025-03-17T17:56:00.824104012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f4fe215e30246d779f4ef0a34a911784272c4779ebd62e3a75a10532e467f13\"" Mar 17 17:56:00.827812 kubelet[2203]: E0317 17:56:00.827229 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:00.833287 containerd[1486]: time="2025-03-17T17:56:00.832953428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b92fda9f3831353ae2d9613bf1af5fcdc673a164c185d268dde039f34d95a18e\"" Mar 17 17:56:00.833838 kubelet[2203]: E0317 17:56:00.833813 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:00.835794 containerd[1486]: time="2025-03-17T17:56:00.835741527Z" level=info msg="CreateContainer within sandbox \"3f4fe215e30246d779f4ef0a34a911784272c4779ebd62e3a75a10532e467f13\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:56:00.836965 containerd[1486]: time="2025-03-17T17:56:00.836936027Z" level=info msg="CreateContainer within sandbox \"b92fda9f3831353ae2d9613bf1af5fcdc673a164c185d268dde039f34d95a18e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:56:00.837665 containerd[1486]: time="2025-03-17T17:56:00.837189532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:46b2690e42578bfcda20d1a51d4057e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"113b6be5fb5c380800745fbc3ff3549d713c6949c77089bd3ff3c9e87cb714ea\"" Mar 17 17:56:00.839098 kubelet[2203]: E0317 17:56:00.839079 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
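[Annotation] The repeated "Nameserver limits exceeded" records reflect the classic glibc limit of three nameserver lines: the node's resolv.conf lists more than three servers, so the kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8) and warns about the rest. A stdlib sketch of that trim; maxNS mirrors glibc's MAXNS, and the program reads a resolv.conf-style file from stdin.

    // Sketch of the trim behind "Nameserver limits exceeded": keep the
    // first three nameserver lines, report the omitted ones.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const maxNS = 3 // classic glibc MAXNS limit
    	var kept, dropped []string
    	sc := bufio.NewScanner(os.Stdin)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			if len(kept) < maxNS {
    				kept = append(kept, fields[1])
    			} else {
    				dropped = append(dropped, fields[1])
    			}
    		}
    	}
    	fmt.Println("applied:", strings.Join(kept, " "))
    	if len(dropped) > 0 {
    		fmt.Println("omitted:", strings.Join(dropped, " "))
    	}
    }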
Mar 17 17:56:00.840521 containerd[1486]: time="2025-03-17T17:56:00.840493438Z" level=info msg="CreateContainer within sandbox \"113b6be5fb5c380800745fbc3ff3549d713c6949c77089bd3ff3c9e87cb714ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:56:00.865443 containerd[1486]: time="2025-03-17T17:56:00.865393996Z" level=info msg="CreateContainer within sandbox \"3f4fe215e30246d779f4ef0a34a911784272c4779ebd62e3a75a10532e467f13\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8da49781b72cba6dff1fffb5ed00e0061b21244aaa040a8c08a37ce0f41ec797\"" Mar 17 17:56:00.866066 containerd[1486]: time="2025-03-17T17:56:00.866008027Z" level=info msg="StartContainer for \"8da49781b72cba6dff1fffb5ed00e0061b21244aaa040a8c08a37ce0f41ec797\"" Mar 17 17:56:00.870941 containerd[1486]: time="2025-03-17T17:56:00.870840600Z" level=info msg="CreateContainer within sandbox \"b92fda9f3831353ae2d9613bf1af5fcdc673a164c185d268dde039f34d95a18e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6af17ca8f601440d63423e9379f0ff325560708e10122ffeb2519dc523604be0\"" Mar 17 17:56:00.871163 containerd[1486]: time="2025-03-17T17:56:00.871142376Z" level=info msg="StartContainer for \"6af17ca8f601440d63423e9379f0ff325560708e10122ffeb2519dc523604be0\"" Mar 17 17:56:00.874246 containerd[1486]: time="2025-03-17T17:56:00.874209358Z" level=info msg="CreateContainer within sandbox \"113b6be5fb5c380800745fbc3ff3549d713c6949c77089bd3ff3c9e87cb714ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cb2cff386a94511f7a6d8dabb0045126c2bfe17aaeb8c53962d9652bffbcdf37\"" Mar 17 17:56:00.875582 containerd[1486]: time="2025-03-17T17:56:00.875540985Z" level=info msg="StartContainer for \"cb2cff386a94511f7a6d8dabb0045126c2bfe17aaeb8c53962d9652bffbcdf37\"" Mar 17 17:56:00.890227 kubelet[2203]: E0317 17:56:00.890172 2203 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:56:00.894018 systemd[1]: Started cri-containerd-8da49781b72cba6dff1fffb5ed00e0061b21244aaa040a8c08a37ce0f41ec797.scope - libcontainer container 8da49781b72cba6dff1fffb5ed00e0061b21244aaa040a8c08a37ce0f41ec797. Mar 17 17:56:00.908934 systemd[1]: Started cri-containerd-6af17ca8f601440d63423e9379f0ff325560708e10122ffeb2519dc523604be0.scope - libcontainer container 6af17ca8f601440d63423e9379f0ff325560708e10122ffeb2519dc523604be0. Mar 17 17:56:00.910130 systemd[1]: Started cri-containerd-cb2cff386a94511f7a6d8dabb0045126c2bfe17aaeb8c53962d9652bffbcdf37.scope - libcontainer container cb2cff386a94511f7a6d8dabb0045126c2bfe17aaeb8c53962d9652bffbcdf37. 
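[Annotation] Each CreateContainer call above returns a 64-hex container ID, and with the systemd cgroup driver the runtime wraps the shim's processes in a transient cri-containerd-<id>.scope unit, which is exactly what the "Started cri-containerd-…" records show. A tiny sketch validating the ID and composing the unit name; the naming pattern is read off this log, not a stable interface.

    // Sketch: the scope unit naming visible in the "Started
    // cri-containerd-…" records; the pattern is observed, not contractual.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var containerID = regexp.MustCompile(`^[0-9a-f]{64}$`)

    func scopeUnit(id string) (string, error) {
    	if !containerID.MatchString(id) {
    		return "", fmt.Errorf("not a container id: %q", id)
    	}
    	return "cri-containerd-" + id + ".scope", nil
    }

    func main() {
    	// kube-scheduler's container from the StartContainer record above.
    	fmt.Println(scopeUnit("8da49781b72cba6dff1fffb5ed00e0061b21244aaa040a8c08a37ce0f41ec797"))
    }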
Mar 17 17:56:00.944558 containerd[1486]: time="2025-03-17T17:56:00.944520075Z" level=info msg="StartContainer for \"8da49781b72cba6dff1fffb5ed00e0061b21244aaa040a8c08a37ce0f41ec797\" returns successfully" Mar 17 17:56:00.954564 containerd[1486]: time="2025-03-17T17:56:00.954492427Z" level=info msg="StartContainer for \"6af17ca8f601440d63423e9379f0ff325560708e10122ffeb2519dc523604be0\" returns successfully" Mar 17 17:56:00.970461 containerd[1486]: time="2025-03-17T17:56:00.970313377Z" level=info msg="StartContainer for \"cb2cff386a94511f7a6d8dabb0045126c2bfe17aaeb8c53962d9652bffbcdf37\" returns successfully" Mar 17 17:56:01.872810 kubelet[2203]: E0317 17:56:01.871118 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:01.872810 kubelet[2203]: E0317 17:56:01.872522 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:01.875830 kubelet[2203]: E0317 17:56:01.874185 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:02.003617 kubelet[2203]: E0317 17:56:02.003558 2203 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 17:56:02.330063 kubelet[2203]: I0317 17:56:02.329942 2203 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:56:02.337127 kubelet[2203]: I0317 17:56:02.337076 2203 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:56:02.337127 kubelet[2203]: E0317 17:56:02.337113 2203 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 17 17:56:02.344961 kubelet[2203]: E0317 17:56:02.344915 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:56:02.445244 kubelet[2203]: E0317 17:56:02.445191 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:56:02.545835 kubelet[2203]: E0317 17:56:02.545768 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:56:02.646414 kubelet[2203]: E0317 17:56:02.646308 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:56:02.746923 kubelet[2203]: E0317 17:56:02.746890 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:56:02.847015 kubelet[2203]: E0317 17:56:02.846972 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:56:02.875571 kubelet[2203]: E0317 17:56:02.875536 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:02.875571 kubelet[2203]: E0317 17:56:02.875584 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:02.876009 
kubelet[2203]: E0317 17:56:02.875714 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:02.948127 kubelet[2203]: E0317 17:56:02.947981 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:56:03.048599 kubelet[2203]: E0317 17:56:03.048553 2203 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:56:03.823033 kubelet[2203]: I0317 17:56:03.822990 2203 apiserver.go:52] "Watching apiserver" Mar 17 17:56:03.831194 kubelet[2203]: I0317 17:56:03.831158 2203 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:56:03.845579 systemd[1]: Reload requested from client PID 2481 ('systemctl') (unit session-7.scope)... Mar 17 17:56:03.845596 systemd[1]: Reloading... Mar 17 17:56:03.885584 kubelet[2203]: E0317 17:56:03.885545 2203 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:03.941807 zram_generator::config[2528]: No configuration found. Mar 17 17:56:04.062533 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:56:04.189794 systemd[1]: Reloading finished in 343 ms. Mar 17 17:56:04.215567 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:04.240353 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:56:04.240688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:04.240747 systemd[1]: kubelet.service: Consumed 863ms CPU time, 119.3M memory peak. Mar 17 17:56:04.251255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:56:04.417043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:56:04.421660 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:56:04.711827 kubelet[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:56:04.711827 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:56:04.711827 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
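[Annotation] Unlike the first kubelet instance, the restarted one (PID 2570, records below) finds an existing client certificate pair and logs that it is "Loading cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem" before contacting the apiserver, so this time node registration succeeds. A stdlib debugging sketch for checking that certificate's subject and validity window on such a node.

    // Debugging sketch: inspect the kubelet client certificate loaded by
    // the restarted kubelet below. Stdlib only; the file holds the cert
    // and key concatenated, so take the first CERTIFICATE block.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
    		if block.Type != "CERTIFICATE" {
    			continue
    		}
    		cert, err := x509.ParseCertificate(block.Bytes)
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
    			cert.Subject, cert.NotBefore, cert.NotAfter)
    		return
    	}
    	log.Fatal("no CERTIFICATE block found")
    }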
Mar 17 17:56:04.712286 kubelet[2570]: I0317 17:56:04.711980 2570 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:56:04.719886 kubelet[2570]: I0317 17:56:04.719839 2570 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:56:04.719886 kubelet[2570]: I0317 17:56:04.719868 2570 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:56:04.720188 kubelet[2570]: I0317 17:56:04.720168 2570 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:56:04.721727 kubelet[2570]: I0317 17:56:04.721694 2570 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:56:04.724320 kubelet[2570]: I0317 17:56:04.724292 2570 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:56:04.728732 kubelet[2570]: E0317 17:56:04.728693 2570 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:56:04.728732 kubelet[2570]: I0317 17:56:04.728724 2570 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:56:04.734042 kubelet[2570]: I0317 17:56:04.734003 2570 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:56:04.734177 kubelet[2570]: I0317 17:56:04.734145 2570 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:56:04.734298 kubelet[2570]: I0317 17:56:04.734266 2570 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:56:04.734536 kubelet[2570]: I0317 17:56:04.734291 2570 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:56:04.734536 kubelet[2570]: I0317 17:56:04.734536 2570 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:56:04.734714 kubelet[2570]: I0317 17:56:04.734568 2570 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:56:04.734714 kubelet[2570]: I0317 17:56:04.734664 2570 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:56:04.734834 kubelet[2570]: I0317 17:56:04.734817 2570 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:56:04.734834 kubelet[2570]: I0317 17:56:04.734833 2570 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:56:04.734888 kubelet[2570]: I0317 17:56:04.734863 2570 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:56:04.734888 kubelet[2570]: I0317 17:56:04.734877 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:56:04.738199 kubelet[2570]: I0317 17:56:04.735712 2570 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:56:04.738199 kubelet[2570]: I0317 17:56:04.736135 2570 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:56:04.738199 kubelet[2570]: I0317 17:56:04.736706 2570 server.go:1269] "Started kubelet" Mar 17 17:56:04.738199 kubelet[2570]: I0317 17:56:04.737834 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:56:04.738199 kubelet[2570]: I0317 17:56:04.737931 2570 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:56:04.738438 kubelet[2570]: I0317 17:56:04.738411 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:56:04.738573 kubelet[2570]: I0317 17:56:04.738529 2570 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:56:04.740135 kubelet[2570]: I0317 17:56:04.740100 2570 server.go:460] "Adding 
debug handlers to kubelet server"
Mar 17 17:56:04.743120 kubelet[2570]: I0317 17:56:04.742831 2570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 17:56:04.746956 kubelet[2570]: I0317 17:56:04.746670 2570 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 17 17:56:04.746956 kubelet[2570]: I0317 17:56:04.746826 2570 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 17 17:56:04.747128 kubelet[2570]: I0317 17:56:04.747021 2570 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:56:04.749432 kubelet[2570]: E0317 17:56:04.747729 2570 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:56:04.749432 kubelet[2570]: I0317 17:56:04.747943 2570 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:56:04.749432 kubelet[2570]: I0317 17:56:04.748049 2570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:56:04.752754 kubelet[2570]: I0317 17:56:04.752710 2570 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:56:04.753428 kubelet[2570]: E0317 17:56:04.753412 2570 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:56:04.758650 kubelet[2570]: I0317 17:56:04.758606 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:56:04.761922 kubelet[2570]: I0317 17:56:04.761897 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:56:04.762422 kubelet[2570]: I0317 17:56:04.762017 2570 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:56:04.762422 kubelet[2570]: I0317 17:56:04.762045 2570 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 17 17:56:04.762422 kubelet[2570]: E0317 17:56:04.762105 2570 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:56:04.790400 kubelet[2570]: I0317 17:56:04.790351 2570 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:56:04.790400 kubelet[2570]: I0317 17:56:04.790378 2570 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:56:04.790400 kubelet[2570]: I0317 17:56:04.790403 2570 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:56:04.790644 kubelet[2570]: I0317 17:56:04.790584 2570 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 17:56:04.790644 kubelet[2570]: I0317 17:56:04.790602 2570 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 17:56:04.790644 kubelet[2570]: I0317 17:56:04.790631 2570 policy_none.go:49] "None policy: Start"
Mar 17 17:56:04.791389 kubelet[2570]: I0317 17:56:04.791367 2570 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:56:04.791427 kubelet[2570]: I0317 17:56:04.791399 2570 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:56:04.791636 kubelet[2570]: I0317 17:56:04.791613 2570 state_mem.go:75] "Updated machine memory state"
Mar 17 17:56:04.796358 kubelet[2570]: I0317 17:56:04.796332 2570 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:56:04.796598 kubelet[2570]: I0317 17:56:04.796558 2570 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 17:56:04.796598 kubelet[2570]: I0317 17:56:04.796581 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:56:04.796876 kubelet[2570]: I0317 17:56:04.796858 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:56:04.847556 sudo[2606]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 17:56:04.847962 sudo[2606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 17 17:56:04.872767 kubelet[2570]: E0317 17:56:04.872728 2570 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 17 17:56:04.900908 kubelet[2570]: I0317 17:56:04.900869 2570 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 17:56:04.908335 kubelet[2570]: I0317 17:56:04.907992 2570 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Mar 17 17:56:04.908335 kubelet[2570]: I0317 17:56:04.908094 2570 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 17 17:56:04.948476 kubelet[2570]: I0317 17:56:04.948397 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46b2690e42578bfcda20d1a51d4057e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"46b2690e42578bfcda20d1a51d4057e4\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:56:04.948476 kubelet[2570]: I0317 17:56:04.948458 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:56:04.948629 kubelet[2570]: I0317 17:56:04.948516 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:56:04.948629 kubelet[2570]: I0317 17:56:04.948543 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:56:04.948629 kubelet[2570]: I0317 17:56:04.948560 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 17:56:04.948629 kubelet[2570]: I0317 17:56:04.948578 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:56:04.948629 kubelet[2570]: I0317 17:56:04.948592 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:56:04.948754 kubelet[2570]: I0317 17:56:04.948607 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46b2690e42578bfcda20d1a51d4057e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"46b2690e42578bfcda20d1a51d4057e4\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:56:04.948754 kubelet[2570]: I0317 17:56:04.948639 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46b2690e42578bfcda20d1a51d4057e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"46b2690e42578bfcda20d1a51d4057e4\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:56:05.173208 kubelet[2570]: E0317 17:56:05.172832 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:05.173208 kubelet[2570]: E0317 17:56:05.172832 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:05.173208 kubelet[2570]: E0317 17:56:05.172922 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:05.329127 sudo[2606]: pam_unix(sudo:session): session closed for user root
Mar 17 17:56:05.735399 kubelet[2570]: I0317 17:56:05.735298 2570 apiserver.go:52] "Watching apiserver"
Mar 17 17:56:05.747624 kubelet[2570]: I0317 17:56:05.747592 2570 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 17:56:05.772366 kubelet[2570]: E0317 17:56:05.772308 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:05.772366 kubelet[2570]: E0317 17:56:05.772364 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:05.775973 kubelet[2570]: E0317 17:56:05.775947 2570 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:56:05.776108 kubelet[2570]: E0317 17:56:05.776080 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:05.800817 kubelet[2570]: I0317 17:56:05.800720 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.800700457 podStartE2EDuration="2.800700457s" podCreationTimestamp="2025-03-17 17:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:05.792227607 +0000 UTC m=+1.366820307" watchObservedRunningTime="2025-03-17 17:56:05.800700457 +0000 UTC m=+1.375293157"
Mar 17 17:56:05.800988 kubelet[2570]: I0317 17:56:05.800862 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8008566400000001 podStartE2EDuration="1.80085664s" podCreationTimestamp="2025-03-17 17:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:05.800821875 +0000 UTC m=+1.375414575" watchObservedRunningTime="2025-03-17 17:56:05.80085664 +0000 UTC m=+1.375449350"
Mar 17 17:56:05.812264 kubelet[2570]: I0317 17:56:05.812185 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.812166269 podStartE2EDuration="1.812166269s" podCreationTimestamp="2025-03-17 17:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:05.805984787 +0000 UTC m=+1.380577487" watchObservedRunningTime="2025-03-17 17:56:05.812166269 +0000 UTC m=+1.386758979"
Mar 17 17:56:06.592137 sudo[1672]: pam_unix(sudo:session): session closed for user root
Mar 17 17:56:06.593690 sshd[1671]: Connection closed by 10.0.0.1 port 46668
Mar 17 17:56:06.594136 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Mar 17 17:56:06.597671 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:46668.service: Deactivated successfully.
Mar 17 17:56:06.599813 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:56:06.600024 systemd[1]: session-7.scope: Consumed 4.624s CPU time, 254M memory peak.
Mar 17 17:56:06.601153 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:56:06.601989 systemd-logind[1470]: Removed session 7.
Mar 17 17:56:06.773586 kubelet[2570]: E0317 17:56:06.773555 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:06.773586 kubelet[2570]: E0317 17:56:06.773572 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:06.774079 kubelet[2570]: E0317 17:56:06.773555 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:10.879722 kubelet[2570]: I0317 17:56:10.879676 2570 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 17:56:10.880183 kubelet[2570]: I0317 17:56:10.880170 2570 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 17:56:10.880214 containerd[1486]: time="2025-03-17T17:56:10.880018804Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:56:12.050643 systemd[1]: Created slice kubepods-besteffort-pod74067c67_6482_4cc5_89c6_4e3d3a48df7c.slice - libcontainer container kubepods-besteffort-pod74067c67_6482_4cc5_89c6_4e3d3a48df7c.slice.
Mar 17 17:56:12.075613 systemd[1]: Created slice kubepods-burstable-pod63cf0a34_d08f_4429_9ae7_9ffc143d0919.slice - libcontainer container kubepods-burstable-pod63cf0a34_d08f_4429_9ae7_9ffc143d0919.slice.
Mar 17 17:56:12.082072 systemd[1]: Created slice kubepods-besteffort-pod840becc0_4d43_4836_aed6_609cb30d4f47.slice - libcontainer container kubepods-besteffort-pod840becc0_4d43_4836_aed6_609cb30d4f47.slice.
Mar 17 17:56:12.104547 kubelet[2570]: I0317 17:56:12.104480 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74067c67-6482-4cc5-89c6-4e3d3a48df7c-cilium-config-path\") pod \"cilium-operator-5d85765b45-j4fvd\" (UID: \"74067c67-6482-4cc5-89c6-4e3d3a48df7c\") " pod="kube-system/cilium-operator-5d85765b45-j4fvd"
Mar 17 17:56:12.104547 kubelet[2570]: I0317 17:56:12.104535 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/840becc0-4d43-4836-aed6-609cb30d4f47-kube-proxy\") pod \"kube-proxy-44fsx\" (UID: \"840becc0-4d43-4836-aed6-609cb30d4f47\") " pod="kube-system/kube-proxy-44fsx"
Mar 17 17:56:12.104547 kubelet[2570]: I0317 17:56:12.104560 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/63cf0a34-d08f-4429-9ae7-9ffc143d0919-hubble-tls\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105200 kubelet[2570]: I0317 17:56:12.104578 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-hostproc\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105200 kubelet[2570]: I0317 17:56:12.104592 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cni-path\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105200 kubelet[2570]: I0317 17:56:12.104605 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-host-proc-sys-net\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105200 kubelet[2570]: I0317 17:56:12.104619 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcpj\" (UniqueName: \"kubernetes.io/projected/63cf0a34-d08f-4429-9ae7-9ffc143d0919-kube-api-access-dfcpj\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105200 kubelet[2570]: I0317 17:56:12.104633 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv5qc\" (UniqueName: \"kubernetes.io/projected/840becc0-4d43-4836-aed6-609cb30d4f47-kube-api-access-gv5qc\") pod \"kube-proxy-44fsx\" (UID: \"840becc0-4d43-4836-aed6-609cb30d4f47\") " pod="kube-system/kube-proxy-44fsx"
Mar 17 17:56:12.105200 kubelet[2570]: I0317 17:56:12.104649 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-etc-cni-netd\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105397 kubelet[2570]: I0317 17:56:12.104666 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-lib-modules\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105397 kubelet[2570]: I0317 17:56:12.104684 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-xtables-lock\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105397 kubelet[2570]: I0317 17:56:12.104702 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-host-proc-sys-kernel\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105397 kubelet[2570]: I0317 17:56:12.104720 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-cgroup\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105397 kubelet[2570]: I0317 17:56:12.104735 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-config-path\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105397 kubelet[2570]: I0317 17:56:12.104749 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/63cf0a34-d08f-4429-9ae7-9ffc143d0919-clustermesh-secrets\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105591 kubelet[2570]: I0317 17:56:12.104798 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/840becc0-4d43-4836-aed6-609cb30d4f47-xtables-lock\") pod \"kube-proxy-44fsx\" (UID: \"840becc0-4d43-4836-aed6-609cb30d4f47\") " pod="kube-system/kube-proxy-44fsx"
Mar 17 17:56:12.105591 kubelet[2570]: I0317 17:56:12.104814 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/840becc0-4d43-4836-aed6-609cb30d4f47-lib-modules\") pod \"kube-proxy-44fsx\" (UID: \"840becc0-4d43-4836-aed6-609cb30d4f47\") " pod="kube-system/kube-proxy-44fsx"
Mar 17 17:56:12.105591 kubelet[2570]: I0317 17:56:12.104831 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-run\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105591 kubelet[2570]: I0317 17:56:12.104852 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-bpf-maps\") pod \"cilium-8qh2q\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " pod="kube-system/cilium-8qh2q"
Mar 17 17:56:12.105591 kubelet[2570]: I0317 17:56:12.104869 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7dm9\" (UniqueName: \"kubernetes.io/projected/74067c67-6482-4cc5-89c6-4e3d3a48df7c-kube-api-access-w7dm9\") pod \"cilium-operator-5d85765b45-j4fvd\" (UID: \"74067c67-6482-4cc5-89c6-4e3d3a48df7c\") " pod="kube-system/cilium-operator-5d85765b45-j4fvd"
Mar 17 17:56:12.364582 kubelet[2570]: E0317 17:56:12.364428 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:12.365046 containerd[1486]: time="2025-03-17T17:56:12.365008694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j4fvd,Uid:74067c67-6482-4cc5-89c6-4e3d3a48df7c,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:12.380243 kubelet[2570]: E0317 17:56:12.380210 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:12.380757 containerd[1486]: time="2025-03-17T17:56:12.380685641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qh2q,Uid:63cf0a34-d08f-4429-9ae7-9ffc143d0919,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:12.384537 kubelet[2570]: E0317 17:56:12.384509 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:12.385207 containerd[1486]: time="2025-03-17T17:56:12.385009996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44fsx,Uid:840becc0-4d43-4836-aed6-609cb30d4f47,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:12.389687 containerd[1486]: time="2025-03-17T17:56:12.389609361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:56:12.389687 containerd[1486]: time="2025-03-17T17:56:12.389664766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:56:12.389687 containerd[1486]: time="2025-03-17T17:56:12.389675677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:12.390403 containerd[1486]: time="2025-03-17T17:56:12.390348972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:12.414953 systemd[1]: Started cri-containerd-8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b.scope - libcontainer container 8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b.
Mar 17 17:56:12.429573 containerd[1486]: time="2025-03-17T17:56:12.429493149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:56:12.429573 containerd[1486]: time="2025-03-17T17:56:12.429537423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:56:12.429573 containerd[1486]: time="2025-03-17T17:56:12.429547712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:12.429822 containerd[1486]: time="2025-03-17T17:56:12.429614850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:12.435066 containerd[1486]: time="2025-03-17T17:56:12.434915535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:56:12.435293 containerd[1486]: time="2025-03-17T17:56:12.435115103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:56:12.435293 containerd[1486]: time="2025-03-17T17:56:12.435140370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:12.435488 containerd[1486]: time="2025-03-17T17:56:12.435391927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:56:12.449953 systemd[1]: Started cri-containerd-d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2.scope - libcontainer container d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2.
Mar 17 17:56:12.454819 systemd[1]: Started cri-containerd-3c8bcf882b2c70dbe200a73d6564de9a928ef0c1449ef89de543295e78168236.scope - libcontainer container 3c8bcf882b2c70dbe200a73d6564de9a928ef0c1449ef89de543295e78168236.
Mar 17 17:56:12.471988 containerd[1486]: time="2025-03-17T17:56:12.471933262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j4fvd,Uid:74067c67-6482-4cc5-89c6-4e3d3a48df7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\""
Mar 17 17:56:12.473327 kubelet[2570]: E0317 17:56:12.472883 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:12.474149 containerd[1486]: time="2025-03-17T17:56:12.474122771Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 17:56:12.490299 containerd[1486]: time="2025-03-17T17:56:12.490236265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qh2q,Uid:63cf0a34-d08f-4429-9ae7-9ffc143d0919,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\""
Mar 17 17:56:12.491037 kubelet[2570]: E0317 17:56:12.490883 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:12.500722 containerd[1486]: time="2025-03-17T17:56:12.500679434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-44fsx,Uid:840becc0-4d43-4836-aed6-609cb30d4f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c8bcf882b2c70dbe200a73d6564de9a928ef0c1449ef89de543295e78168236\""
Mar 17 17:56:12.501520 kubelet[2570]: E0317 17:56:12.501498 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:12.503545 containerd[1486]: time="2025-03-17T17:56:12.503504017Z" level=info msg="CreateContainer within sandbox \"3c8bcf882b2c70dbe200a73d6564de9a928ef0c1449ef89de543295e78168236\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:56:12.520489 containerd[1486]: time="2025-03-17T17:56:12.520434259Z" level=info msg="CreateContainer within sandbox \"3c8bcf882b2c70dbe200a73d6564de9a928ef0c1449ef89de543295e78168236\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6e29b34a84871775b0a158821a992d47b8b18315b6a1bdc9f938bd51cbd3d6b4\""
Mar 17 17:56:12.521143 containerd[1486]: time="2025-03-17T17:56:12.521103127Z" level=info msg="StartContainer for \"6e29b34a84871775b0a158821a992d47b8b18315b6a1bdc9f938bd51cbd3d6b4\""
Mar 17 17:56:12.549965 systemd[1]: Started cri-containerd-6e29b34a84871775b0a158821a992d47b8b18315b6a1bdc9f938bd51cbd3d6b4.scope - libcontainer container 6e29b34a84871775b0a158821a992d47b8b18315b6a1bdc9f938bd51cbd3d6b4.
Mar 17 17:56:12.584018 containerd[1486]: time="2025-03-17T17:56:12.583970698Z" level=info msg="StartContainer for \"6e29b34a84871775b0a158821a992d47b8b18315b6a1bdc9f938bd51cbd3d6b4\" returns successfully"
Mar 17 17:56:12.782299 kubelet[2570]: E0317 17:56:12.782268 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:12.790976 kubelet[2570]: I0317 17:56:12.790922 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-44fsx" podStartSLOduration=0.790905782 podStartE2EDuration="790.905782ms" podCreationTimestamp="2025-03-17 17:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:12.790565638 +0000 UTC m=+8.365158338" watchObservedRunningTime="2025-03-17 17:56:12.790905782 +0000 UTC m=+8.365498482"
Mar 17 17:56:12.914933 kubelet[2570]: E0317 17:56:12.914899 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:13.463234 kubelet[2570]: E0317 17:56:13.463191 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:13.785536 kubelet[2570]: E0317 17:56:13.785410 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:13.786091 kubelet[2570]: E0317 17:56:13.785931 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:14.787038 kubelet[2570]: E0317 17:56:14.786995 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:15.420909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890348173.mount: Deactivated successfully.
Mar 17 17:56:16.457408 kubelet[2570]: E0317 17:56:16.457371 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:17.050439 containerd[1486]: time="2025-03-17T17:56:17.050361638Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:17.074964 containerd[1486]: time="2025-03-17T17:56:17.074917828Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 17 17:56:17.092305 containerd[1486]: time="2025-03-17T17:56:17.092261860Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:17.093506 containerd[1486]: time="2025-03-17T17:56:17.093470254Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.619315012s"
Mar 17 17:56:17.093584 containerd[1486]: time="2025-03-17T17:56:17.093507413Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 17:56:17.094815 containerd[1486]: time="2025-03-17T17:56:17.094790347Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 17:56:17.095551 containerd[1486]: time="2025-03-17T17:56:17.095512803Z" level=info msg="CreateContainer within sandbox \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 17:56:17.565237 containerd[1486]: time="2025-03-17T17:56:17.565200763Z" level=info msg="CreateContainer within sandbox \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\""
Mar 17 17:56:17.565558 containerd[1486]: time="2025-03-17T17:56:17.565523874Z" level=info msg="StartContainer for \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\""
Mar 17 17:56:17.603948 systemd[1]: Started cri-containerd-d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18.scope - libcontainer container d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18.
Mar 17 17:56:17.727161 containerd[1486]: time="2025-03-17T17:56:17.727093198Z" level=info msg="StartContainer for \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\" returns successfully"
Mar 17 17:56:17.791064 kubelet[2570]: E0317 17:56:17.791029 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:18.792430 kubelet[2570]: E0317 17:56:18.792382 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:18.900679 update_engine[1474]: I20250317 17:56:18.900589 1474 update_attempter.cc:509] Updating boot flags...
Mar 17 17:56:18.936807 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3006)
Mar 17 17:56:18.988981 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3006)
Mar 17 17:56:19.044860 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3006)
Mar 17 17:56:26.084177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3715339824.mount: Deactivated successfully.
Mar 17 17:56:31.576705 containerd[1486]: time="2025-03-17T17:56:31.576617749Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:31.617962 containerd[1486]: time="2025-03-17T17:56:31.617881080Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 17 17:56:31.646487 containerd[1486]: time="2025-03-17T17:56:31.646425555Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:56:31.648122 containerd[1486]: time="2025-03-17T17:56:31.648080428Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.55325808s"
Mar 17 17:56:31.648122 containerd[1486]: time="2025-03-17T17:56:31.648109303Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 17:56:31.651395 containerd[1486]: time="2025-03-17T17:56:31.651361029Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:56:31.892192 containerd[1486]: time="2025-03-17T17:56:31.892120214Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78\""
Mar 17 17:56:31.893800 containerd[1486]: time="2025-03-17T17:56:31.892714912Z" level=info msg="StartContainer for \"c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78\""
Mar 17 17:56:31.925946 systemd[1]: Started cri-containerd-c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78.scope - libcontainer container c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78.
Mar 17 17:56:31.975492 systemd[1]: cri-containerd-c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78.scope: Deactivated successfully.
Mar 17 17:56:31.983562 containerd[1486]: time="2025-03-17T17:56:31.983527381Z" level=info msg="StartContainer for \"c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78\" returns successfully"
Mar 17 17:56:32.799592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78-rootfs.mount: Deactivated successfully.
Mar 17 17:56:32.816147 kubelet[2570]: E0317 17:56:32.816103 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:32.970338 kubelet[2570]: I0317 17:56:32.970248 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-j4fvd" podStartSLOduration=16.349770099 podStartE2EDuration="20.970229986s" podCreationTimestamp="2025-03-17 17:56:12 +0000 UTC" firstStartedPulling="2025-03-17 17:56:12.473786493 +0000 UTC m=+8.048379193" lastFinishedPulling="2025-03-17 17:56:17.09424637 +0000 UTC m=+12.668839080" observedRunningTime="2025-03-17 17:56:17.881796328 +0000 UTC m=+13.456389028" watchObservedRunningTime="2025-03-17 17:56:32.970229986 +0000 UTC m=+28.544822687"
Mar 17 17:56:33.017533 containerd[1486]: time="2025-03-17T17:56:33.017467343Z" level=info msg="shim disconnected" id=c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78 namespace=k8s.io
Mar 17 17:56:33.017533 containerd[1486]: time="2025-03-17T17:56:33.017528077Z" level=warning msg="cleaning up after shim disconnected" id=c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78 namespace=k8s.io
Mar 17 17:56:33.017533 containerd[1486]: time="2025-03-17T17:56:33.017538667Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:33.818870 kubelet[2570]: E0317 17:56:33.818833 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:33.821197 containerd[1486]: time="2025-03-17T17:56:33.821134919Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:56:34.052117 containerd[1486]: time="2025-03-17T17:56:34.052040782Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04\""
Mar 17 17:56:34.052642 containerd[1486]: time="2025-03-17T17:56:34.052614871Z" level=info msg="StartContainer for \"6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04\""
Mar 17 17:56:34.086956 systemd[1]: Started cri-containerd-6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04.scope - libcontainer container 6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04.
Mar 17 17:56:34.140213 containerd[1486]: time="2025-03-17T17:56:34.140159603Z" level=info msg="StartContainer for \"6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04\" returns successfully"
Mar 17 17:56:34.140420 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:56:34.141106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:56:34.141401 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:56:34.150280 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:56:34.154982 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:56:34.155659 systemd[1]: cri-containerd-6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04.scope: Deactivated successfully.
Mar 17 17:56:34.170935 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:56:34.233173 containerd[1486]: time="2025-03-17T17:56:34.233080785Z" level=info msg="shim disconnected" id=6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04 namespace=k8s.io
Mar 17 17:56:34.233173 containerd[1486]: time="2025-03-17T17:56:34.233164862Z" level=warning msg="cleaning up after shim disconnected" id=6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04 namespace=k8s.io
Mar 17 17:56:34.233173 containerd[1486]: time="2025-03-17T17:56:34.233176274Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:34.822653 kubelet[2570]: E0317 17:56:34.822611 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:34.824339 containerd[1486]: time="2025-03-17T17:56:34.824301817Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:56:34.999562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04-rootfs.mount: Deactivated successfully.
Mar 17 17:56:35.161027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686013742.mount: Deactivated successfully.
Mar 17 17:56:35.281904 containerd[1486]: time="2025-03-17T17:56:35.281842514Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3\""
Mar 17 17:56:35.282334 containerd[1486]: time="2025-03-17T17:56:35.282299603Z" level=info msg="StartContainer for \"2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3\""
Mar 17 17:56:35.339021 systemd[1]: Started cri-containerd-2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3.scope - libcontainer container 2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3.
Mar 17 17:56:35.380497 systemd[1]: cri-containerd-2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3.scope: Deactivated successfully.
Mar 17 17:56:35.401225 containerd[1486]: time="2025-03-17T17:56:35.401136062Z" level=info msg="StartContainer for \"2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3\" returns successfully"
Mar 17 17:56:35.506441 containerd[1486]: time="2025-03-17T17:56:35.506244325Z" level=info msg="shim disconnected" id=2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3 namespace=k8s.io
Mar 17 17:56:35.506441 containerd[1486]: time="2025-03-17T17:56:35.506307383Z" level=warning msg="cleaning up after shim disconnected" id=2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3 namespace=k8s.io
Mar 17 17:56:35.506441 containerd[1486]: time="2025-03-17T17:56:35.506318594Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:35.797199 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:58934.service - OpenSSH per-connection server daemon (10.0.0.1:58934).
Mar 17 17:56:35.825575 kubelet[2570]: E0317 17:56:35.825424 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:35.828347 containerd[1486]: time="2025-03-17T17:56:35.828303120Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:56:35.855089 sshd[3227]: Accepted publickey for core from 10.0.0.1 port 58934 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:56:35.856688 sshd-session[3227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:56:35.860974 systemd-logind[1470]: New session 8 of user core.
Mar 17 17:56:35.871023 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:56:36.000479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3-rootfs.mount: Deactivated successfully.
Mar 17 17:56:36.030055 sshd[3229]: Connection closed by 10.0.0.1 port 58934
Mar 17 17:56:36.030479 sshd-session[3227]: pam_unix(sshd:session): session closed for user core
Mar 17 17:56:36.034866 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:58934.service: Deactivated successfully.
Mar 17 17:56:36.037529 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:56:36.038321 systemd-logind[1470]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:56:36.039368 systemd-logind[1470]: Removed session 8.
Mar 17 17:56:36.197728 containerd[1486]: time="2025-03-17T17:56:36.197680201Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d\""
Mar 17 17:56:36.198151 containerd[1486]: time="2025-03-17T17:56:36.198127472Z" level=info msg="StartContainer for \"6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d\""
Mar 17 17:56:36.230952 systemd[1]: Started cri-containerd-6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d.scope - libcontainer container 6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d.
Mar 17 17:56:36.256556 systemd[1]: cri-containerd-6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d.scope: Deactivated successfully.
Mar 17 17:56:36.258726 containerd[1486]: time="2025-03-17T17:56:36.258683578Z" level=info msg="StartContainer for \"6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d\" returns successfully"
Mar 17 17:56:36.289673 containerd[1486]: time="2025-03-17T17:56:36.289595537Z" level=info msg="shim disconnected" id=6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d namespace=k8s.io
Mar 17 17:56:36.289673 containerd[1486]: time="2025-03-17T17:56:36.289662965Z" level=warning msg="cleaning up after shim disconnected" id=6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d namespace=k8s.io
Mar 17 17:56:36.289673 containerd[1486]: time="2025-03-17T17:56:36.289673735Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:56:36.829442 kubelet[2570]: E0317 17:56:36.829387 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:36.831006 containerd[1486]: time="2025-03-17T17:56:36.830957610Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:56:37.000208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d-rootfs.mount: Deactivated successfully.
Mar 17 17:56:37.089231 containerd[1486]: time="2025-03-17T17:56:37.089105120Z" level=info msg="CreateContainer within sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7\""
Mar 17 17:56:37.089614 containerd[1486]: time="2025-03-17T17:56:37.089579221Z" level=info msg="StartContainer for \"2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7\""
Mar 17 17:56:37.116981 systemd[1]: Started cri-containerd-2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7.scope - libcontainer container 2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7.
Mar 17 17:56:37.211442 containerd[1486]: time="2025-03-17T17:56:37.211396458Z" level=info msg="StartContainer for \"2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7\" returns successfully"
Mar 17 17:56:37.322020 kubelet[2570]: I0317 17:56:37.321983 2570 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 17 17:56:37.445411 systemd[1]: Created slice kubepods-burstable-podd98337f5_1a9c_4237_a931_93606293f5f7.slice - libcontainer container kubepods-burstable-podd98337f5_1a9c_4237_a931_93606293f5f7.slice.
Mar 17 17:56:37.486225 systemd[1]: Created slice kubepods-burstable-pod87ebee5f_cb02_4243_957b_2027b36513d8.slice - libcontainer container kubepods-burstable-pod87ebee5f_cb02_4243_957b_2027b36513d8.slice.
Mar 17 17:56:37.638727 kubelet[2570]: I0317 17:56:37.638675 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87ebee5f-cb02-4243-957b-2027b36513d8-config-volume\") pod \"coredns-6f6b679f8f-bxxlb\" (UID: \"87ebee5f-cb02-4243-957b-2027b36513d8\") " pod="kube-system/coredns-6f6b679f8f-bxxlb"
Mar 17 17:56:37.638727 kubelet[2570]: I0317 17:56:37.638714 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scqnf\" (UniqueName: \"kubernetes.io/projected/87ebee5f-cb02-4243-957b-2027b36513d8-kube-api-access-scqnf\") pod \"coredns-6f6b679f8f-bxxlb\" (UID: \"87ebee5f-cb02-4243-957b-2027b36513d8\") " pod="kube-system/coredns-6f6b679f8f-bxxlb"
Mar 17 17:56:37.638998 kubelet[2570]: I0317 17:56:37.638741 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d98337f5-1a9c-4237-a931-93606293f5f7-config-volume\") pod \"coredns-6f6b679f8f-w4rkn\" (UID: \"d98337f5-1a9c-4237-a931-93606293f5f7\") " pod="kube-system/coredns-6f6b679f8f-w4rkn"
Mar 17 17:56:37.638998 kubelet[2570]: I0317 17:56:37.638766 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4xhr\" (UniqueName: \"kubernetes.io/projected/d98337f5-1a9c-4237-a931-93606293f5f7-kube-api-access-j4xhr\") pod \"coredns-6f6b679f8f-w4rkn\" (UID: \"d98337f5-1a9c-4237-a931-93606293f5f7\") " pod="kube-system/coredns-6f6b679f8f-w4rkn"
Mar 17 17:56:37.789098 kubelet[2570]: E0317 17:56:37.788949 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:37.797981 containerd[1486]: time="2025-03-17T17:56:37.797925991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxxlb,Uid:87ebee5f-cb02-4243-957b-2027b36513d8,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:37.834806 kubelet[2570]: E0317 17:56:37.834021 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:38.049204 kubelet[2570]: E0317 17:56:38.049023 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:38.049635 containerd[1486]: time="2025-03-17T17:56:38.049597936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w4rkn,Uid:d98337f5-1a9c-4237-a931-93606293f5f7,Namespace:kube-system,Attempt:0,}"
Mar 17 17:56:38.835911 kubelet[2570]: E0317 17:56:38.835866 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:39.323524 systemd-networkd[1404]: cilium_host: Link UP
Mar 17 17:56:39.323744 systemd-networkd[1404]: cilium_net: Link UP
Mar 17 17:56:39.324030 systemd-networkd[1404]: cilium_net: Gained carrier
Mar 17 17:56:39.324253 systemd-networkd[1404]: cilium_host: Gained carrier
Mar 17 17:56:39.324424 systemd-networkd[1404]: cilium_net: Gained IPv6LL
Mar 17 17:56:39.324619 systemd-networkd[1404]: cilium_host: Gained IPv6LL
Mar 17 17:56:39.421823 systemd-networkd[1404]: cilium_vxlan: Link UP
Mar 17 17:56:39.422384 systemd-networkd[1404]: cilium_vxlan: Gained carrier
Mar 17 17:56:39.810763 kernel: NET: Registered PF_ALG protocol family
Mar 17 17:56:39.849679 kubelet[2570]: E0317 17:56:39.844042 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:56:40.677745 systemd-networkd[1404]: lxc_health: Link UP
Mar 17 17:56:40.686851 systemd-networkd[1404]: lxc_health: Gained carrier
Mar 17 17:56:41.057048 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:58944.service - OpenSSH per-connection server daemon (10.0.0.1:58944).
Mar 17 17:56:41.101188 sshd[3794]: Accepted publickey for core from 10.0.0.1 port 58944 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:56:41.102863 sshd-session[3794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:56:41.107338 systemd-logind[1470]: New session 9 of user core.
Mar 17 17:56:41.116958 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:56:41.170427 systemd-networkd[1404]: lxc3a451d1079e7: Link UP
Mar 17 17:56:41.179806 kernel: eth0: renamed from tmp843bc
Mar 17 17:56:41.189682 systemd-networkd[1404]: lxc604ab18e792e: Link UP
Mar 17 17:56:41.190087 systemd-networkd[1404]: lxc3a451d1079e7: Gained carrier
Mar 17 17:56:41.199570 systemd-networkd[1404]: cilium_vxlan: Gained IPv6LL
Mar 17 17:56:41.211853 kernel: eth0: renamed from tmpea997
Mar 17 17:56:41.218118 systemd-networkd[1404]: lxc604ab18e792e: Gained carrier
Mar 17 17:56:41.361520 sshd[3796]: Connection closed by 10.0.0.1 port 58944
Mar 17 17:56:41.363485 sshd-session[3794]: pam_unix(sshd:session): session closed for user core
Mar 17 17:56:41.367769 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:58944.service: Deactivated successfully.
Mar 17 17:56:41.370423 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:56:41.371643 systemd-logind[1470]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:56:41.372669 systemd-logind[1470]: Removed session 9.
Mar 17 17:56:42.382020 kubelet[2570]: E0317 17:56:42.381696 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:42.432163 kubelet[2570]: I0317 17:56:42.432089 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8qh2q" podStartSLOduration=11.276025572 podStartE2EDuration="30.432071045s" podCreationTimestamp="2025-03-17 17:56:12 +0000 UTC" firstStartedPulling="2025-03-17 17:56:12.492921664 +0000 UTC m=+8.067514364" lastFinishedPulling="2025-03-17 17:56:31.648967137 +0000 UTC m=+27.223559837" observedRunningTime="2025-03-17 17:56:37.908349977 +0000 UTC m=+33.482942697" watchObservedRunningTime="2025-03-17 17:56:42.432071045 +0000 UTC m=+38.006663745" Mar 17 17:56:42.603022 systemd-networkd[1404]: lxc_health: Gained IPv6LL Mar 17 17:56:42.603314 systemd-networkd[1404]: lxc3a451d1079e7: Gained IPv6LL Mar 17 17:56:42.603476 systemd-networkd[1404]: lxc604ab18e792e: Gained IPv6LL Mar 17 17:56:42.848500 kubelet[2570]: E0317 17:56:42.848361 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:43.850203 kubelet[2570]: E0317 17:56:43.850167 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:44.730580 containerd[1486]: time="2025-03-17T17:56:44.730492510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:44.731365 containerd[1486]: time="2025-03-17T17:56:44.730552572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:44.731365 containerd[1486]: time="2025-03-17T17:56:44.730567170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:44.731365 containerd[1486]: time="2025-03-17T17:56:44.730614488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:56:44.731365 containerd[1486]: time="2025-03-17T17:56:44.730666326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:56:44.731365 containerd[1486]: time="2025-03-17T17:56:44.730676605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:44.731365 containerd[1486]: time="2025-03-17T17:56:44.730753379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:44.734349 containerd[1486]: time="2025-03-17T17:56:44.734204199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:56:44.764946 systemd[1]: Started cri-containerd-843bc9ef78e257b2a9498b8856dbf876f085ed92ec23867aaf365f7032d7368c.scope - libcontainer container 843bc9ef78e257b2a9498b8856dbf876f085ed92ec23867aaf365f7032d7368c. 
Mar 17 17:56:44.766940 systemd[1]: Started cri-containerd-ea99710aac652964910e2cf83a28fadbc230d83feeccec9206280b26ae967cc6.scope - libcontainer container ea99710aac652964910e2cf83a28fadbc230d83feeccec9206280b26ae967cc6. Mar 17 17:56:44.778161 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:56:44.782766 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:56:44.803961 containerd[1486]: time="2025-03-17T17:56:44.803907721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxxlb,Uid:87ebee5f-cb02-4243-957b-2027b36513d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"843bc9ef78e257b2a9498b8856dbf876f085ed92ec23867aaf365f7032d7368c\"" Mar 17 17:56:44.804753 kubelet[2570]: E0317 17:56:44.804397 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:44.806099 containerd[1486]: time="2025-03-17T17:56:44.806062408Z" level=info msg="CreateContainer within sandbox \"843bc9ef78e257b2a9498b8856dbf876f085ed92ec23867aaf365f7032d7368c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:56:44.812233 containerd[1486]: time="2025-03-17T17:56:44.812199502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-w4rkn,Uid:d98337f5-1a9c-4237-a931-93606293f5f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea99710aac652964910e2cf83a28fadbc230d83feeccec9206280b26ae967cc6\"" Mar 17 17:56:44.813116 kubelet[2570]: E0317 17:56:44.813087 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:44.814456 containerd[1486]: time="2025-03-17T17:56:44.814434810Z" level=info msg="CreateContainer within sandbox \"ea99710aac652964910e2cf83a28fadbc230d83feeccec9206280b26ae967cc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:56:45.341891 containerd[1486]: time="2025-03-17T17:56:45.341834230Z" level=info msg="CreateContainer within sandbox \"843bc9ef78e257b2a9498b8856dbf876f085ed92ec23867aaf365f7032d7368c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9d25a96ece774d756bfff9f7468831d4860fcda23d341cfcd782c6a33f3e6c7\"" Mar 17 17:56:45.342603 containerd[1486]: time="2025-03-17T17:56:45.342397537Z" level=info msg="StartContainer for \"f9d25a96ece774d756bfff9f7468831d4860fcda23d341cfcd782c6a33f3e6c7\"" Mar 17 17:56:45.375961 systemd[1]: Started cri-containerd-f9d25a96ece774d756bfff9f7468831d4860fcda23d341cfcd782c6a33f3e6c7.scope - libcontainer container f9d25a96ece774d756bfff9f7468831d4860fcda23d341cfcd782c6a33f3e6c7. 
Mar 17 17:56:45.659936 containerd[1486]: time="2025-03-17T17:56:45.659715141Z" level=info msg="CreateContainer within sandbox \"ea99710aac652964910e2cf83a28fadbc230d83feeccec9206280b26ae967cc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0cbceeae5eaee8b6b5173e79234ec3db6053cb5b423b6e900cea170951ddba31\"" Mar 17 17:56:45.659936 containerd[1486]: time="2025-03-17T17:56:45.659738925Z" level=info msg="StartContainer for \"f9d25a96ece774d756bfff9f7468831d4860fcda23d341cfcd782c6a33f3e6c7\" returns successfully" Mar 17 17:56:45.660518 containerd[1486]: time="2025-03-17T17:56:45.660480477Z" level=info msg="StartContainer for \"0cbceeae5eaee8b6b5173e79234ec3db6053cb5b423b6e900cea170951ddba31\"" Mar 17 17:56:45.689955 systemd[1]: Started cri-containerd-0cbceeae5eaee8b6b5173e79234ec3db6053cb5b423b6e900cea170951ddba31.scope - libcontainer container 0cbceeae5eaee8b6b5173e79234ec3db6053cb5b423b6e900cea170951ddba31. Mar 17 17:56:45.872581 containerd[1486]: time="2025-03-17T17:56:45.872442255Z" level=info msg="StartContainer for \"0cbceeae5eaee8b6b5173e79234ec3db6053cb5b423b6e900cea170951ddba31\" returns successfully" Mar 17 17:56:45.875144 kubelet[2570]: E0317 17:56:45.875099 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:45.877417 kubelet[2570]: E0317 17:56:45.877304 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:45.925846 kubelet[2570]: I0317 17:56:45.925547 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-w4rkn" podStartSLOduration=33.925529156 podStartE2EDuration="33.925529156s" podCreationTimestamp="2025-03-17 17:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:45.925260391 +0000 UTC m=+41.499853091" watchObservedRunningTime="2025-03-17 17:56:45.925529156 +0000 UTC m=+41.500121846" Mar 17 17:56:46.276272 kubelet[2570]: I0317 17:56:46.275850 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bxxlb" podStartSLOduration=34.275831915 podStartE2EDuration="34.275831915s" podCreationTimestamp="2025-03-17 17:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:56:46.275322008 +0000 UTC m=+41.849914708" watchObservedRunningTime="2025-03-17 17:56:46.275831915 +0000 UTC m=+41.850424615" Mar 17 17:56:46.378104 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:33004.service - OpenSSH per-connection server daemon (10.0.0.1:33004). Mar 17 17:56:46.425814 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 33004 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:56:46.427647 sshd-session[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:46.432449 systemd-logind[1470]: New session 10 of user core. Mar 17 17:56:46.443975 systemd[1]: Started session-10.scope - Session 10 of User core. 
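The dns.go:153 "Nameserver limits exceeded" warnings that recur throughout this log mean the node's resolv.conf lists more nameservers than the resolver limit of three; kubelet keeps the first three and drops the rest, which is why the applied line shows only 1.1.1.1, 1.0.0.1, and 8.8.8.8. A small sketch of that truncation logic, assuming a plain /etc/resolv.conf (illustrative only, not kubelet's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS; kubelet applies the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```

Trimming the node's resolv.conf to three nameservers would silence the warning; the log shows it repeating on every DNS config sync otherwise.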
Mar 17 17:56:46.567398 sshd[4017]: Connection closed by 10.0.0.1 port 33004 Mar 17 17:56:46.567686 sshd-session[4015]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:46.571914 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:33004.service: Deactivated successfully. Mar 17 17:56:46.574020 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:56:46.574823 systemd-logind[1470]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:56:46.575588 systemd-logind[1470]: Removed session 10. Mar 17 17:56:46.879022 kubelet[2570]: E0317 17:56:46.878992 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:46.879494 kubelet[2570]: E0317 17:56:46.879061 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:47.880312 kubelet[2570]: E0317 17:56:47.880277 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:47.880312 kubelet[2570]: E0317 17:56:47.880321 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:56:51.586494 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:33016.service - OpenSSH per-connection server daemon (10.0.0.1:33016). Mar 17 17:56:51.630276 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 33016 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:56:51.631909 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:51.636239 systemd-logind[1470]: New session 11 of user core. Mar 17 17:56:51.647929 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:56:51.757405 sshd[4037]: Connection closed by 10.0.0.1 port 33016 Mar 17 17:56:51.757824 sshd-session[4035]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:51.761943 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:33016.service: Deactivated successfully. Mar 17 17:56:51.763974 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:56:51.764682 systemd-logind[1470]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:56:51.765494 systemd-logind[1470]: Removed session 11. Mar 17 17:56:56.771856 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:47838.service - OpenSSH per-connection server daemon (10.0.0.1:47838). Mar 17 17:56:56.846968 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 47838 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:56:56.848581 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:56:56.852860 systemd-logind[1470]: New session 12 of user core. Mar 17 17:56:56.859899 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:56:56.980841 sshd[4054]: Connection closed by 10.0.0.1 port 47838 Mar 17 17:56:56.981404 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Mar 17 17:56:56.986351 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:47838.service: Deactivated successfully. Mar 17 17:56:56.989534 systemd[1]: session-12.scope: Deactivated successfully. 
Mar 17 17:56:56.990821 systemd-logind[1470]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:56:56.992049 systemd-logind[1470]: Removed session 12. Mar 17 17:57:01.994565 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:47854.service - OpenSSH per-connection server daemon (10.0.0.1:47854). Mar 17 17:57:02.186570 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 47854 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:02.188387 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:02.192688 systemd-logind[1470]: New session 13 of user core. Mar 17 17:57:02.202903 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:57:02.400643 sshd[4070]: Connection closed by 10.0.0.1 port 47854 Mar 17 17:57:02.400999 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:02.405081 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:47854.service: Deactivated successfully. Mar 17 17:57:02.407347 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:57:02.408120 systemd-logind[1470]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:57:02.408937 systemd-logind[1470]: Removed session 13. Mar 17 17:57:07.414280 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:46922.service - OpenSSH per-connection server daemon (10.0.0.1:46922). Mar 17 17:57:07.461301 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 46922 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:07.463057 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:07.467511 systemd-logind[1470]: New session 14 of user core. Mar 17 17:57:07.478905 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:57:07.588830 sshd[4088]: Connection closed by 10.0.0.1 port 46922 Mar 17 17:57:07.589275 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:07.598113 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:46922.service: Deactivated successfully. Mar 17 17:57:07.600391 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:57:07.602446 systemd-logind[1470]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:57:07.611141 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:46926.service - OpenSSH per-connection server daemon (10.0.0.1:46926). Mar 17 17:57:07.612721 systemd-logind[1470]: Removed session 14. Mar 17 17:57:07.650100 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 46926 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:07.651812 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:07.656552 systemd-logind[1470]: New session 15 of user core. Mar 17 17:57:07.662949 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:57:07.860119 sshd[4104]: Connection closed by 10.0.0.1 port 46926 Mar 17 17:57:07.861738 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:07.872141 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:46926.service: Deactivated successfully. Mar 17 17:57:07.874135 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:57:07.875085 systemd-logind[1470]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:57:07.885063 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:46942.service - OpenSSH per-connection server daemon (10.0.0.1:46942). 
Mar 17 17:57:07.885693 systemd-logind[1470]: Removed session 15. Mar 17 17:57:07.925797 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 46942 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:07.927312 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:07.931657 systemd-logind[1470]: New session 16 of user core. Mar 17 17:57:07.939891 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:57:08.120894 sshd[4117]: Connection closed by 10.0.0.1 port 46942 Mar 17 17:57:08.121236 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:08.125492 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:46942.service: Deactivated successfully. Mar 17 17:57:08.127568 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:57:08.128244 systemd-logind[1470]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:57:08.129377 systemd-logind[1470]: Removed session 16. Mar 17 17:57:13.134883 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:46950.service - OpenSSH per-connection server daemon (10.0.0.1:46950). Mar 17 17:57:13.176344 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 46950 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:13.177685 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:13.181758 systemd-logind[1470]: New session 17 of user core. Mar 17 17:57:13.195945 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:57:13.302562 sshd[4136]: Connection closed by 10.0.0.1 port 46950 Mar 17 17:57:13.302920 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:13.306946 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:46950.service: Deactivated successfully. Mar 17 17:57:13.309289 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:57:13.310024 systemd-logind[1470]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:57:13.310821 systemd-logind[1470]: Removed session 17. Mar 17 17:57:16.763162 kubelet[2570]: E0317 17:57:16.763127 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:57:18.315570 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:42398.service - OpenSSH per-connection server daemon (10.0.0.1:42398). Mar 17 17:57:18.357692 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 42398 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:18.359115 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:18.363033 systemd-logind[1470]: New session 18 of user core. Mar 17 17:57:18.373946 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:57:18.482078 sshd[4151]: Connection closed by 10.0.0.1 port 42398 Mar 17 17:57:18.482412 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:18.486175 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:42398.service: Deactivated successfully. Mar 17 17:57:18.488367 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:57:18.489012 systemd-logind[1470]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:57:18.489902 systemd-logind[1470]: Removed session 18. 
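Every SSH connection in this log authenticates with the same key, logged as "RSA SHA256:sryA1eJf...". That fingerprint format is the unpadded base64 of the SHA-256 of the wire-encoded public key; a sketch that reproduces it for any authorized_keys line, using golang.org/x/crypto/ssh (the key file path is a placeholder):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	raw, err := os.ReadFile("core.pub") // placeholder path to a public key
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Prints e.g. "SHA256:sryA1eJ..." — comparable to the sshd lines above.
	fmt.Println(ssh.FingerprintSHA256(pub))
}
```

Comparing this output against the Accepted-publickey lines is a quick way to confirm which key the repeated core logins are using.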
Mar 17 17:57:23.495864 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:42408.service - OpenSSH per-connection server daemon (10.0.0.1:42408). Mar 17 17:57:23.538808 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 42408 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:23.540244 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:23.544125 systemd-logind[1470]: New session 19 of user core. Mar 17 17:57:23.554885 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:57:23.661977 sshd[4167]: Connection closed by 10.0.0.1 port 42408 Mar 17 17:57:23.662485 sshd-session[4165]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:23.682745 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:42408.service: Deactivated successfully. Mar 17 17:57:23.685159 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:57:23.687407 systemd-logind[1470]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:57:23.694106 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:42410.service - OpenSSH per-connection server daemon (10.0.0.1:42410). Mar 17 17:57:23.695755 systemd-logind[1470]: Removed session 19. Mar 17 17:57:23.734799 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 42410 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:23.736353 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:23.741065 systemd-logind[1470]: New session 20 of user core. Mar 17 17:57:23.757916 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:57:23.934759 sshd[4182]: Connection closed by 10.0.0.1 port 42410 Mar 17 17:57:23.935173 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:23.945363 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:42410.service: Deactivated successfully. Mar 17 17:57:23.947179 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:57:23.949112 systemd-logind[1470]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:57:23.959208 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:34138.service - OpenSSH per-connection server daemon (10.0.0.1:34138). Mar 17 17:57:23.960121 systemd-logind[1470]: Removed session 20. Mar 17 17:57:23.999665 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 34138 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:24.000915 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:24.004986 systemd-logind[1470]: New session 21 of user core. Mar 17 17:57:24.016892 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:57:25.658072 sshd[4196]: Connection closed by 10.0.0.1 port 34138 Mar 17 17:57:25.664324 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:25.680089 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:34138.service: Deactivated successfully. Mar 17 17:57:25.683612 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:57:25.688854 systemd-logind[1470]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:57:25.702431 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:34142.service - OpenSSH per-connection server daemon (10.0.0.1:34142). Mar 17 17:57:25.704484 systemd-logind[1470]: Removed session 21. 
Mar 17 17:57:25.790511 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 34142 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:25.792655 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:25.802724 systemd-logind[1470]: New session 22 of user core. Mar 17 17:57:25.817944 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:57:26.138747 sshd[4217]: Connection closed by 10.0.0.1 port 34142 Mar 17 17:57:26.139121 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:26.152053 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:34142.service: Deactivated successfully. Mar 17 17:57:26.154260 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:57:26.156607 systemd-logind[1470]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:57:26.164350 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:34158.service - OpenSSH per-connection server daemon (10.0.0.1:34158). Mar 17 17:57:26.165647 systemd-logind[1470]: Removed session 22. Mar 17 17:57:26.215821 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 34158 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:26.217741 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:26.223536 systemd-logind[1470]: New session 23 of user core. Mar 17 17:57:26.230935 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:57:26.418725 sshd[4231]: Connection closed by 10.0.0.1 port 34158 Mar 17 17:57:26.418032 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:26.432121 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:34158.service: Deactivated successfully. Mar 17 17:57:26.437233 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:57:26.443427 systemd-logind[1470]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:57:26.450364 systemd-logind[1470]: Removed session 23. Mar 17 17:57:26.763338 kubelet[2570]: E0317 17:57:26.763170 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:57:31.429622 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:34160.service - OpenSSH per-connection server daemon (10.0.0.1:34160). Mar 17 17:57:31.471745 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 34160 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:31.473190 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:31.477201 systemd-logind[1470]: New session 24 of user core. Mar 17 17:57:31.488900 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:57:31.601319 sshd[4247]: Connection closed by 10.0.0.1 port 34160 Mar 17 17:57:31.601747 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:31.606124 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:34160.service: Deactivated successfully. Mar 17 17:57:31.609107 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:57:31.609993 systemd-logind[1470]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:57:31.610960 systemd-logind[1470]: Removed session 24. 
Mar 17 17:57:32.762762 kubelet[2570]: E0317 17:57:32.762720 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:57:33.763212 kubelet[2570]: E0317 17:57:33.763166 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:57:36.614398 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:36278.service - OpenSSH per-connection server daemon (10.0.0.1:36278). Mar 17 17:57:36.655932 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 36278 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:36.657178 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:36.660924 systemd-logind[1470]: New session 25 of user core. Mar 17 17:57:36.668883 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 17:57:36.842702 sshd[4265]: Connection closed by 10.0.0.1 port 36278 Mar 17 17:57:36.843059 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:36.846913 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:36278.service: Deactivated successfully. Mar 17 17:57:36.849234 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:57:36.850188 systemd-logind[1470]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:57:36.851162 systemd-logind[1470]: Removed session 25. Mar 17 17:57:40.763627 kubelet[2570]: E0317 17:57:40.763568 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:57:41.854705 systemd[1]: Started sshd@25-10.0.0.132:22-10.0.0.1:36282.service - OpenSSH per-connection server daemon (10.0.0.1:36282). Mar 17 17:57:41.895788 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 36282 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:41.897029 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:41.901690 systemd-logind[1470]: New session 26 of user core. Mar 17 17:57:41.910892 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 17:57:42.019069 sshd[4280]: Connection closed by 10.0.0.1 port 36282 Mar 17 17:57:42.019449 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:42.023947 systemd[1]: sshd@25-10.0.0.132:22-10.0.0.1:36282.service: Deactivated successfully. Mar 17 17:57:42.026904 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:57:42.027693 systemd-logind[1470]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:57:42.028618 systemd-logind[1470]: Removed session 26. Mar 17 17:57:44.763754 kubelet[2570]: E0317 17:57:44.763664 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:57:47.032692 systemd[1]: Started sshd@26-10.0.0.132:22-10.0.0.1:34502.service - OpenSSH per-connection server daemon (10.0.0.1:34502). 
Mar 17 17:57:47.076009 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 34502 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:47.077326 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:47.081764 systemd-logind[1470]: New session 27 of user core. Mar 17 17:57:47.095175 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 17 17:57:47.214402 sshd[4297]: Connection closed by 10.0.0.1 port 34502 Mar 17 17:57:47.214746 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:47.218659 systemd[1]: sshd@26-10.0.0.132:22-10.0.0.1:34502.service: Deactivated successfully. Mar 17 17:57:47.221130 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:57:47.221938 systemd-logind[1470]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:57:47.222791 systemd-logind[1470]: Removed session 27. Mar 17 17:57:51.762799 kubelet[2570]: E0317 17:57:51.762729 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:57:52.228062 systemd[1]: Started sshd@27-10.0.0.132:22-10.0.0.1:34518.service - OpenSSH per-connection server daemon (10.0.0.1:34518). Mar 17 17:57:52.271155 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 34518 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:52.272579 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:52.276723 systemd-logind[1470]: New session 28 of user core. Mar 17 17:57:52.293913 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 17 17:57:52.428147 sshd[4312]: Connection closed by 10.0.0.1 port 34518 Mar 17 17:57:52.428530 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:52.443673 systemd[1]: sshd@27-10.0.0.132:22-10.0.0.1:34518.service: Deactivated successfully. Mar 17 17:57:52.445662 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 17:57:52.447416 systemd-logind[1470]: Session 28 logged out. Waiting for processes to exit. Mar 17 17:57:52.458095 systemd[1]: Started sshd@28-10.0.0.132:22-10.0.0.1:34528.service - OpenSSH per-connection server daemon (10.0.0.1:34528). Mar 17 17:57:52.459112 systemd-logind[1470]: Removed session 28. Mar 17 17:57:52.497418 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 34528 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:57:52.498716 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:57:52.503047 systemd-logind[1470]: New session 29 of user core. Mar 17 17:57:52.512913 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 17 17:57:53.838127 containerd[1486]: time="2025-03-17T17:57:53.838076105Z" level=info msg="StopContainer for \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\" with timeout 30 (s)" Mar 17 17:57:53.838883 containerd[1486]: time="2025-03-17T17:57:53.838740675Z" level=info msg="Stop container \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\" with signal terminated" Mar 17 17:57:53.851917 systemd[1]: cri-containerd-d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18.scope: Deactivated successfully. 
Mar 17 17:57:53.866299 containerd[1486]: time="2025-03-17T17:57:53.866197494Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:57:53.868635 containerd[1486]: time="2025-03-17T17:57:53.868602033Z" level=info msg="StopContainer for \"2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7\" with timeout 2 (s)" Mar 17 17:57:53.868996 containerd[1486]: time="2025-03-17T17:57:53.868978568Z" level=info msg="Stop container \"2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7\" with signal terminated" Mar 17 17:57:53.876315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18-rootfs.mount: Deactivated successfully. Mar 17 17:57:53.878148 systemd-networkd[1404]: lxc_health: Link DOWN Mar 17 17:57:53.878156 systemd-networkd[1404]: lxc_health: Lost carrier Mar 17 17:57:53.892558 containerd[1486]: time="2025-03-17T17:57:53.892499224Z" level=info msg="shim disconnected" id=d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18 namespace=k8s.io Mar 17 17:57:53.892558 containerd[1486]: time="2025-03-17T17:57:53.892554238Z" level=warning msg="cleaning up after shim disconnected" id=d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18 namespace=k8s.io Mar 17 17:57:53.892558 containerd[1486]: time="2025-03-17T17:57:53.892562414Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:57:53.899815 systemd[1]: cri-containerd-2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7.scope: Deactivated successfully. Mar 17 17:57:53.900238 systemd[1]: cri-containerd-2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7.scope: Consumed 7.086s CPU time, 125.2M memory peak, 148K read from disk, 13.3M written to disk. Mar 17 17:57:53.915091 containerd[1486]: time="2025-03-17T17:57:53.915054019Z" level=info msg="StopContainer for \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\" returns successfully" Mar 17 17:57:53.919447 containerd[1486]: time="2025-03-17T17:57:53.919340015Z" level=info msg="StopPodSandbox for \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\"" Mar 17 17:57:53.919447 containerd[1486]: time="2025-03-17T17:57:53.919380352Z" level=info msg="Container to stop \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:57:53.921916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7-rootfs.mount: Deactivated successfully. Mar 17 17:57:53.922052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b-shm.mount: Deactivated successfully. 
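The sequence above — "StopContainer ... with timeout 30 (s)", "Stop container ... with signal terminated", then the cri-containerd scope deactivating — is the usual SIGTERM-then-SIGKILL escalation. A rough client-side equivalent using the containerd Go client, as a sketch assuming the default socket and the k8s.io namespace (not the CRI plugin's actual code path):

```go
package main

import (
	"context"
	"fmt"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	container, err := client.LoadContainer(ctx,
		"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18")
	if err != nil {
		panic(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		panic(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		panic(err)
	}

	// Ask nicely first, matching the "signal terminated" log line.
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		panic(err)
	}
	select {
	case status := <-exitCh:
		fmt.Println("exited:", status.ExitCode())
	case <-time.After(30 * time.Second): // the logged stop timeout
		_ = task.Kill(ctx, syscall.SIGKILL) // escalate past the deadline
		<-exitCh
	}
}
```

The "failed to reload cni configuration" error logged at the same moment is a side effect of the teardown removing /etc/cni/net.d/05-cilium.conf, which also explains the later "Container runtime network not ready" kubelet error.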
Mar 17 17:57:53.924355 containerd[1486]: time="2025-03-17T17:57:53.924132533Z" level=info msg="shim disconnected" id=2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7 namespace=k8s.io Mar 17 17:57:53.924355 containerd[1486]: time="2025-03-17T17:57:53.924202735Z" level=warning msg="cleaning up after shim disconnected" id=2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7 namespace=k8s.io Mar 17 17:57:53.924355 containerd[1486]: time="2025-03-17T17:57:53.924210229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:57:53.928299 systemd[1]: cri-containerd-8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b.scope: Deactivated successfully. Mar 17 17:57:53.938207 containerd[1486]: time="2025-03-17T17:57:53.938160049Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:57:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:57:53.942003 containerd[1486]: time="2025-03-17T17:57:53.941971907Z" level=info msg="StopContainer for \"2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7\" returns successfully" Mar 17 17:57:53.942408 containerd[1486]: time="2025-03-17T17:57:53.942388547Z" level=info msg="StopPodSandbox for \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\"" Mar 17 17:57:53.942453 containerd[1486]: time="2025-03-17T17:57:53.942413033Z" level=info msg="Container to stop \"6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:57:53.942453 containerd[1486]: time="2025-03-17T17:57:53.942423183Z" level=info msg="Container to stop \"c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:57:53.942453 containerd[1486]: time="2025-03-17T17:57:53.942430917Z" level=info msg="Container to stop \"6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:57:53.942453 containerd[1486]: time="2025-03-17T17:57:53.942438772Z" level=info msg="Container to stop \"2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:57:53.942453 containerd[1486]: time="2025-03-17T17:57:53.942447859Z" level=info msg="Container to stop \"2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:57:53.944536 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2-shm.mount: Deactivated successfully. Mar 17 17:57:53.950987 systemd[1]: cri-containerd-d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2.scope: Deactivated successfully. 
Mar 17 17:57:53.954186 containerd[1486]: time="2025-03-17T17:57:53.954141351Z" level=info msg="shim disconnected" id=8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b namespace=k8s.io Mar 17 17:57:53.954458 containerd[1486]: time="2025-03-17T17:57:53.954323797Z" level=warning msg="cleaning up after shim disconnected" id=8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b namespace=k8s.io Mar 17 17:57:53.954458 containerd[1486]: time="2025-03-17T17:57:53.954336371Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:57:53.972153 containerd[1486]: time="2025-03-17T17:57:53.972117004Z" level=info msg="TearDown network for sandbox \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\" successfully" Mar 17 17:57:53.972153 containerd[1486]: time="2025-03-17T17:57:53.972145608Z" level=info msg="StopPodSandbox for \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\" returns successfully" Mar 17 17:57:53.988912 containerd[1486]: time="2025-03-17T17:57:53.988833579Z" level=info msg="shim disconnected" id=d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2 namespace=k8s.io Mar 17 17:57:53.988912 containerd[1486]: time="2025-03-17T17:57:53.988896799Z" level=warning msg="cleaning up after shim disconnected" id=d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2 namespace=k8s.io Mar 17 17:57:53.988912 containerd[1486]: time="2025-03-17T17:57:53.988909122Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:57:54.003307 containerd[1486]: time="2025-03-17T17:57:54.003259710Z" level=info msg="TearDown network for sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" successfully" Mar 17 17:57:54.003307 containerd[1486]: time="2025-03-17T17:57:54.003284227Z" level=info msg="StopPodSandbox for \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" returns successfully" Mar 17 17:57:54.007424 kubelet[2570]: I0317 17:57:54.007371 2570 scope.go:117] "RemoveContainer" containerID="d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18" Mar 17 17:57:54.012549 containerd[1486]: time="2025-03-17T17:57:54.012506840Z" level=info msg="RemoveContainer for \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\"" Mar 17 17:57:54.016149 containerd[1486]: time="2025-03-17T17:57:54.016112795Z" level=info msg="RemoveContainer for \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\" returns successfully" Mar 17 17:57:54.016365 kubelet[2570]: I0317 17:57:54.016338 2570 scope.go:117] "RemoveContainer" containerID="d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18" Mar 17 17:57:54.016573 containerd[1486]: time="2025-03-17T17:57:54.016529806Z" level=error msg="ContainerStatus for \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\": not found" Mar 17 17:57:54.022893 kubelet[2570]: E0317 17:57:54.022860 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\": not found" containerID="d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18" Mar 17 17:57:54.023063 kubelet[2570]: I0317 17:57:54.022899 2570 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18"} err="failed to get container status \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\": rpc error: code = NotFound desc = an error occurred when try to find container \"d087b5bbc36707b7e9414b914ee8f827ca24194c87d1489d0a28c6290dac2f18\": not found" Mar 17 17:57:54.112106 kubelet[2570]: I0317 17:57:54.111276 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74067c67-6482-4cc5-89c6-4e3d3a48df7c-cilium-config-path\") pod \"74067c67-6482-4cc5-89c6-4e3d3a48df7c\" (UID: \"74067c67-6482-4cc5-89c6-4e3d3a48df7c\") " Mar 17 17:57:54.112106 kubelet[2570]: I0317 17:57:54.111309 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-cgroup\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112106 kubelet[2570]: I0317 17:57:54.111325 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-xtables-lock\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112106 kubelet[2570]: I0317 17:57:54.111339 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-bpf-maps\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112106 kubelet[2570]: I0317 17:57:54.111448 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-hostproc\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112106 kubelet[2570]: I0317 17:57:54.111466 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-etc-cni-netd\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112322 kubelet[2570]: I0317 17:57:54.111480 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cni-path\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112322 kubelet[2570]: I0317 17:57:54.111492 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-host-proc-sys-net\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112322 kubelet[2570]: I0317 17:57:54.111512 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcpj\" (UniqueName: \"kubernetes.io/projected/63cf0a34-d08f-4429-9ae7-9ffc143d0919-kube-api-access-dfcpj\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112322 kubelet[2570]: 
I0317 17:57:54.111393 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.112322 kubelet[2570]: I0317 17:57:54.111528 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-host-proc-sys-kernel\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112322 kubelet[2570]: I0317 17:57:54.111543 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-run\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112458 kubelet[2570]: I0317 17:57:54.111561 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/63cf0a34-d08f-4429-9ae7-9ffc143d0919-hubble-tls\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112458 kubelet[2570]: I0317 17:57:54.111575 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-config-path\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112458 kubelet[2570]: I0317 17:57:54.111588 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-lib-modules\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112458 kubelet[2570]: I0317 17:57:54.111602 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/63cf0a34-d08f-4429-9ae7-9ffc143d0919-clustermesh-secrets\") pod \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\" (UID: \"63cf0a34-d08f-4429-9ae7-9ffc143d0919\") " Mar 17 17:57:54.112458 kubelet[2570]: I0317 17:57:54.111618 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7dm9\" (UniqueName: \"kubernetes.io/projected/74067c67-6482-4cc5-89c6-4e3d3a48df7c-kube-api-access-w7dm9\") pod \"74067c67-6482-4cc5-89c6-4e3d3a48df7c\" (UID: \"74067c67-6482-4cc5-89c6-4e3d3a48df7c\") " Mar 17 17:57:54.112458 kubelet[2570]: I0317 17:57:54.111646 2570 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.115079 kubelet[2570]: I0317 17:57:54.111399 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.115079 kubelet[2570]: I0317 17:57:54.111410 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.115079 kubelet[2570]: I0317 17:57:54.111517 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-hostproc" (OuterVolumeSpecName: "hostproc") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.115079 kubelet[2570]: I0317 17:57:54.111559 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.115204 kubelet[2570]: I0317 17:57:54.111570 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.115204 kubelet[2570]: I0317 17:57:54.111578 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cni-path" (OuterVolumeSpecName: "cni-path") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.115204 kubelet[2570]: I0317 17:57:54.111592 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.115204 kubelet[2570]: I0317 17:57:54.114975 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.115204 kubelet[2570]: I0317 17:57:54.114985 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:57:54.115323 kubelet[2570]: I0317 17:57:54.115043 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74067c67-6482-4cc5-89c6-4e3d3a48df7c-kube-api-access-w7dm9" (OuterVolumeSpecName: "kube-api-access-w7dm9") pod "74067c67-6482-4cc5-89c6-4e3d3a48df7c" (UID: "74067c67-6482-4cc5-89c6-4e3d3a48df7c"). InnerVolumeSpecName "kube-api-access-w7dm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:57:54.115323 kubelet[2570]: I0317 17:57:54.115046 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63cf0a34-d08f-4429-9ae7-9ffc143d0919-kube-api-access-dfcpj" (OuterVolumeSpecName: "kube-api-access-dfcpj") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "kube-api-access-dfcpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:57:54.116014 kubelet[2570]: I0317 17:57:54.115969 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74067c67-6482-4cc5-89c6-4e3d3a48df7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74067c67-6482-4cc5-89c6-4e3d3a48df7c" (UID: "74067c67-6482-4cc5-89c6-4e3d3a48df7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:57:54.116080 kubelet[2570]: I0317 17:57:54.116046 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:57:54.117425 kubelet[2570]: I0317 17:57:54.117400 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63cf0a34-d08f-4429-9ae7-9ffc143d0919-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:57:54.118848 kubelet[2570]: I0317 17:57:54.118832 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63cf0a34-d08f-4429-9ae7-9ffc143d0919-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "63cf0a34-d08f-4429-9ae7-9ffc143d0919" (UID: "63cf0a34-d08f-4429-9ae7-9ffc143d0919"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:57:54.212158 kubelet[2570]: I0317 17:57:54.212111 2570 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212158 kubelet[2570]: I0317 17:57:54.212142 2570 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212158 kubelet[2570]: I0317 17:57:54.212152 2570 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212158 kubelet[2570]: I0317 17:57:54.212162 2570 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dfcpj\" (UniqueName: \"kubernetes.io/projected/63cf0a34-d08f-4429-9ae7-9ffc143d0919-kube-api-access-dfcpj\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212158 kubelet[2570]: I0317 17:57:54.212172 2570 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212386 kubelet[2570]: I0317 17:57:54.212179 2570 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212386 kubelet[2570]: I0317 17:57:54.212187 2570 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212386 kubelet[2570]: I0317 17:57:54.212194 2570 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212386 kubelet[2570]: I0317 17:57:54.212201 2570 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/63cf0a34-d08f-4429-9ae7-9ffc143d0919-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212386 kubelet[2570]: I0317 17:57:54.212208 2570 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212386 kubelet[2570]: I0317 17:57:54.212215 2570 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212386 kubelet[2570]: I0317 17:57:54.212222 2570 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/63cf0a34-d08f-4429-9ae7-9ffc143d0919-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212386 kubelet[2570]: I0317 17:57:54.212232 2570 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w7dm9\" (UniqueName: \"kubernetes.io/projected/74067c67-6482-4cc5-89c6-4e3d3a48df7c-kube-api-access-w7dm9\") on node 
\"localhost\" DevicePath \"\"" Mar 17 17:57:54.212558 kubelet[2570]: I0317 17:57:54.212239 2570 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74067c67-6482-4cc5-89c6-4e3d3a48df7c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.212558 kubelet[2570]: I0317 17:57:54.212248 2570 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/63cf0a34-d08f-4429-9ae7-9ffc143d0919-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 17:57:54.315036 systemd[1]: Removed slice kubepods-besteffort-pod74067c67_6482_4cc5_89c6_4e3d3a48df7c.slice - libcontainer container kubepods-besteffort-pod74067c67_6482_4cc5_89c6_4e3d3a48df7c.slice. Mar 17 17:57:54.765117 kubelet[2570]: I0317 17:57:54.765084 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74067c67-6482-4cc5-89c6-4e3d3a48df7c" path="/var/lib/kubelet/pods/74067c67-6482-4cc5-89c6-4e3d3a48df7c/volumes" Mar 17 17:57:54.770270 systemd[1]: Removed slice kubepods-burstable-pod63cf0a34_d08f_4429_9ae7_9ffc143d0919.slice - libcontainer container kubepods-burstable-pod63cf0a34_d08f_4429_9ae7_9ffc143d0919.slice. Mar 17 17:57:54.770377 systemd[1]: kubepods-burstable-pod63cf0a34_d08f_4429_9ae7_9ffc143d0919.slice: Consumed 7.194s CPU time, 125.6M memory peak, 168K read from disk, 13.3M written to disk. Mar 17 17:57:54.817249 kubelet[2570]: E0317 17:57:54.817201 2570 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:57:54.843051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2-rootfs.mount: Deactivated successfully. Mar 17 17:57:54.843171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b-rootfs.mount: Deactivated successfully. Mar 17 17:57:54.843247 systemd[1]: var-lib-kubelet-pods-74067c67\x2d6482\x2d4cc5\x2d89c6\x2d4e3d3a48df7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7dm9.mount: Deactivated successfully. Mar 17 17:57:54.843331 systemd[1]: var-lib-kubelet-pods-63cf0a34\x2dd08f\x2d4429\x2d9ae7\x2d9ffc143d0919-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddfcpj.mount: Deactivated successfully. Mar 17 17:57:54.843414 systemd[1]: var-lib-kubelet-pods-63cf0a34\x2dd08f\x2d4429\x2d9ae7\x2d9ffc143d0919-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:57:54.843489 systemd[1]: var-lib-kubelet-pods-63cf0a34\x2dd08f\x2d4429\x2d9ae7\x2d9ffc143d0919-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 17:57:55.012161 kubelet[2570]: I0317 17:57:55.012132 2570 scope.go:117] "RemoveContainer" containerID="2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7" Mar 17 17:57:55.013065 containerd[1486]: time="2025-03-17T17:57:55.013031942Z" level=info msg="RemoveContainer for \"2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7\"" Mar 17 17:57:55.020641 containerd[1486]: time="2025-03-17T17:57:55.020506699Z" level=info msg="RemoveContainer for \"2334945aa0c1150fc0806f31b5a1f1a96e6a90612373f77c9f08a31c3e6ee7f7\" returns successfully" Mar 17 17:57:55.020970 kubelet[2570]: I0317 17:57:55.020795 2570 scope.go:117] "RemoveContainer" containerID="6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d" Mar 17 17:57:55.022484 containerd[1486]: time="2025-03-17T17:57:55.022447787Z" level=info msg="RemoveContainer for \"6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d\"" Mar 17 17:57:55.026427 containerd[1486]: time="2025-03-17T17:57:55.026382956Z" level=info msg="RemoveContainer for \"6976755c9d32edbeaf87bb54f54d5cc09b99048b31547d598fe33a14cfd1466d\" returns successfully" Mar 17 17:57:55.026884 kubelet[2570]: I0317 17:57:55.026764 2570 scope.go:117] "RemoveContainer" containerID="2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3" Mar 17 17:57:55.028371 containerd[1486]: time="2025-03-17T17:57:55.028332119Z" level=info msg="RemoveContainer for \"2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3\"" Mar 17 17:57:55.041345 containerd[1486]: time="2025-03-17T17:57:55.041300016Z" level=info msg="RemoveContainer for \"2247abfd2de626cff5b89e1edcfead868565500a7530898de544e5b83bb2f9b3\" returns successfully" Mar 17 17:57:55.041559 kubelet[2570]: I0317 17:57:55.041526 2570 scope.go:117] "RemoveContainer" containerID="6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04" Mar 17 17:57:55.042818 containerd[1486]: time="2025-03-17T17:57:55.042573088Z" level=info msg="RemoveContainer for \"6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04\"" Mar 17 17:57:55.045973 containerd[1486]: time="2025-03-17T17:57:55.045949086Z" level=info msg="RemoveContainer for \"6bc64f5a2dc95b41548a471cafa71c3966d74ebaeef6e6bc386f0cb0f233aa04\" returns successfully" Mar 17 17:57:55.046190 kubelet[2570]: I0317 17:57:55.046089 2570 scope.go:117] "RemoveContainer" containerID="c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78" Mar 17 17:57:55.046888 containerd[1486]: time="2025-03-17T17:57:55.046865694Z" level=info msg="RemoveContainer for \"c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78\"" Mar 17 17:57:55.049927 containerd[1486]: time="2025-03-17T17:57:55.049891889Z" level=info msg="RemoveContainer for \"c8b141c236ca740bc0e151d9478df481aa5f92f865803a823879928d21691a78\" returns successfully" Mar 17 17:57:55.804835 sshd[4327]: Connection closed by 10.0.0.1 port 34528 Mar 17 17:57:55.805214 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Mar 17 17:57:55.814740 systemd[1]: sshd@28-10.0.0.132:22-10.0.0.1:34528.service: Deactivated successfully. Mar 17 17:57:55.816740 systemd[1]: session-29.scope: Deactivated successfully. Mar 17 17:57:55.818341 systemd-logind[1470]: Session 29 logged out. Waiting for processes to exit. Mar 17 17:57:55.829006 systemd[1]: Started sshd@29-10.0.0.132:22-10.0.0.1:33244.service - OpenSSH per-connection server daemon (10.0.0.1:33244). Mar 17 17:57:55.829926 systemd-logind[1470]: Removed session 29. 
Mar 17 17:57:55.869711 sshd[4487]: Accepted publickey for core from 10.0.0.1 port 33244 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:57:55.871007 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:55.875254 systemd-logind[1470]: New session 30 of user core.
Mar 17 17:57:55.883900 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 17 17:57:56.249561 sshd[4490]: Connection closed by 10.0.0.1 port 33244
Mar 17 17:57:56.250991 sshd-session[4487]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:56.262839 kubelet[2570]: E0317 17:57:56.260525 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63cf0a34-d08f-4429-9ae7-9ffc143d0919" containerName="mount-cgroup"
Mar 17 17:57:56.262839 kubelet[2570]: E0317 17:57:56.260557 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63cf0a34-d08f-4429-9ae7-9ffc143d0919" containerName="mount-bpf-fs"
Mar 17 17:57:56.262839 kubelet[2570]: E0317 17:57:56.260566 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63cf0a34-d08f-4429-9ae7-9ffc143d0919" containerName="clean-cilium-state"
Mar 17 17:57:56.262839 kubelet[2570]: E0317 17:57:56.260573 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74067c67-6482-4cc5-89c6-4e3d3a48df7c" containerName="cilium-operator"
Mar 17 17:57:56.262839 kubelet[2570]: E0317 17:57:56.260581 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63cf0a34-d08f-4429-9ae7-9ffc143d0919" containerName="apply-sysctl-overwrites"
Mar 17 17:57:56.262839 kubelet[2570]: E0317 17:57:56.260589 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63cf0a34-d08f-4429-9ae7-9ffc143d0919" containerName="cilium-agent"
Mar 17 17:57:56.262839 kubelet[2570]: I0317 17:57:56.260617 2570 memory_manager.go:354] "RemoveStaleState removing state" podUID="63cf0a34-d08f-4429-9ae7-9ffc143d0919" containerName="cilium-agent"
Mar 17 17:57:56.262839 kubelet[2570]: I0317 17:57:56.260628 2570 memory_manager.go:354] "RemoveStaleState removing state" podUID="74067c67-6482-4cc5-89c6-4e3d3a48df7c" containerName="cilium-operator"
Mar 17 17:57:56.267458 systemd-logind[1470]: Session 30 logged out. Waiting for processes to exit.
Mar 17 17:57:56.279117 systemd[1]: Started sshd@30-10.0.0.132:22-10.0.0.1:33256.service - OpenSSH per-connection server daemon (10.0.0.1:33256).
Mar 17 17:57:56.280055 systemd[1]: sshd@29-10.0.0.132:22-10.0.0.1:33244.service: Deactivated successfully.
Mar 17 17:57:56.284248 systemd[1]: session-30.scope: Deactivated successfully.
Mar 17 17:57:56.292921 systemd-logind[1470]: Removed session 30.
Mar 17 17:57:56.301065 systemd[1]: Created slice kubepods-burstable-pod194f3bd0_aa1c_4084_8a59_5f8b7ca29c2b.slice - libcontainer container kubepods-burstable-pod194f3bd0_aa1c_4084_8a59_5f8b7ca29c2b.slice.
Mar 17 17:57:56.328134 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 33256 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:57:56.329626 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:56.333937 systemd-logind[1470]: New session 31 of user core.
Mar 17 17:57:56.344900 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 17 17:57:56.395450 sshd[4504]: Connection closed by 10.0.0.1 port 33256
Mar 17 17:57:56.395823 sshd-session[4499]: pam_unix(sshd:session): session closed for user core
Mar 17 17:57:56.417556 systemd[1]: sshd@30-10.0.0.132:22-10.0.0.1:33256.service: Deactivated successfully.
Mar 17 17:57:56.419548 systemd[1]: session-31.scope: Deactivated successfully.
Mar 17 17:57:56.421198 systemd-logind[1470]: Session 31 logged out. Waiting for processes to exit.
Mar 17 17:57:56.426608 kubelet[2570]: I0317 17:57:56.426567 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj7j9\" (UniqueName: \"kubernetes.io/projected/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-kube-api-access-qj7j9\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.426608 kubelet[2570]: I0317 17:57:56.426603 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-cilium-run\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.426841 kubelet[2570]: I0317 17:57:56.426625 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-cilium-cgroup\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.426841 kubelet[2570]: I0317 17:57:56.426645 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-clustermesh-secrets\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.426841 kubelet[2570]: I0317 17:57:56.426675 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-cni-path\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.426841 kubelet[2570]: I0317 17:57:56.426694 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-etc-cni-netd\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.426841 kubelet[2570]: I0317 17:57:56.426708 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-lib-modules\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.426841 kubelet[2570]: I0317 17:57:56.426725 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-host-proc-sys-net\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.427013 kubelet[2570]: I0317 17:57:56.426757 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-xtables-lock\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.427013 kubelet[2570]: I0317 17:57:56.426804 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-host-proc-sys-kernel\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.427013 kubelet[2570]: I0317 17:57:56.426828 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-bpf-maps\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.427013 kubelet[2570]: I0317 17:57:56.426851 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-hostproc\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.427013 kubelet[2570]: I0317 17:57:56.426889 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-cilium-ipsec-secrets\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.427013 kubelet[2570]: I0317 17:57:56.426915 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-hubble-tls\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.427142 kubelet[2570]: I0317 17:57:56.426934 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b-cilium-config-path\") pod \"cilium-2smr5\" (UID: \"194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b\") " pod="kube-system/cilium-2smr5"
Mar 17 17:57:56.428057 systemd[1]: Started sshd@31-10.0.0.132:22-10.0.0.1:33268.service - OpenSSH per-connection server daemon (10.0.0.1:33268).
Mar 17 17:57:56.429220 systemd-logind[1470]: Removed session 31.
Mar 17 17:57:56.465880 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 33268 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:57:56.467172 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:57:56.472594 systemd-logind[1470]: New session 32 of user core.
Mar 17 17:57:56.480909 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 17 17:57:56.605113 kubelet[2570]: E0317 17:57:56.604993 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:57:56.605757 containerd[1486]: time="2025-03-17T17:57:56.605493599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2smr5,Uid:194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b,Namespace:kube-system,Attempt:0,}"
Mar 17 17:57:56.625296 containerd[1486]: time="2025-03-17T17:57:56.625226728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:57:56.625296 containerd[1486]: time="2025-03-17T17:57:56.625266684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:57:56.625296 containerd[1486]: time="2025-03-17T17:57:56.625276373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:56.625554 containerd[1486]: time="2025-03-17T17:57:56.625333151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:57:56.648926 systemd[1]: Started cri-containerd-744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443.scope - libcontainer container 744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443.
Mar 17 17:57:56.672014 containerd[1486]: time="2025-03-17T17:57:56.671971500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2smr5,Uid:194f3bd0-aa1c-4084-8a59-5f8b7ca29c2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\""
Mar 17 17:57:56.672558 kubelet[2570]: E0317 17:57:56.672538 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:57:56.674784 containerd[1486]: time="2025-03-17T17:57:56.674735237Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:57:56.687981 containerd[1486]: time="2025-03-17T17:57:56.687926060Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d83b4f1380dfdb5489454b8a14b302369981ea6e863b14bdbb2ad5995b6d106\""
Mar 17 17:57:56.688325 containerd[1486]: time="2025-03-17T17:57:56.688298646Z" level=info msg="StartContainer for \"6d83b4f1380dfdb5489454b8a14b302369981ea6e863b14bdbb2ad5995b6d106\""
Mar 17 17:57:56.716897 systemd[1]: Started cri-containerd-6d83b4f1380dfdb5489454b8a14b302369981ea6e863b14bdbb2ad5995b6d106.scope - libcontainer container 6d83b4f1380dfdb5489454b8a14b302369981ea6e863b14bdbb2ad5995b6d106.
Mar 17 17:57:56.742039 containerd[1486]: time="2025-03-17T17:57:56.741985621Z" level=info msg="StartContainer for \"6d83b4f1380dfdb5489454b8a14b302369981ea6e863b14bdbb2ad5995b6d106\" returns successfully"
Mar 17 17:57:56.750397 systemd[1]: cri-containerd-6d83b4f1380dfdb5489454b8a14b302369981ea6e863b14bdbb2ad5995b6d106.scope: Deactivated successfully.
Mar 17 17:57:56.764973 kubelet[2570]: I0317 17:57:56.764942 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63cf0a34-d08f-4429-9ae7-9ffc143d0919" path="/var/lib/kubelet/pods/63cf0a34-d08f-4429-9ae7-9ffc143d0919/volumes"
Mar 17 17:57:56.779201 containerd[1486]: time="2025-03-17T17:57:56.779014904Z" level=info msg="shim disconnected" id=6d83b4f1380dfdb5489454b8a14b302369981ea6e863b14bdbb2ad5995b6d106 namespace=k8s.io
Mar 17 17:57:56.779201 containerd[1486]: time="2025-03-17T17:57:56.779069086Z" level=warning msg="cleaning up after shim disconnected" id=6d83b4f1380dfdb5489454b8a14b302369981ea6e863b14bdbb2ad5995b6d106 namespace=k8s.io
Mar 17 17:57:56.779201 containerd[1486]: time="2025-03-17T17:57:56.779079285Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:57:56.876399 kubelet[2570]: I0317 17:57:56.876359 2570 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:57:56Z","lastTransitionTime":"2025-03-17T17:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 17:57:57.017014 kubelet[2570]: E0317 17:57:57.016982 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:57:57.018759 containerd[1486]: time="2025-03-17T17:57:57.018628652Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:57:57.032322 containerd[1486]: time="2025-03-17T17:57:57.032257821Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"48e062ceb17fab40d08f136e057ac06a5c904d254078caef52d0e7543e5d8942\""
Mar 17 17:57:57.032763 containerd[1486]: time="2025-03-17T17:57:57.032732911Z" level=info msg="StartContainer for \"48e062ceb17fab40d08f136e057ac06a5c904d254078caef52d0e7543e5d8942\""
Mar 17 17:57:57.058912 systemd[1]: Started cri-containerd-48e062ceb17fab40d08f136e057ac06a5c904d254078caef52d0e7543e5d8942.scope - libcontainer container 48e062ceb17fab40d08f136e057ac06a5c904d254078caef52d0e7543e5d8942.
Mar 17 17:57:57.082794 containerd[1486]: time="2025-03-17T17:57:57.082741005Z" level=info msg="StartContainer for \"48e062ceb17fab40d08f136e057ac06a5c904d254078caef52d0e7543e5d8942\" returns successfully"
Mar 17 17:57:57.089684 systemd[1]: cri-containerd-48e062ceb17fab40d08f136e057ac06a5c904d254078caef52d0e7543e5d8942.scope: Deactivated successfully.
Mar 17 17:57:57.114653 containerd[1486]: time="2025-03-17T17:57:57.114588466Z" level=info msg="shim disconnected" id=48e062ceb17fab40d08f136e057ac06a5c904d254078caef52d0e7543e5d8942 namespace=k8s.io
Mar 17 17:57:57.114653 containerd[1486]: time="2025-03-17T17:57:57.114647769Z" level=warning msg="cleaning up after shim disconnected" id=48e062ceb17fab40d08f136e057ac06a5c904d254078caef52d0e7543e5d8942 namespace=k8s.io
Mar 17 17:57:57.114653 containerd[1486]: time="2025-03-17T17:57:57.114656676Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:57:58.019795 kubelet[2570]: E0317 17:57:58.019742 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:57:58.021057 containerd[1486]: time="2025-03-17T17:57:58.021023060Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:57:58.038326 containerd[1486]: time="2025-03-17T17:57:58.038282759Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"67e9be64057cfedc9fa661d345bc5df29384da754fb5bdd9f29bd91b411141c2\""
Mar 17 17:57:58.039907 containerd[1486]: time="2025-03-17T17:57:58.038662017Z" level=info msg="StartContainer for \"67e9be64057cfedc9fa661d345bc5df29384da754fb5bdd9f29bd91b411141c2\""
Mar 17 17:57:58.067905 systemd[1]: Started cri-containerd-67e9be64057cfedc9fa661d345bc5df29384da754fb5bdd9f29bd91b411141c2.scope - libcontainer container 67e9be64057cfedc9fa661d345bc5df29384da754fb5bdd9f29bd91b411141c2.
Mar 17 17:57:58.099381 containerd[1486]: time="2025-03-17T17:57:58.099340091Z" level=info msg="StartContainer for \"67e9be64057cfedc9fa661d345bc5df29384da754fb5bdd9f29bd91b411141c2\" returns successfully"
Mar 17 17:57:58.100804 systemd[1]: cri-containerd-67e9be64057cfedc9fa661d345bc5df29384da754fb5bdd9f29bd91b411141c2.scope: Deactivated successfully.
Mar 17 17:57:58.124458 containerd[1486]: time="2025-03-17T17:57:58.124382502Z" level=info msg="shim disconnected" id=67e9be64057cfedc9fa661d345bc5df29384da754fb5bdd9f29bd91b411141c2 namespace=k8s.io
Mar 17 17:57:58.124458 containerd[1486]: time="2025-03-17T17:57:58.124441013Z" level=warning msg="cleaning up after shim disconnected" id=67e9be64057cfedc9fa661d345bc5df29384da754fb5bdd9f29bd91b411141c2 namespace=k8s.io
Mar 17 17:57:58.124458 containerd[1486]: time="2025-03-17T17:57:58.124449879Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:57:58.533197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67e9be64057cfedc9fa661d345bc5df29384da754fb5bdd9f29bd91b411141c2-rootfs.mount: Deactivated successfully.
Mar 17 17:57:59.023539 kubelet[2570]: E0317 17:57:59.023501 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:57:59.025803 containerd[1486]: time="2025-03-17T17:57:59.025723475Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:57:59.040392 containerd[1486]: time="2025-03-17T17:57:59.040348490Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cfb2442184a44228e898ef76466d74deb06eb37730e4177f314f84cc6f75023f\""
Mar 17 17:57:59.040824 containerd[1486]: time="2025-03-17T17:57:59.040803251Z" level=info msg="StartContainer for \"cfb2442184a44228e898ef76466d74deb06eb37730e4177f314f84cc6f75023f\""
Mar 17 17:57:59.071899 systemd[1]: Started cri-containerd-cfb2442184a44228e898ef76466d74deb06eb37730e4177f314f84cc6f75023f.scope - libcontainer container cfb2442184a44228e898ef76466d74deb06eb37730e4177f314f84cc6f75023f.
Mar 17 17:57:59.093742 systemd[1]: cri-containerd-cfb2442184a44228e898ef76466d74deb06eb37730e4177f314f84cc6f75023f.scope: Deactivated successfully.
Mar 17 17:57:59.221529 containerd[1486]: time="2025-03-17T17:57:59.221474584Z" level=info msg="StartContainer for \"cfb2442184a44228e898ef76466d74deb06eb37730e4177f314f84cc6f75023f\" returns successfully"
Mar 17 17:57:59.249186 containerd[1486]: time="2025-03-17T17:57:59.249123981Z" level=info msg="shim disconnected" id=cfb2442184a44228e898ef76466d74deb06eb37730e4177f314f84cc6f75023f namespace=k8s.io
Mar 17 17:57:59.249186 containerd[1486]: time="2025-03-17T17:57:59.249180569Z" level=warning msg="cleaning up after shim disconnected" id=cfb2442184a44228e898ef76466d74deb06eb37730e4177f314f84cc6f75023f namespace=k8s.io
Mar 17 17:57:59.249186 containerd[1486]: time="2025-03-17T17:57:59.249191831Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:57:59.533377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfb2442184a44228e898ef76466d74deb06eb37730e4177f314f84cc6f75023f-rootfs.mount: Deactivated successfully.
Mar 17 17:57:59.818428 kubelet[2570]: E0317 17:57:59.818324 2570 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:58:00.027568 kubelet[2570]: E0317 17:58:00.027531 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:58:00.029388 containerd[1486]: time="2025-03-17T17:58:00.029072797Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:58:00.195084 containerd[1486]: time="2025-03-17T17:58:00.195029685Z" level=info msg="CreateContainer within sandbox \"744f128f2d41e5daf92a795511e222deacd791f99ae8017371091a4cbb30c443\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ef9ef32601da2329a2ee41ec2803cd6af11cd5f26902beb733605f9fa5e8925\""
Mar 17 17:58:00.195630 containerd[1486]: time="2025-03-17T17:58:00.195595617Z" level=info msg="StartContainer for \"3ef9ef32601da2329a2ee41ec2803cd6af11cd5f26902beb733605f9fa5e8925\""
Mar 17 17:58:00.222917 systemd[1]: Started cri-containerd-3ef9ef32601da2329a2ee41ec2803cd6af11cd5f26902beb733605f9fa5e8925.scope - libcontainer container 3ef9ef32601da2329a2ee41ec2803cd6af11cd5f26902beb733605f9fa5e8925.
Mar 17 17:58:00.367728 containerd[1486]: time="2025-03-17T17:58:00.367670027Z" level=info msg="StartContainer for \"3ef9ef32601da2329a2ee41ec2803cd6af11cd5f26902beb733605f9fa5e8925\" returns successfully"
Mar 17 17:58:00.667806 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 17:58:01.031838 kubelet[2570]: E0317 17:58:01.031684 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:58:01.050802 kubelet[2570]: I0317 17:58:01.050681 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2smr5" podStartSLOduration=5.050662369 podStartE2EDuration="5.050662369s" podCreationTimestamp="2025-03-17 17:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:58:01.050659944 +0000 UTC m=+116.625252644" watchObservedRunningTime="2025-03-17 17:58:01.050662369 +0000 UTC m=+116.625255069"
Mar 17 17:58:02.606272 kubelet[2570]: E0317 17:58:02.606230 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:58:03.662744 systemd-networkd[1404]: lxc_health: Link UP
Mar 17 17:58:03.669319 systemd-networkd[1404]: lxc_health: Gained carrier
Mar 17 17:58:04.606654 kubelet[2570]: E0317 17:58:04.606584 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:58:04.753021 containerd[1486]: time="2025-03-17T17:58:04.752975437Z" level=info msg="StopPodSandbox for \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\""
Mar 17 17:58:04.753503 containerd[1486]: time="2025-03-17T17:58:04.753063804Z" level=info msg="TearDown network for sandbox \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\" successfully"
Mar 17 17:58:04.753503 containerd[1486]: time="2025-03-17T17:58:04.753074254Z" level=info msg="StopPodSandbox for \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\" returns successfully"
Mar 17 17:58:04.753503 containerd[1486]: time="2025-03-17T17:58:04.753412865Z" level=info msg="RemovePodSandbox for \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\""
Mar 17 17:58:04.753503 containerd[1486]: time="2025-03-17T17:58:04.753433764Z" level=info msg="Forcibly stopping sandbox \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\""
Mar 17 17:58:04.770428 containerd[1486]: time="2025-03-17T17:58:04.753475824Z" level=info msg="TearDown network for sandbox \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\" successfully"
Mar 17 17:58:04.827050 containerd[1486]: time="2025-03-17T17:58:04.826994474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:58:04.827319 containerd[1486]: time="2025-03-17T17:58:04.827297878Z" level=info msg="RemovePodSandbox \"8fc59a2af31d5d79bdfb2a89ce4cc3e07674ec6860fd8792b8a393e09fa4af3b\" returns successfully"
Mar 17 17:58:04.827978 containerd[1486]: time="2025-03-17T17:58:04.827939382Z" level=info msg="StopPodSandbox for \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\""
Mar 17 17:58:04.828058 containerd[1486]: time="2025-03-17T17:58:04.828038901Z" level=info msg="TearDown network for sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" successfully"
Mar 17 17:58:04.828178 containerd[1486]: time="2025-03-17T17:58:04.828055392Z" level=info msg="StopPodSandbox for \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" returns successfully"
Mar 17 17:58:04.828442 containerd[1486]: time="2025-03-17T17:58:04.828415433Z" level=info msg="RemovePodSandbox for \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\""
Mar 17 17:58:04.828516 containerd[1486]: time="2025-03-17T17:58:04.828438567Z" level=info msg="Forcibly stopping sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\""
Mar 17 17:58:04.828616 containerd[1486]: time="2025-03-17T17:58:04.828527435Z" level=info msg="TearDown network for sandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" successfully"
Mar 17 17:58:04.891052 containerd[1486]: time="2025-03-17T17:58:04.890996451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:58:04.891221 containerd[1486]: time="2025-03-17T17:58:04.891061093Z" level=info msg="RemovePodSandbox \"d4ba5cce581d302e22fb2891ad62a9d81a644ff12adbcdb9f8195589e882e1d2\" returns successfully"
Mar 17 17:58:04.906963 systemd-networkd[1404]: lxc_health: Gained IPv6LL
Mar 17 17:58:05.038364 kubelet[2570]: E0317 17:58:05.038318 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:58:06.040285 kubelet[2570]: E0317 17:58:06.040234 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:58:09.457535 sshd[4513]: Connection closed by 10.0.0.1 port 33268
Mar 17 17:58:09.457980 sshd-session[4510]: pam_unix(sshd:session): session closed for user core
Mar 17 17:58:09.461573 systemd[1]: sshd@31-10.0.0.132:22-10.0.0.1:33268.service: Deactivated successfully.
Mar 17 17:58:09.463524 systemd[1]: session-32.scope: Deactivated successfully.
Mar 17 17:58:09.464212 systemd-logind[1470]: Session 32 logged out. Waiting for processes to exit.
Mar 17 17:58:09.465013 systemd-logind[1470]: Removed session 32.