Jan 13 20:38:14.907865 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:38:14.907886 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:38:14.907896 kernel: BIOS-provided physical RAM map:
Jan 13 20:38:14.907903 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 13 20:38:14.907909 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 13 20:38:14.907915 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 13 20:38:14.907922 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 13 20:38:14.907928 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 13 20:38:14.907934 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 13 20:38:14.907940 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 13 20:38:14.907949 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 13 20:38:14.907956 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 13 20:38:14.907965 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 13 20:38:14.907974 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 13 20:38:14.907984 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 13 20:38:14.907991 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 13 20:38:14.908000 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 13 20:38:14.908007 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 13 20:38:14.908013 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 13 20:38:14.908020 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 13 20:38:14.908026 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 13 20:38:14.908033 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 13 20:38:14.908039 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 20:38:14.908045 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:38:14.908052 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 13 20:38:14.908058 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:38:14.908065 kernel: NX (Execute Disable) protection: active
Jan 13 20:38:14.908075 kernel: APIC: Static calls initialized
Jan 13 20:38:14.908083 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 13 20:38:14.908090 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 13 20:38:14.908097 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 13 20:38:14.908103 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 13 20:38:14.908110 kernel: extended physical RAM map:
Jan 13 20:38:14.908119 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 13 20:38:14.908128 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 13 20:38:14.908136 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 13 20:38:14.908144 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 13 20:38:14.908150 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 13 20:38:14.908170 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 13 20:38:14.908178 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 13 20:38:14.908192 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jan 13 20:38:14.908201 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jan 13 20:38:14.908208 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jan 13 20:38:14.908215 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jan 13 20:38:14.908223 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jan 13 20:38:14.908237 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 13 20:38:14.908245 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 13 20:38:14.908254 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 13 20:38:14.908263 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 13 20:38:14.908274 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 13 20:38:14.908285 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 13 20:38:14.908295 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 13 20:38:14.908307 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 13 20:38:14.908318 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 13 20:38:14.908329 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 13 20:38:14.908339 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 13 20:38:14.908350 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 13 20:38:14.908360 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:38:14.908370 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 13 20:38:14.908381 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:38:14.908388 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:38:14.908395 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jan 13 20:38:14.908402 kernel: random: crng init done
Jan 13 20:38:14.908411 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 13 20:38:14.908420 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 13 20:38:14.908432 kernel: secureboot: Secure boot disabled
Jan 13 20:38:14.908441 kernel: SMBIOS 2.8 present.
Jan 13 20:38:14.908448 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 13 20:38:14.908455 kernel: Hypervisor detected: KVM
Jan 13 20:38:14.908462 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:38:14.908469 kernel: kvm-clock: using sched offset of 2836779578 cycles
Jan 13 20:38:14.908477 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:38:14.908486 kernel: tsc: Detected 2794.750 MHz processor
Jan 13 20:38:14.908494 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:38:14.908501 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:38:14.908511 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 13 20:38:14.908523 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 13 20:38:14.908533 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:38:14.908543 kernel: Using GB pages for direct mapping
Jan 13 20:38:14.908552 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:38:14.908563 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 13 20:38:14.908573 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:38:14.908580 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:38:14.908587 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:38:14.908594 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 13 20:38:14.908606 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:38:14.908614 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:38:14.908621 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:38:14.908629 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:38:14.908636 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:38:14.908643 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 13 20:38:14.908650 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 13 20:38:14.908657 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 13 20:38:14.908666 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 13 20:38:14.908673 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 13 20:38:14.908680 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 13 20:38:14.908687 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 13 20:38:14.908694 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 13 20:38:14.908701 kernel: No NUMA configuration found
Jan 13 20:38:14.908708 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 13 20:38:14.908715 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jan 13 20:38:14.908722 kernel: Zone ranges:
Jan 13 20:38:14.908730 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:38:14.908742 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 13 20:38:14.908761 kernel: Normal empty
Jan 13 20:38:14.908768 kernel: Movable zone start for each node
Jan 13 20:38:14.908775 kernel: Early memory node ranges
Jan 13 20:38:14.908782 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 13 20:38:14.908789 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 13 20:38:14.908796 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 13 20:38:14.908803 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 13 20:38:14.908810 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 13 20:38:14.908822 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 13 20:38:14.908830 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jan 13 20:38:14.908837 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jan 13 20:38:14.908844 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 13 20:38:14.908852 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:38:14.908862 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 13 20:38:14.908881 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 13 20:38:14.908891 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:38:14.908898 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 13 20:38:14.908906 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 13 20:38:14.908913 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 13 20:38:14.908921 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 13 20:38:14.908933 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 13 20:38:14.908942 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:38:14.908952 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:38:14.908963 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:38:14.908972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:38:14.908983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:38:14.908992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:38:14.909002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:38:14.909012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:38:14.909023 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:38:14.909033 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:38:14.909042 kernel: TSC deadline timer available
Jan 13 20:38:14.909053 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 20:38:14.909063 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:38:14.909074 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 20:38:14.909084 kernel: kvm-guest: setup PV sched yield
Jan 13 20:38:14.909091 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 13 20:38:14.909099 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:38:14.909106 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:38:14.909114 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 20:38:14.909121 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 20:38:14.909129 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 20:38:14.909136 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 20:38:14.909143 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:38:14.909162 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:38:14.909173 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:38:14.909181 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:38:14.909190 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:38:14.909200 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:38:14.909207 kernel: Fallback order for Node 0: 0
Jan 13 20:38:14.909215 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jan 13 20:38:14.909222 kernel: Policy zone: DMA32
Jan 13 20:38:14.909232 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:38:14.909240 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Jan 13 20:38:14.909247 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:38:14.909254 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:38:14.909262 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:38:14.909269 kernel: Dynamic Preempt: voluntary
Jan 13 20:38:14.909277 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:38:14.909285 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:38:14.909292 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:38:14.909302 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:38:14.909310 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:38:14.909317 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:38:14.909325 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:38:14.909335 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:38:14.909344 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 20:38:14.909355 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:38:14.909365 kernel: Console: colour dummy device 80x25
Jan 13 20:38:14.909377 kernel: printk: console [ttyS0] enabled
Jan 13 20:38:14.909391 kernel: ACPI: Core revision 20230628
Jan 13 20:38:14.909400 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 20:38:14.909407 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:38:14.909415 kernel: x2apic enabled
Jan 13 20:38:14.909422 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:38:14.909430 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 20:38:14.909437 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 20:38:14.909444 kernel: kvm-guest: setup PV IPIs
Jan 13 20:38:14.909452 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:38:14.909462 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:38:14.909469 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 13 20:38:14.909476 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:38:14.909484 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 20:38:14.909491 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 20:38:14.909499 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:38:14.909506 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:38:14.909513 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:38:14.909521 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:38:14.909530 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 20:38:14.909538 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 20:38:14.909546 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:38:14.909555 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:38:14.909564 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 20:38:14.909572 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 20:38:14.909580 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 20:38:14.909587 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:38:14.909597 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:38:14.909604 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:38:14.909611 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:38:14.909620 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 20:38:14.909629 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:38:14.909639 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:38:14.909650 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:38:14.909660 kernel: landlock: Up and running.
Jan 13 20:38:14.909671 kernel: SELinux: Initializing.
Jan 13 20:38:14.909685 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:38:14.909695 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:38:14.909705 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 20:38:14.909715 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:38:14.909723 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:38:14.909730 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:38:14.909738 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 20:38:14.909745 kernel: ... version: 0
Jan 13 20:38:14.909772 kernel: ... bit width: 48
Jan 13 20:38:14.909783 kernel: ... generic registers: 6
Jan 13 20:38:14.909792 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:38:14.909801 kernel: ... max period: 00007fffffffffff
Jan 13 20:38:14.909808 kernel: ... fixed-purpose events: 0
Jan 13 20:38:14.909816 kernel: ... event mask: 000000000000003f
Jan 13 20:38:14.909823 kernel: signal: max sigframe size: 1776
Jan 13 20:38:14.909830 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:38:14.909838 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:38:14.909845 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:38:14.909855 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:38:14.909862 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 20:38:14.909870 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:38:14.909877 kernel: smpboot: Max logical packages: 1
Jan 13 20:38:14.909884 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 13 20:38:14.909892 kernel: devtmpfs: initialized
Jan 13 20:38:14.909901 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:38:14.909911 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 13 20:38:14.909918 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 13 20:38:14.909928 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 13 20:38:14.909936 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 13 20:38:14.909943 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jan 13 20:38:14.909951 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 13 20:38:14.909958 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:38:14.909966 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:38:14.909973 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:38:14.909980 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:38:14.909988 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:38:14.909997 kernel: audit: type=2000 audit(1736800694.787:1): state=initialized audit_enabled=0 res=1
Jan 13 20:38:14.910005 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:38:14.910015 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:38:14.910023 kernel: cpuidle: using governor menu
Jan 13 20:38:14.910030 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:38:14.910038 kernel: dca service started, version 1.12.1
Jan 13 20:38:14.910045 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 13 20:38:14.910053 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:38:14.910060 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:38:14.910072 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:38:14.910081 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:38:14.910088 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:38:14.910095 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:38:14.910103 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:38:14.910110 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:38:14.910117 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:38:14.910125 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:38:14.910132 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:38:14.910142 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:38:14.910149 kernel: ACPI: Interpreter enabled
Jan 13 20:38:14.910167 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:38:14.910174 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:38:14.910184 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:38:14.910194 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:38:14.910204 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 20:38:14.910213 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:38:14.910403 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:38:14.910539 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 20:38:14.910664 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 20:38:14.910674 kernel: PCI host bridge to bus 0000:00
Jan 13 20:38:14.910831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:38:14.910953 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:38:14.911077 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:38:14.911227 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 13 20:38:14.911378 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 13 20:38:14.911511 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 13 20:38:14.911622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:38:14.911788 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 20:38:14.911926 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 20:38:14.912052 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 13 20:38:14.912206 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 13 20:38:14.912338 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 13 20:38:14.912462 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 13 20:38:14.912586 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:38:14.912719 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:38:14.912864 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 13 20:38:14.913006 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 13 20:38:14.913132 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 13 20:38:14.913293 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:38:14.913434 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 13 20:38:14.913571 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 13 20:38:14.913708 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 13 20:38:14.913879 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:38:14.914013 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 13 20:38:14.914139 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 13 20:38:14.914292 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 13 20:38:14.914439 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 13 20:38:14.914583 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 20:38:14.914721 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 20:38:14.914874 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 20:38:14.915013 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 13 20:38:14.915141 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 13 20:38:14.915307 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 20:38:14.915442 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 13 20:38:14.915455 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:38:14.915463 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:38:14.915471 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:38:14.915483 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:38:14.915490 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 20:38:14.915498 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 20:38:14.915505 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 20:38:14.915512 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 20:38:14.915520 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 20:38:14.915528 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 20:38:14.915538 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 20:38:14.915549 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 20:38:14.915562 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 20:38:14.915572 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 20:38:14.915582 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 20:38:14.915592 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 20:38:14.915602 kernel: iommu: Default domain type: Translated
Jan 13 20:38:14.915612 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:38:14.915619 kernel: efivars: Registered efivars operations
Jan 13 20:38:14.915627 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:38:14.915634 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:38:14.915645 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 13 20:38:14.915653 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 13 20:38:14.915662 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jan 13 20:38:14.915672 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jan 13 20:38:14.915682 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 13 20:38:14.915690 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 13 20:38:14.915697 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jan 13 20:38:14.915704 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 13 20:38:14.915867 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 20:38:14.915998 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 20:38:14.916126 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:38:14.916137 kernel: vgaarb: loaded
Jan 13 20:38:14.916144 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 20:38:14.916152 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 20:38:14.916173 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:38:14.916183 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:38:14.916193 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:38:14.916204 kernel: pnp: PnP ACPI init
Jan 13 20:38:14.916349 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 13 20:38:14.916361 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 20:38:14.916369 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:38:14.916377 kernel: NET: Registered PF_INET protocol family
Jan 13 20:38:14.916406 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:38:14.916420 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:38:14.916431 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:38:14.916445 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:38:14.916457 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:38:14.916467 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:38:14.916475 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:38:14.916483 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:38:14.916491 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:38:14.916499 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:38:14.916631 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 13 20:38:14.916781 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 13 20:38:14.916909 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:38:14.917025 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:38:14.917146 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:38:14.917280 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 13 20:38:14.917412 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 13 20:38:14.917561 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 13 20:38:14.917573 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:38:14.917581 kernel: Initialise system trusted keyrings
Jan 13 20:38:14.917593 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:38:14.917601 kernel: Key type asymmetric registered
Jan 13 20:38:14.917609 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:38:14.917617 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:38:14.917625 kernel: io scheduler mq-deadline registered
Jan 13 20:38:14.917633 kernel: io scheduler kyber registered
Jan 13 20:38:14.917641 kernel: io scheduler bfq registered
Jan 13 20:38:14.917649 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:38:14.917659 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 20:38:14.917672 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 20:38:14.917682 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 20:38:14.917690 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:38:14.917698 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:38:14.917707 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:38:14.917715 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:38:14.917725 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:38:14.917915 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:38:14.918036 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 20:38:14.918047 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 13 20:38:14.918168 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:38:14 UTC (1736800694)
Jan 13 20:38:14.918300 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 13 20:38:14.918316 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:38:14.918333 kernel: efifb: probing for efifb
Jan 13 20:38:14.918345 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 13 20:38:14.918355 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 13 20:38:14.918363 kernel: efifb: scrolling: redraw
Jan 13 20:38:14.918371 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 13 20:38:14.918381 kernel: Console: switching to colour frame buffer device 160x50
Jan 13 20:38:14.918389 kernel: fb0: EFI VGA frame buffer device
Jan 13 20:38:14.918397 kernel: pstore: Using crash dump compression: deflate
Jan 13 20:38:14.918405 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 20:38:14.918413 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:38:14.918432 kernel: Segment Routing with IPv6
Jan 13 20:38:14.918448 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:38:14.918456 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:38:14.918471 kernel: Key type dns_resolver registered
Jan 13 20:38:14.918479 kernel: IPI shorthand broadcast: enabled
Jan 13 20:38:14.918500 kernel: sched_clock: Marking stable (615003412, 153958943)->(820915637, -51953282)
Jan 13 20:38:14.918514 kernel: registered taskstats version 1
Jan 13 20:38:14.918532 kernel: Loading compiled-in X.509 certificates
Jan 13 20:38:14.918543 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:38:14.918558 kernel: Key type .fscrypt registered
Jan 13 20:38:14.918566 kernel: Key type fscrypt-provisioning registered
Jan 13 20:38:14.918574 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:38:14.918582 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:38:14.918590 kernel: ima: No architecture policies found Jan 13 20:38:14.918598 kernel: clk: Disabling unused clocks Jan 13 20:38:14.918606 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 20:38:14.918614 kernel: Write protecting the kernel read-only data: 36864k Jan 13 20:38:14.918624 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 20:38:14.918632 kernel: Run /init as init process Jan 13 20:38:14.918639 kernel: with arguments: Jan 13 20:38:14.918647 kernel: /init Jan 13 20:38:14.918655 kernel: with environment: Jan 13 20:38:14.918663 kernel: HOME=/ Jan 13 20:38:14.918670 kernel: TERM=linux Jan 13 20:38:14.918678 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:38:14.918688 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:38:14.918701 systemd[1]: Detected virtualization kvm. Jan 13 20:38:14.918709 systemd[1]: Detected architecture x86-64. Jan 13 20:38:14.918719 systemd[1]: Running in initrd. Jan 13 20:38:14.918730 systemd[1]: No hostname configured, using default hostname. Jan 13 20:38:14.918740 systemd[1]: Hostname set to . Jan 13 20:38:14.918765 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:38:14.918776 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:38:14.918787 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:38:14.918799 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:38:14.918809 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:38:14.918817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:38:14.918826 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:38:14.918835 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:38:14.918845 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:38:14.918855 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:38:14.918866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:38:14.918877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:38:14.918887 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:38:14.918896 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:38:14.918906 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:38:14.918915 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:38:14.918923 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:38:14.918932 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:38:14.918943 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:38:14.918951 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 13 20:38:14.918959 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:38:14.918968 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:38:14.918976 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:38:14.918984 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:38:14.918992 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:38:14.919001 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:38:14.919011 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:38:14.919020 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:38:14.919028 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:38:14.919036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:38:14.919044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:38:14.919053 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:38:14.919061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:38:14.919069 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:38:14.919101 systemd-journald[194]: Collecting audit messages is disabled. Jan 13 20:38:14.919125 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:38:14.919133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:38:14.919142 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:38:14.919150 systemd-journald[194]: Journal started Jan 13 20:38:14.919179 systemd-journald[194]: Runtime Journal (/run/log/journal/60b4cf3868924e3aa03d0d98be779015) is 6.0M, max 48.3M, 42.2M free. Jan 13 20:38:14.920917 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:38:14.922575 systemd-modules-load[195]: Inserted module 'overlay' Jan 13 20:38:14.924938 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:38:14.926118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:38:14.929927 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:38:14.941515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:38:14.945122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:38:14.955458 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:38:14.967778 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:38:14.970410 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 13 20:38:14.971362 kernel: Bridge firewalling registered Jan 13 20:38:14.972907 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:38:14.973541 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:38:14.976782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:38:14.990725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:38:14.993773 dracut-cmdline[222]: dracut-dracut-053 Jan 13 20:38:15.003558 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 20:38:15.001895 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:38:15.046330 systemd-resolved[236]: Positive Trust Anchors: Jan 13 20:38:15.046353 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:38:15.046384 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:38:15.049338 systemd-resolved[236]: Defaulting to hostname 'linux'. Jan 13 20:38:15.050490 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:38:15.057582 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:38:15.085786 kernel: SCSI subsystem initialized Jan 13 20:38:15.095796 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:38:15.105780 kernel: iscsi: registered transport (tcp) Jan 13 20:38:15.128040 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:38:15.128118 kernel: QLogic iSCSI HBA Driver Jan 13 20:38:15.178585 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:38:15.187975 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:38:15.212840 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:38:15.212876 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:38:15.213846 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:38:15.255780 kernel: raid6: avx2x4 gen() 30424 MB/s Jan 13 20:38:15.272776 kernel: raid6: avx2x2 gen() 30161 MB/s Jan 13 20:38:15.289894 kernel: raid6: avx2x1 gen() 25208 MB/s Jan 13 20:38:15.289917 kernel: raid6: using algorithm avx2x4 gen() 30424 MB/s Jan 13 20:38:15.307858 kernel: raid6: .... xor() 8254 MB/s, rmw enabled Jan 13 20:38:15.307890 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:38:15.327777 kernel: xor: automatically using best checksumming function avx Jan 13 20:38:15.489786 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:38:15.503082 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:38:15.511018 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:38:15.525190 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jan 13 20:38:15.530100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 13 20:38:15.545934 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:38:15.561075 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Jan 13 20:38:15.597159 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:38:15.613894 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:38:15.682907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:38:15.690807 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:38:15.712344 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:38:15.716597 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:38:15.719373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:38:15.722014 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:38:15.727788 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 20:38:15.757298 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:38:15.757328 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:38:15.757472 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:38:15.757484 kernel: GPT:9289727 != 19775487 Jan 13 20:38:15.757494 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:38:15.757505 kernel: GPT:9289727 != 19775487 Jan 13 20:38:15.757515 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:38:15.757525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:38:15.757535 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:38:15.757548 kernel: AES CTR mode by8 optimization enabled Jan 13 20:38:15.732096 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:38:15.740698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:38:15.741621 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:38:15.743165 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:38:15.744305 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:38:15.744443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:38:15.748333 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:38:15.774641 kernel: libata version 3.00 loaded. Jan 13 20:38:15.773240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:38:15.777526 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 13 20:38:15.780217 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (458) Jan 13 20:38:15.784772 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 20:38:15.807880 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 20:38:15.807905 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (461) Jan 13 20:38:15.807920 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 20:38:15.808151 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 20:38:15.808342 kernel: scsi host0: ahci Jan 13 20:38:15.808542 kernel: scsi host1: ahci Jan 13 20:38:15.808767 kernel: scsi host2: ahci Jan 13 20:38:15.808966 kernel: scsi host3: ahci Jan 13 20:38:15.809169 kernel: scsi host4: ahci Jan 13 20:38:15.809363 kernel: scsi host5: ahci Jan 13 20:38:15.809552 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 13 20:38:15.809568 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 13 20:38:15.809582 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 13 20:38:15.809596 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 13 20:38:15.809610 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 13 20:38:15.809624 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 13 20:38:15.799944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:38:15.813330 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:38:15.822596 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:38:15.827987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:38:15.832204 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:38:15.832463 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:38:15.842905 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:38:15.844828 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:38:15.853862 disk-uuid[555]: Primary Header is updated. Jan 13 20:38:15.853862 disk-uuid[555]: Secondary Entries is updated. Jan 13 20:38:15.853862 disk-uuid[555]: Secondary Header is updated. Jan 13 20:38:15.857074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:38:15.861785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:38:15.863587 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:38:16.121872 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 20:38:16.121948 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 20:38:16.121959 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 20:38:16.121970 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 20:38:16.122774 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 20:38:16.123784 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 20:38:16.124776 kernel: ata3.00: applying bridge limits Jan 13 20:38:16.124790 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 20:38:16.125780 kernel: ata3.00: configured for UDMA/100 Jan 13 20:38:16.126787 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:38:16.182801 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 20:38:16.196487 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:38:16.196502 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:38:16.863786 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:38:16.863853 disk-uuid[559]: The operation has completed successfully. Jan 13 20:38:16.889969 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:38:16.890118 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:38:16.917936 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:38:16.923104 sh[593]: Success Jan 13 20:38:16.936813 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 20:38:16.975737 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:38:16.991722 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:38:16.997053 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:38:17.010040 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 20:38:17.010124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:38:17.010137 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:38:17.011184 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:38:17.011986 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:38:17.017445 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:38:17.018761 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:38:17.028947 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:38:17.030350 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:38:17.041697 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:38:17.041742 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:38:17.041779 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:38:17.044787 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:38:17.054641 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:38:17.056421 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:38:17.068149 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 13 20:38:17.081059 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:38:17.138737 ignition[686]: Ignition 2.20.0 Jan 13 20:38:17.138771 ignition[686]: Stage: fetch-offline Jan 13 20:38:17.138821 ignition[686]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:38:17.138834 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:38:17.138964 ignition[686]: parsed url from cmdline: "" Jan 13 20:38:17.138969 ignition[686]: no config URL provided Jan 13 20:38:17.138976 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:38:17.138990 ignition[686]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:38:17.139027 ignition[686]: op(1): [started] loading QEMU firmware config module Jan 13 20:38:17.139035 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:38:17.150815 ignition[686]: op(1): [finished] loading QEMU firmware config module Jan 13 20:38:17.163200 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:38:17.174926 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:38:17.199308 ignition[686]: parsing config with SHA512: 0e730264c32f89f715a4eb880bf7a8a4c37c8a1fc810d4d6ae6ab7a43cc4a1573193961caea9db36d1dfc414b3372d51d0190932b44901498b895fc0216a63c6 Jan 13 20:38:17.200022 systemd-networkd[782]: lo: Link UP Jan 13 20:38:17.200032 systemd-networkd[782]: lo: Gained carrier Jan 13 20:38:17.201746 systemd-networkd[782]: Enumeration completed Jan 13 20:38:17.202216 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:38:17.202384 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:38:17.202388 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:38:17.203538 systemd-networkd[782]: eth0: Link UP Jan 13 20:38:17.203542 systemd-networkd[782]: eth0: Gained carrier Jan 13 20:38:17.203549 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:38:17.203574 systemd[1]: Reached target network.target - Network. Jan 13 20:38:17.216806 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:38:17.219789 unknown[686]: fetched base config from "system" Jan 13 20:38:17.219803 unknown[686]: fetched user config from "qemu" Jan 13 20:38:17.222544 ignition[686]: fetch-offline: fetch-offline passed Jan 13 20:38:17.222699 ignition[686]: Ignition finished successfully Jan 13 20:38:17.226704 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:38:17.227294 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:38:17.232956 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 13 20:38:17.246584 ignition[786]: Ignition 2.20.0 Jan 13 20:38:17.246597 ignition[786]: Stage: kargs Jan 13 20:38:17.246812 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:38:17.246823 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:38:17.250860 ignition[786]: kargs: kargs passed Jan 13 20:38:17.250920 ignition[786]: Ignition finished successfully Jan 13 20:38:17.254747 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:38:17.262955 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:38:17.275026 ignition[794]: Ignition 2.20.0 Jan 13 20:38:17.275039 ignition[794]: Stage: disks Jan 13 20:38:17.275344 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:38:17.275359 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:38:17.276469 ignition[794]: disks: disks passed Jan 13 20:38:17.276524 ignition[794]: Ignition finished successfully Jan 13 20:38:17.279962 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:38:17.281182 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:38:17.283032 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:38:17.286957 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:38:17.287448 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:38:17.289418 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:38:17.310955 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:38:17.323678 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:38:17.331582 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:38:17.336904 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:38:17.424715 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:38:17.427563 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 13 20:38:17.425799 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:38:17.438857 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:38:17.440773 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:38:17.442180 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:38:17.447958 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) Jan 13 20:38:17.447977 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:38:17.442217 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:38:17.454927 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:38:17.454947 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:38:17.454960 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:38:17.442237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:38:17.450669 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:38:17.455656 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 20:38:17.458194 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:38:17.492077 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:38:17.497082 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:38:17.500878 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:38:17.504687 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:38:17.591469 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:38:17.598954 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:38:17.600716 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:38:17.606798 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:38:17.627729 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:38:17.630299 ignition[927]: INFO : Ignition 2.20.0 Jan 13 20:38:17.630299 ignition[927]: INFO : Stage: mount Jan 13 20:38:17.630299 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:38:17.630299 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:38:17.636105 ignition[927]: INFO : mount: mount passed Jan 13 20:38:17.636105 ignition[927]: INFO : Ignition finished successfully Jan 13 20:38:17.633445 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:38:17.645839 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:38:18.008904 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:38:18.016974 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:38:18.024547 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941) Jan 13 20:38:18.024580 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 20:38:18.024592 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:38:18.025404 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:38:18.028777 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:38:18.029931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:38:18.051076 ignition[958]: INFO : Ignition 2.20.0 Jan 13 20:38:18.051076 ignition[958]: INFO : Stage: files Jan 13 20:38:18.053008 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:38:18.053008 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:38:18.053008 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:38:18.056801 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:38:18.056801 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:38:18.056801 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:38:18.056801 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:38:18.056801 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:38:18.056437 unknown[958]: wrote ssh authorized keys file for user: core Jan 13 20:38:18.065883 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:38:18.065883 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:38:18.065883 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:38:18.065883 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:38:18.096169 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:38:18.203210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:38:18.203210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:38:18.207383 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 20:38:18.642366 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 13 20:38:18.649924 systemd-networkd[782]: eth0: Gained IPv6LL Jan 13 20:38:18.746435 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:38:18.746435 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:38:18.750795 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 20:38:19.181948 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 13 20:38:19.583102 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:38:19.583102 ignition[958]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:38:19.626760 ignition[958]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 13 20:38:19.626760 
ignition[958]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:38:19.655525 ignition[958]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:38:19.660652 ignition[958]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:38:19.662291 ignition[958]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:38:19.662291 ignition[958]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:38:19.662291 ignition[958]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:38:19.662291 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:38:19.662291 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:38:19.662291 ignition[958]: INFO : files: files passed Jan 13 20:38:19.662291 ignition[958]: INFO : Ignition finished successfully Jan 13 20:38:19.664513 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:38:19.686109 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:38:19.689003 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:38:19.694888 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:38:19.695069 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:38:19.701054 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:38:19.704381 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:38:19.704381 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:38:19.709621 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:38:19.713265 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:38:19.713743 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:38:19.729095 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:38:19.764061 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:38:19.764206 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:38:19.766697 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:38:19.767181 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:38:19.767587 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:38:19.768580 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:38:19.802742 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:38:19.816128 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:38:19.829215 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Jan 13 20:38:19.831963 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:38:19.832317 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:38:19.835320 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:38:19.835471 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:38:19.837419 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:38:19.837806 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:38:19.842245 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:38:19.842547 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:38:19.843066 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:38:19.843448 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:38:19.844097 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:38:19.844427 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:38:19.844858 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:38:19.845330 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:38:19.846029 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:38:19.846206 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:38:19.861625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:38:19.862114 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:38:19.862411 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:38:19.862546 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:38:19.869585 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:38:19.869733 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:38:19.873028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:38:19.873157 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:38:19.873502 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:38:19.874104 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:38:19.880810 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:38:19.890334 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:38:19.890635 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:38:19.891151 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:38:19.891248 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:38:19.901479 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:38:19.901566 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:38:19.903297 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:38:19.903405 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:38:19.905477 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:38:19.905579 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:38:19.921937 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 13 20:38:19.922255 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:38:19.922371 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:38:19.926354 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:38:19.928441 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:38:19.928620 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:38:19.930482 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:38:19.930626 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:38:19.939911 ignition[1013]: INFO : Ignition 2.20.0 Jan 13 20:38:19.939911 ignition[1013]: INFO : Stage: umount Jan 13 20:38:19.939911 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:38:19.939911 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:38:19.939911 ignition[1013]: INFO : umount: umount passed Jan 13 20:38:19.939911 ignition[1013]: INFO : Ignition finished successfully Jan 13 20:38:19.941122 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:38:19.941268 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:38:19.943634 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:38:19.943805 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:38:19.947976 systemd[1]: Stopped target network.target - Network. Jan 13 20:38:19.949901 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:38:19.949967 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:38:19.959475 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:38:19.959546 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:38:19.961945 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:38:19.962003 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:38:19.964011 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:38:19.964070 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:38:19.966440 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:38:19.968438 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:38:19.971332 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:38:19.977827 systemd-networkd[782]: eth0: DHCPv6 lease lost Jan 13 20:38:19.983454 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:38:19.983654 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:38:19.985920 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:38:19.986073 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:38:19.990262 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:38:19.990353 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:38:20.009907 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:38:20.010378 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:38:20.010438 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:38:20.010771 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 13 20:38:20.010816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:38:20.011096 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:38:20.011137 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:38:20.023130 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:38:20.023188 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:38:20.023807 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:38:20.033441 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:38:20.033579 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:38:20.058938 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:38:20.059190 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:38:20.061488 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:38:20.061556 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:38:20.063481 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:38:20.063525 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:38:20.065495 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:38:20.065556 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:38:20.067887 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:38:20.067948 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:38:20.070081 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:38:20.070151 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:38:20.111071 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:38:20.113408 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:38:20.113505 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:38:20.115844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:38:20.115909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:38:20.118994 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:38:20.119140 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:38:20.443055 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:38:20.443192 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:38:20.454978 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:38:20.456282 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:38:20.456347 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:38:20.468910 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:38:20.512984 systemd[1]: Switching root. Jan 13 20:38:20.540610 systemd-journald[194]: Journal stopped Jan 13 20:38:22.253791 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Jan 13 20:38:22.253864 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:38:22.253878 kernel: SELinux: policy capability open_perms=1 Jan 13 20:38:22.253890 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:38:22.253901 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:38:22.253912 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:38:22.253925 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:38:22.253938 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:38:22.253957 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:38:22.253972 kernel: audit: type=1403 audit(1736800701.148:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:38:22.253996 systemd[1]: Successfully loaded SELinux policy in 61.800ms. Jan 13 20:38:22.254027 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.250ms. Jan 13 20:38:22.254041 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:38:22.254053 systemd[1]: Detected virtualization kvm. Jan 13 20:38:22.254065 systemd[1]: Detected architecture x86-64. Jan 13 20:38:22.254079 systemd[1]: Detected first boot. Jan 13 20:38:22.254095 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:38:22.254107 zram_generator::config[1079]: No configuration found. Jan 13 20:38:22.254120 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:38:22.254132 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:38:22.254144 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:38:22.254157 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:38:22.254169 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:38:22.254183 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:38:22.254195 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:38:22.254207 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:38:22.254220 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:38:22.254232 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:38:22.254244 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:38:22.254256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:38:22.254268 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:38:22.254280 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:38:22.254295 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:38:22.254308 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:38:22.254320 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:38:22.254331 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 13 20:38:22.254343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:38:22.254355 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:38:22.254367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:38:22.254379 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:38:22.254395 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:38:22.254409 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:38:22.254421 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:38:22.254433 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:38:22.254446 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:38:22.254458 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:38:22.254469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:38:22.254481 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:38:22.254493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:38:22.254507 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:38:22.254519 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:38:22.254530 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:38:22.254542 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:38:22.254556 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:22.254568 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:38:22.254580 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:38:22.254592 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:38:22.254604 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:38:22.254618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:38:22.254629 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:38:22.254642 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:38:22.254654 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:38:22.254666 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:38:22.254677 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:38:22.254689 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:38:22.254701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:38:22.254716 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:38:22.254729 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 20:38:22.254741 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 13 20:38:22.254768 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:38:22.254780 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:38:22.254813 systemd-journald[1152]: Collecting audit messages is disabled. Jan 13 20:38:22.254842 kernel: loop: module loaded Jan 13 20:38:22.254857 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:38:22.254870 systemd-journald[1152]: Journal started Jan 13 20:38:22.254892 systemd-journald[1152]: Runtime Journal (/run/log/journal/60b4cf3868924e3aa03d0d98be779015) is 6.0M, max 48.3M, 42.2M free. Jan 13 20:38:22.257970 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:38:22.300863 kernel: fuse: init (API version 7.39) Jan 13 20:38:22.308742 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:38:22.311767 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:22.315595 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:38:22.316654 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:38:22.318018 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:38:22.319820 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:38:22.322224 kernel: ACPI: bus type drm_connector registered Jan 13 20:38:22.322078 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:38:22.323549 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:38:22.325187 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:38:22.326702 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:38:22.368778 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:38:22.369066 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:38:22.370787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:38:22.371012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:38:22.372738 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:38:22.373018 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:38:22.374536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:38:22.374804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:38:22.376457 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:38:22.376659 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:38:22.378165 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:38:22.378442 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:38:22.430689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:38:22.432307 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:38:22.433924 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:38:22.445851 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 13 20:38:22.501873 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:38:22.504910 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:38:22.506853 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:38:22.510678 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:38:22.513412 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:38:22.558890 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:38:22.567398 systemd-journald[1152]: Time spent on flushing to /var/log/journal/60b4cf3868924e3aa03d0d98be779015 is 13.127ms for 1026 entries. Jan 13 20:38:22.567398 systemd-journald[1152]: System Journal (/var/log/journal/60b4cf3868924e3aa03d0d98be779015) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:38:22.814442 systemd-journald[1152]: Received client request to flush runtime journal. Jan 13 20:38:22.567936 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:38:22.570008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:38:22.573094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:38:22.615557 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:38:22.619259 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:38:22.621072 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:38:22.622584 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:38:22.628617 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:38:22.643703 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:38:22.686980 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:38:22.688837 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 13 20:38:22.688856 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 13 20:38:22.695358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:38:22.804506 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:38:22.805991 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:38:22.817443 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:38:22.819689 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:38:22.836057 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:38:22.866590 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:38:22.874923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:38:22.894056 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 13 20:38:22.894082 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. 
Jan 13 20:38:22.901271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:38:23.410795 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:38:23.430102 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:38:23.460190 systemd-udevd[1241]: Using default interface naming scheme 'v255'. Jan 13 20:38:23.481558 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:38:23.492974 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:38:23.510931 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:38:23.528007 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1245) Jan 13 20:38:23.551883 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 13 20:38:23.599821 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:38:23.606784 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 20:38:23.609034 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:38:23.619813 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:38:23.627834 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 13 20:38:23.633716 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 20:38:23.633976 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 20:38:23.634219 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 20:38:23.663797 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 20:38:23.666071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:38:23.690709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:38:23.691244 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:38:23.696974 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:38:23.728397 systemd-networkd[1244]: lo: Link UP Jan 13 20:38:23.728623 systemd-networkd[1244]: lo: Gained carrier Jan 13 20:38:23.730561 systemd-networkd[1244]: Enumeration completed Jan 13 20:38:23.731055 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:38:23.731060 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:38:23.732018 systemd-networkd[1244]: eth0: Link UP Jan 13 20:38:23.732024 systemd-networkd[1244]: eth0: Gained carrier Jan 13 20:38:23.732038 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:38:23.752306 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:38:23.754007 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:38:23.761166 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 13 20:38:23.761837 systemd-networkd[1244]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:38:23.775135 kernel: kvm_amd: TSC scaling supported Jan 13 20:38:23.775213 kernel: kvm_amd: Nested Virtualization enabled Jan 13 20:38:23.775233 kernel: kvm_amd: Nested Paging enabled Jan 13 20:38:23.775248 kernel: kvm_amd: LBR virtualization supported Jan 13 20:38:23.777797 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 20:38:23.777872 kernel: kvm_amd: Virtual GIF supported Jan 13 20:38:23.795870 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:38:23.821939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:38:23.830584 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:38:23.841259 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:38:23.850721 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:38:23.885738 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:38:23.887466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:38:23.901060 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:38:23.907812 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:38:23.943135 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:38:23.944887 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:38:23.946370 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:38:23.946395 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:38:23.947570 systemd[1]: Reached target machines.target - Containers. Jan 13 20:38:23.949777 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:38:23.962069 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:38:23.965624 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:38:23.967148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:38:23.968681 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:38:23.971673 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:38:23.977413 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:38:23.981015 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:38:23.992365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:38:24.016801 kernel: loop0: detected capacity change from 0 to 138184 Jan 13 20:38:24.028351 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:38:24.029742 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 13 20:38:24.043792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:38:24.078789 kernel: loop1: detected capacity change from 0 to 140992 Jan 13 20:38:24.112780 kernel: loop2: detected capacity change from 0 to 211296 Jan 13 20:38:24.146786 kernel: loop3: detected capacity change from 0 to 138184 Jan 13 20:38:24.158783 kernel: loop4: detected capacity change from 0 to 140992 Jan 13 20:38:24.168775 kernel: loop5: detected capacity change from 0 to 211296 Jan 13 20:38:24.175893 (sd-merge)[1316]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:38:24.176579 (sd-merge)[1316]: Merged extensions into '/usr'. Jan 13 20:38:24.180920 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:38:24.180953 systemd[1]: Reloading... Jan 13 20:38:24.247794 zram_generator::config[1347]: No configuration found. Jan 13 20:38:24.292450 ldconfig[1301]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:38:24.396331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:38:24.462911 systemd[1]: Reloading finished in 281 ms. Jan 13 20:38:24.483579 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:38:24.485654 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:38:24.500972 systemd[1]: Starting ensure-sysext.service... Jan 13 20:38:24.548854 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:38:24.553589 systemd[1]: Reloading requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:38:24.553605 systemd[1]: Reloading... Jan 13 20:38:24.576620 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:38:24.577067 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:38:24.578081 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:38:24.578375 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Jan 13 20:38:24.578446 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Jan 13 20:38:24.583620 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:38:24.583717 systemd-tmpfiles[1389]: Skipping /boot Jan 13 20:38:24.602214 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:38:24.602391 systemd-tmpfiles[1389]: Skipping /boot Jan 13 20:38:24.603592 zram_generator::config[1417]: No configuration found. Jan 13 20:38:24.864655 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:38:24.955550 systemd[1]: Reloading finished in 401 ms. Jan 13 20:38:24.999140 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:38:25.052200 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:38:25.077183 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 13 20:38:25.091172 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:38:25.105894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:38:25.122326 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:38:25.131347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:25.131583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:38:25.136131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:38:25.164181 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:38:25.186480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:38:25.198523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:38:25.198744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:25.201832 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:38:25.207869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:38:25.208168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:38:25.210618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:38:25.214366 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:38:25.217070 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:38:25.217327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:38:25.238808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:25.239774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:38:25.494126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:38:25.500446 augenrules[1500]: No rules Jan 13 20:38:25.506343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:38:25.519605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:38:25.529084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:38:25.546297 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:38:25.553501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:25.555722 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:38:25.560038 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:38:25.564950 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:38:25.564957 systemd-resolved[1468]: Positive Trust Anchors: Jan 13 20:38:25.564970 systemd-resolved[1468]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:38:25.565014 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:38:25.587008 systemd-resolved[1468]: Defaulting to hostname 'linux'. Jan 13 20:38:25.593520 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:38:25.599763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:38:25.600108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:38:25.602300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:38:25.602576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:38:25.604925 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:38:25.605403 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:38:25.608555 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:38:25.626574 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:38:25.643616 systemd[1]: Reached target network.target - Network. Jan 13 20:38:25.653033 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:38:25.654641 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:38:25.654889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:38:25.655067 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:38:25.660434 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:25.678185 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:38:25.680311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:38:25.683164 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:38:25.692195 systemd-networkd[1244]: eth0: Gained IPv6LL Jan 13 20:38:25.701248 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:38:25.765274 augenrules[1523]: /sbin/augenrules: No change Jan 13 20:38:25.780151 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:38:25.794876 augenrules[1547]: No rules Jan 13 20:38:25.829370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:38:25.835111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 20:38:25.837084 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:38:25.837206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:25.839984 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:38:25.843337 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:38:25.843892 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:38:25.858556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:38:25.859267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:38:25.864824 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:38:25.865959 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:38:25.869114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:38:25.869365 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:38:25.872371 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:38:25.872667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:38:25.891981 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:38:25.895437 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:38:25.901120 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:38:25.901959 systemd[1]: Finished ensure-sysext.service. Jan 13 20:38:25.925039 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:38:25.990700 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:38:26.955509 systemd-resolved[1468]: Clock change detected. Flushing caches. Jan 13 20:38:26.955518 systemd-timesyncd[1564]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:38:26.955561 systemd-timesyncd[1564]: Initial clock synchronization to Mon 2025-01-13 20:38:26.955431 UTC. Jan 13 20:38:26.956813 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:38:26.958102 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:38:26.959421 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:38:26.960746 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:38:26.962218 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:38:26.962255 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:38:26.963374 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:38:26.964913 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:38:26.966378 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:38:26.967691 systemd[1]: Reached target timers.target - Timer Units. 
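The "Clock change detected. Flushing caches." entry above is systemd-resolved reacting to systemd-timesyncd stepping the wall clock after its first exchange with 10.0.0.1. The size of the step is not logged directly; a rough estimate, derived only from the adjacent journal timestamps (an assumption, not timesyncd output):

```python
from datetime import datetime

# Last journal timestamp before the step vs. the "Initial clock synchronization" time.
before = datetime.strptime("20:38:25.990700", "%H:%M:%S.%f")
after = datetime.strptime("20:38:26.955431", "%H:%M:%S.%f")
print((after - before).total_seconds())  # ~0.96 s forward step
```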
Jan 13 20:38:26.969423 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:38:26.972930 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:38:26.975844 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:38:26.979588 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:38:26.980883 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:38:26.982108 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:38:26.983530 systemd[1]: System is tainted: cgroupsv1 Jan 13 20:38:26.983609 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:38:26.983643 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:38:26.986154 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:38:26.989629 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:38:26.992940 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:38:26.997921 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:38:27.004179 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:38:27.013730 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:38:27.017260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:38:27.021037 dbus-daemon[1570]: [system] SELinux support is enabled Jan 13 20:38:27.022858 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:38:27.023716 jq[1572]: false Jan 13 20:38:27.028036 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:38:27.033685 extend-filesystems[1574]: Found loop3 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found loop4 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found loop5 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found sr0 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found vda Jan 13 20:38:27.035161 extend-filesystems[1574]: Found vda1 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found vda2 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found vda3 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found usr Jan 13 20:38:27.035161 extend-filesystems[1574]: Found vda4 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found vda6 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found vda7 Jan 13 20:38:27.035161 extend-filesystems[1574]: Found vda9 Jan 13 20:38:27.035161 extend-filesystems[1574]: Checking size of /dev/vda9 Jan 13 20:38:27.036704 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:38:27.040509 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:38:27.054653 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:38:27.064015 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:38:27.067590 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 13 20:38:27.069876 extend-filesystems[1574]: Resized partition /dev/vda9 Jan 13 20:38:27.075048 extend-filesystems[1602]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:38:27.081926 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:38:27.077045 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:38:27.083084 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:38:27.088377 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:38:27.096666 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1252) Jan 13 20:38:27.104246 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:38:27.104610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:38:27.111959 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:38:27.111249 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:38:27.112092 jq[1608]: true Jan 13 20:38:27.111607 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:38:27.173222 update_engine[1603]: I20250113 20:38:27.127934 1603 main.cc:92] Flatcar Update Engine starting Jan 13 20:38:27.173222 update_engine[1603]: I20250113 20:38:27.129304 1603 update_check_scheduler.cc:74] Next update check in 11m0s Jan 13 20:38:27.190069 extend-filesystems[1602]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:38:27.190069 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:38:27.190069 extend-filesystems[1602]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:38:27.113631 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:38:27.196487 extend-filesystems[1574]: Resized filesystem in /dev/vda9 Jan 13 20:38:27.139331 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:38:27.139687 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:38:27.174225 (ntainerd)[1618]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:38:27.197436 jq[1617]: true Jan 13 20:38:27.194291 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:38:27.194666 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:38:27.201121 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:38:27.202257 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:38:27.214232 systemd-logind[1594]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:38:27.214266 systemd-logind[1594]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:38:27.215970 systemd-logind[1594]: New seat seat0. Jan 13 20:38:27.217533 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:38:27.246813 tar[1614]: linux-amd64/helm Jan 13 20:38:27.261209 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:38:27.276587 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
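The extend-filesystems entries above record an online ext4 resize of /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each. The arithmetic behind those numbers, for reference:

```python
BLOCK = 4096  # ext4 block size reported by resize2fs (4 KiB)
old_blocks, new_blocks = 553_472, 1_864_699

print(f"before: {old_blocks * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new_blocks * BLOCK / 2**30:.2f} GiB")  # ~7.11 GiB
```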
Jan 13 20:38:27.277081 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:38:27.277254 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:38:27.279476 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:38:27.279626 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:38:27.282242 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:38:27.288056 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:38:27.441669 bash[1654]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:38:27.443695 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:38:27.446076 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:38:27.458540 locksmithd[1653]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:38:27.493489 sshd_keygen[1605]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:38:27.575772 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:38:27.587452 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:38:27.600010 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:38:27.600395 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:38:27.624158 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:38:27.643414 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:38:27.705303 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:38:27.715215 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:38:27.716706 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:38:27.724660 containerd[1618]: time="2025-01-13T20:38:27.724477447Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:38:27.749983 containerd[1618]: time="2025-01-13T20:38:27.749894323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:27.752192 containerd[1618]: time="2025-01-13T20:38:27.752148991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:27.752798 containerd[1618]: time="2025-01-13T20:38:27.752247055Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:38:27.752798 containerd[1618]: time="2025-01-13T20:38:27.752269096Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:38:27.752798 containerd[1618]: time="2025-01-13T20:38:27.752463841Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 13 20:38:27.752798 containerd[1618]: time="2025-01-13T20:38:27.752478839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:27.752798 containerd[1618]: time="2025-01-13T20:38:27.752556485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:27.752798 containerd[1618]: time="2025-01-13T20:38:27.752567596Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:27.753061 containerd[1618]: time="2025-01-13T20:38:27.752862138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:27.753061 containerd[1618]: time="2025-01-13T20:38:27.752887996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:27.753061 containerd[1618]: time="2025-01-13T20:38:27.752900740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:27.753061 containerd[1618]: time="2025-01-13T20:38:27.752910509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:27.753061 containerd[1618]: time="2025-01-13T20:38:27.753011768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:27.753343 containerd[1618]: time="2025-01-13T20:38:27.753311901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:27.753569 containerd[1618]: time="2025-01-13T20:38:27.753537093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:27.753569 containerd[1618]: time="2025-01-13T20:38:27.753559135Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:38:27.753742 containerd[1618]: time="2025-01-13T20:38:27.753698035Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:38:27.753854 containerd[1618]: time="2025-01-13T20:38:27.753829482Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:38:27.895928 containerd[1618]: time="2025-01-13T20:38:27.895770902Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:38:27.895928 containerd[1618]: time="2025-01-13T20:38:27.895902489Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:38:27.895928 containerd[1618]: time="2025-01-13T20:38:27.895930601Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:38:27.896108 containerd[1618]: time="2025-01-13T20:38:27.895955618Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 13 20:38:27.896108 containerd[1618]: time="2025-01-13T20:38:27.895980395Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:38:27.897276 containerd[1618]: time="2025-01-13T20:38:27.896497434Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:38:27.897276 containerd[1618]: time="2025-01-13T20:38:27.897035513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:38:27.897276 containerd[1618]: time="2025-01-13T20:38:27.897200472Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:38:27.897276 containerd[1618]: time="2025-01-13T20:38:27.897221782Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:38:27.897276 containerd[1618]: time="2025-01-13T20:38:27.897241649Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:38:27.897276 containerd[1618]: time="2025-01-13T20:38:27.897264913Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:38:27.897404 containerd[1618]: time="2025-01-13T20:38:27.897282276Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:38:27.897404 containerd[1618]: time="2025-01-13T20:38:27.897303766Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:38:27.897404 containerd[1618]: time="2025-01-13T20:38:27.897323393Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:38:27.897404 containerd[1618]: time="2025-01-13T20:38:27.897343971Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:38:27.897404 containerd[1618]: time="2025-01-13T20:38:27.897364329Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:38:27.897404 containerd[1618]: time="2025-01-13T20:38:27.897379488Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:38:27.897404 containerd[1618]: time="2025-01-13T20:38:27.897394706Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:38:27.897528 containerd[1618]: time="2025-01-13T20:38:27.897424883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897528 containerd[1618]: time="2025-01-13T20:38:27.897443468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897528 containerd[1618]: time="2025-01-13T20:38:27.897460510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897528 containerd[1618]: time="2025-01-13T20:38:27.897476961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897528 containerd[1618]: time="2025-01-13T20:38:27.897501877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 13 20:38:27.897528 containerd[1618]: time="2025-01-13T20:38:27.897519691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897645 containerd[1618]: time="2025-01-13T20:38:27.897535490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897645 containerd[1618]: time="2025-01-13T20:38:27.897552282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897645 containerd[1618]: time="2025-01-13T20:38:27.897570406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897645 containerd[1618]: time="2025-01-13T20:38:27.897590173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897645 containerd[1618]: time="2025-01-13T20:38:27.897617634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897645 containerd[1618]: time="2025-01-13T20:38:27.897636339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897771 containerd[1618]: time="2025-01-13T20:38:27.897652660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897771 containerd[1618]: time="2025-01-13T20:38:27.897674080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:38:27.897771 containerd[1618]: time="2025-01-13T20:38:27.897699969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897771 containerd[1618]: time="2025-01-13T20:38:27.897717401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897771 containerd[1618]: time="2025-01-13T20:38:27.897732800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:38:27.897856 containerd[1618]: time="2025-01-13T20:38:27.897812660Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:38:27.897856 containerd[1618]: time="2025-01-13T20:38:27.897840001Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:38:27.897911 containerd[1618]: time="2025-01-13T20:38:27.897855220Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:38:27.897911 containerd[1618]: time="2025-01-13T20:38:27.897881649Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:38:27.897911 containerd[1618]: time="2025-01-13T20:38:27.897895776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.897966 containerd[1618]: time="2025-01-13T20:38:27.897912788Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:38:27.897966 containerd[1618]: time="2025-01-13T20:38:27.897935350Z" level=info msg="NRI interface is disabled by configuration." 
Jan 13 20:38:27.897966 containerd[1618]: time="2025-01-13T20:38:27.897950138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:38:27.898907 containerd[1618]: time="2025-01-13T20:38:27.898650371Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:38:27.898907 containerd[1618]: time="2025-01-13T20:38:27.898732134Z" level=info msg="Connect containerd service" Jan 13 20:38:27.899940 containerd[1618]: time="2025-01-13T20:38:27.899779598Z" level=info msg="using legacy CRI server" Jan 13 20:38:27.899940 containerd[1618]: time="2025-01-13T20:38:27.899827107Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:38:27.900123 containerd[1618]: time="2025-01-13T20:38:27.900073199Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:38:27.901505 containerd[1618]: time="2025-01-13T20:38:27.901467202Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:38:27.902515 containerd[1618]: time="2025-01-13T20:38:27.901650346Z" level=info msg="Start subscribing containerd event" Jan 13 20:38:27.902515 containerd[1618]: time="2025-01-13T20:38:27.901726599Z" level=info msg="Start recovering state" Jan 13 20:38:27.902515 containerd[1618]: time="2025-01-13T20:38:27.901839460Z" level=info msg="Start event monitor" Jan 13 20:38:27.902515 containerd[1618]: time="2025-01-13T20:38:27.901854088Z" level=info msg="Start snapshots syncer" Jan 13 20:38:27.902515 containerd[1618]: time="2025-01-13T20:38:27.901866912Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:38:27.902515 containerd[1618]: time="2025-01-13T20:38:27.901888482Z" level=info msg="Start streaming server" Jan 13 20:38:27.902515 containerd[1618]: time="2025-01-13T20:38:27.902237296Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:38:27.902515 containerd[1618]: time="2025-01-13T20:38:27.902312798Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:38:27.904137 containerd[1618]: time="2025-01-13T20:38:27.903529800Z" level=info msg="containerd successfully booted in 0.180338s" Jan 13 20:38:27.902549 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:38:27.958414 tar[1614]: linux-amd64/LICENSE Jan 13 20:38:27.958565 tar[1614]: linux-amd64/README.md Jan 13 20:38:27.979806 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:38:28.350895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:38:28.367027 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:38:28.368393 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:38:28.371333 systemd[1]: Startup finished in 7.173s (kernel) + 6.314s (userspace) = 13.487s. Jan 13 20:38:29.221342 kubelet[1703]: E0113 20:38:29.221167 1703 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:38:29.226073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:38:29.226446 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:38:34.872308 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:38:34.883039 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:44368.service - OpenSSH per-connection server daemon (10.0.0.1:44368). Jan 13 20:38:34.932692 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 44368 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:34.935084 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:34.945015 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:38:34.954107 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:38:34.956455 systemd-logind[1594]: New session 1 of user core. Jan 13 20:38:34.967451 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 13 20:38:34.970522 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:38:34.979487 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:38:35.122153 systemd[1723]: Queued start job for default target default.target. Jan 13 20:38:35.122603 systemd[1723]: Created slice app.slice - User Application Slice. Jan 13 20:38:35.122629 systemd[1723]: Reached target paths.target - Paths. Jan 13 20:38:35.122644 systemd[1723]: Reached target timers.target - Timers. Jan 13 20:38:35.139821 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:38:35.146150 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:38:35.146207 systemd[1723]: Reached target sockets.target - Sockets. Jan 13 20:38:35.146220 systemd[1723]: Reached target basic.target - Basic System. Jan 13 20:38:35.146253 systemd[1723]: Reached target default.target - Main User Target. Jan 13 20:38:35.146283 systemd[1723]: Startup finished in 157ms. Jan 13 20:38:35.146924 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:38:35.148589 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:38:35.204011 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:44382.service - OpenSSH per-connection server daemon (10.0.0.1:44382). Jan 13 20:38:35.248891 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 44382 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:35.250561 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:35.255035 systemd-logind[1594]: New session 2 of user core. Jan 13 20:38:35.271050 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:38:35.326173 sshd[1738]: Connection closed by 10.0.0.1 port 44382 Jan 13 20:38:35.326598 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:35.337988 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:44388.service - OpenSSH per-connection server daemon (10.0.0.1:44388). Jan 13 20:38:35.338574 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:44382.service: Deactivated successfully. Jan 13 20:38:35.341073 systemd-logind[1594]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:38:35.341882 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:38:35.343394 systemd-logind[1594]: Removed session 2. Jan 13 20:38:35.378286 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 44388 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:35.380015 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:35.383927 systemd-logind[1594]: New session 3 of user core. Jan 13 20:38:35.393995 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:38:35.443133 sshd[1746]: Connection closed by 10.0.0.1 port 44388 Jan 13 20:38:35.443462 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:35.452972 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:44390.service - OpenSSH per-connection server daemon (10.0.0.1:44390). Jan 13 20:38:35.453686 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:44388.service: Deactivated successfully. Jan 13 20:38:35.455494 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:38:35.456131 systemd-logind[1594]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:38:35.457186 systemd-logind[1594]: Removed session 3. 
Jan 13 20:38:35.489771 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 44390 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:35.491448 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:35.495633 systemd-logind[1594]: New session 4 of user core. Jan 13 20:38:35.508087 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:38:35.564815 sshd[1754]: Connection closed by 10.0.0.1 port 44390 Jan 13 20:38:35.565252 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:35.570978 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:44394.service - OpenSSH per-connection server daemon (10.0.0.1:44394). Jan 13 20:38:35.571435 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:44390.service: Deactivated successfully. Jan 13 20:38:35.574560 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:38:35.575194 systemd-logind[1594]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:38:35.576529 systemd-logind[1594]: Removed session 4. Jan 13 20:38:35.610617 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 44394 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:35.612770 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:35.617865 systemd-logind[1594]: New session 5 of user core. Jan 13 20:38:35.639258 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:38:35.699345 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:38:35.699705 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:38:35.720823 sudo[1763]: pam_unix(sudo:session): session closed for user root Jan 13 20:38:35.722529 sshd[1762]: Connection closed by 10.0.0.1 port 44394 Jan 13 20:38:35.723032 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:35.735102 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:44408.service - OpenSSH per-connection server daemon (10.0.0.1:44408). Jan 13 20:38:35.735714 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:44394.service: Deactivated successfully. Jan 13 20:38:35.738607 systemd-logind[1594]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:38:35.739577 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:38:35.740745 systemd-logind[1594]: Removed session 5. Jan 13 20:38:35.776075 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 44408 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:35.777525 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:35.781872 systemd-logind[1594]: New session 6 of user core. Jan 13 20:38:35.792020 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 20:38:35.847978 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:38:35.848339 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:38:35.852504 sudo[1773]: pam_unix(sudo:session): session closed for user root Jan 13 20:38:35.860289 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:38:35.860666 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:38:35.884280 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:38:35.919579 augenrules[1795]: No rules Jan 13 20:38:35.921242 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:38:35.921590 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:38:35.923031 sudo[1772]: pam_unix(sudo:session): session closed for user root Jan 13 20:38:35.924543 sshd[1771]: Connection closed by 10.0.0.1 port 44408 Jan 13 20:38:35.924938 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:35.934066 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:44410.service - OpenSSH per-connection server daemon (10.0.0.1:44410). Jan 13 20:38:35.934656 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:44408.service: Deactivated successfully. Jan 13 20:38:35.937492 systemd-logind[1594]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:38:35.938281 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:38:35.940071 systemd-logind[1594]: Removed session 6. Jan 13 20:38:35.973306 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 44410 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:38:35.974979 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:35.980225 systemd-logind[1594]: New session 7 of user core. Jan 13 20:38:35.990288 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:38:36.043999 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:38:36.044331 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:38:36.499031 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:38:36.499320 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:38:37.022655 dockerd[1828]: time="2025-01-13T20:38:37.022587971Z" level=info msg="Starting up" Jan 13 20:38:38.807642 dockerd[1828]: time="2025-01-13T20:38:38.807548953Z" level=info msg="Loading containers: start." Jan 13 20:38:39.133772 kernel: Initializing XFRM netlink socket Jan 13 20:38:39.226042 systemd-networkd[1244]: docker0: Link UP Jan 13 20:38:39.322362 dockerd[1828]: time="2025-01-13T20:38:39.322298578Z" level=info msg="Loading containers: done." Jan 13 20:38:39.339175 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2870942569-merged.mount: Deactivated successfully. Jan 13 20:38:39.340195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:38:39.351898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:38:39.520129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
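The sudo entries above use the standard "user : key=value ; ..." audit layout; over sessions 5 through 7 the core user runs setenforce, clears the stock audit rules, restarts audit-rules, and finally executes /home/core/install.sh. A small sketch parsing one of those lines (the string is copied verbatim from this log):

```python
line = "core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh"

user, rest = line.split(" : ", 1)
fields = dict(part.split("=", 1) for part in rest.split(" ; "))
print(user)               # invoking user: core
print(fields["USER"])     # target user: root
print(fields["COMMAND"])  # command run with elevated privileges
```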
Jan 13 20:38:39.527138 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:38:39.688081 kubelet[2002]: E0113 20:38:39.688014 2002 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:38:39.696076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:38:39.696398 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:38:39.763837 dockerd[1828]: time="2025-01-13T20:38:39.763733799Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:38:39.763984 dockerd[1828]: time="2025-01-13T20:38:39.763900351Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:38:39.764070 dockerd[1828]: time="2025-01-13T20:38:39.764045143Z" level=info msg="Daemon has completed initialization" Jan 13 20:38:40.818192 dockerd[1828]: time="2025-01-13T20:38:40.818111307Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:38:40.818383 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:38:41.710518 containerd[1618]: time="2025-01-13T20:38:41.710422330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:38:44.025194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518530287.mount: Deactivated successfully. 
Jan 13 20:38:46.150590 containerd[1618]: time="2025-01-13T20:38:46.150516184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:46.177394 containerd[1618]: time="2025-01-13T20:38:46.177307308Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:38:46.241782 containerd[1618]: time="2025-01-13T20:38:46.239729229Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:46.270574 containerd[1618]: time="2025-01-13T20:38:46.270499543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:46.271884 containerd[1618]: time="2025-01-13T20:38:46.271826211Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 4.561324182s" Jan 13 20:38:46.271884 containerd[1618]: time="2025-01-13T20:38:46.271890992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:38:46.293450 containerd[1618]: time="2025-01-13T20:38:46.293408878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:38:48.280143 containerd[1618]: time="2025-01-13T20:38:48.280077461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:48.281953 containerd[1618]: time="2025-01-13T20:38:48.281889278Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 20:38:48.283312 containerd[1618]: time="2025-01-13T20:38:48.283265218Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:48.287115 containerd[1618]: time="2025-01-13T20:38:48.287053611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:48.288454 containerd[1618]: time="2025-01-13T20:38:48.288383725Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.994925855s" Jan 13 20:38:48.288454 containerd[1618]: time="2025-01-13T20:38:48.288446523Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
20:38:48.315816 containerd[1618]: time="2025-01-13T20:38:48.315749466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:38:49.761908 containerd[1618]: time="2025-01-13T20:38:49.761836749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:49.762799 containerd[1618]: time="2025-01-13T20:38:49.762696672Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 20:38:49.764074 containerd[1618]: time="2025-01-13T20:38:49.764031935Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:49.767444 containerd[1618]: time="2025-01-13T20:38:49.767410620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:49.768622 containerd[1618]: time="2025-01-13T20:38:49.768582708Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.452578635s" Jan 13 20:38:49.768622 containerd[1618]: time="2025-01-13T20:38:49.768618585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 20:38:49.783125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:38:49.793979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:38:49.799038 containerd[1618]: time="2025-01-13T20:38:49.798993398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:38:50.702704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:38:50.708566 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:38:50.830440 kubelet[2148]: E0113 20:38:50.830313 2148 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:38:50.835200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:38:50.835567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:38:52.278604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519146640.mount: Deactivated successfully. 
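For scale, the kube-apiserver pull logged above moved 35,139,254 bytes in 4.561324182 s. A quick throughput estimate from those two numbers:

```python
bytes_read = 35_139_254  # "bytes read" from the pull entry above
seconds = 4.561324182    # pull duration from the same entry

print(f"{bytes_read / seconds / 1e6:.2f} MB/s")     # ~7.70 MB/s
print(f"{bytes_read / seconds / 2**20:.2f} MiB/s")  # ~7.35 MiB/s
```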
Jan 13 20:38:52.633617 containerd[1618]: time="2025-01-13T20:38:52.633543409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:52.634397 containerd[1618]: time="2025-01-13T20:38:52.634347186Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:38:52.635748 containerd[1618]: time="2025-01-13T20:38:52.635700012Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:52.640776 containerd[1618]: time="2025-01-13T20:38:52.638405205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:52.640776 containerd[1618]: time="2025-01-13T20:38:52.639674094Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.840394148s" Jan 13 20:38:52.640776 containerd[1618]: time="2025-01-13T20:38:52.639727424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:38:52.667157 containerd[1618]: time="2025-01-13T20:38:52.667121868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:38:53.259695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144137768.mount: Deactivated successfully. 
Jan 13 20:38:54.210599 containerd[1618]: time="2025-01-13T20:38:54.210514459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:54.219315 containerd[1618]: time="2025-01-13T20:38:54.219267471Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:38:54.233337 containerd[1618]: time="2025-01-13T20:38:54.233295313Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:54.252347 containerd[1618]: time="2025-01-13T20:38:54.252273026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:54.253577 containerd[1618]: time="2025-01-13T20:38:54.253517860Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.586343193s" Jan 13 20:38:54.253645 containerd[1618]: time="2025-01-13T20:38:54.253575769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:38:54.277526 containerd[1618]: time="2025-01-13T20:38:54.277487904Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:38:55.300401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191463585.mount: Deactivated successfully. 
Jan 13 20:38:55.411668 containerd[1618]: time="2025-01-13T20:38:55.411590506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:55.424674 containerd[1618]: time="2025-01-13T20:38:55.424601171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:38:55.426255 containerd[1618]: time="2025-01-13T20:38:55.426157509Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:55.428484 containerd[1618]: time="2025-01-13T20:38:55.428445089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:55.429290 containerd[1618]: time="2025-01-13T20:38:55.429238386Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.151702972s" Jan 13 20:38:55.429290 containerd[1618]: time="2025-01-13T20:38:55.429272360Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:38:55.454263 containerd[1618]: time="2025-01-13T20:38:55.454201562Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:38:56.082435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698741028.mount: Deactivated successfully. Jan 13 20:38:58.742579 containerd[1618]: time="2025-01-13T20:38:58.742513626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:58.743363 containerd[1618]: time="2025-01-13T20:38:58.743299279Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 20:38:58.744725 containerd[1618]: time="2025-01-13T20:38:58.744686991Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:58.749695 containerd[1618]: time="2025-01-13T20:38:58.749641311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:38:58.751084 containerd[1618]: time="2025-01-13T20:38:58.751029954Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.296786393s" Jan 13 20:38:58.751084 containerd[1618]: time="2025-01-13T20:38:58.751072664Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 20:39:01.033180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
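The kubelet restart counter above advances roughly ten seconds after each failure, which is consistent with a fixed restart delay in the unit; the actual RestartSec value is not shown in this log, so treat the figure as an observation, not a setting. The spacing, computed from the timestamps above:

```python
from datetime import datetime

# Second failure vs. the third scheduled restart (timestamps taken from this log).
failed = datetime.strptime("20:38:50.835567", "%H:%M:%S.%f")
restarted = datetime.strptime("20:39:01.033180", "%H:%M:%S.%f")
print((restarted - failed).total_seconds())  # ~10.2 s
```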
Jan 13 20:39:01.041953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:01.196284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:01.201465 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:39:01.244508 kubelet[2370]: E0113 20:39:01.244422 2370 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:39:01.249403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:39:01.249710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:39:01.392974 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:01.403976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:01.455610 systemd[1]: Reloading requested from client PID 2387 ('systemctl') (unit session-7.scope)... Jan 13 20:39:01.455631 systemd[1]: Reloading... Jan 13 20:39:01.548786 zram_generator::config[2429]: No configuration found. Jan 13 20:39:02.222839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:39:02.298655 systemd[1]: Reloading finished in 842 ms. Jan 13 20:39:02.354889 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:39:02.355058 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:39:02.355595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:02.358510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:02.503298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:02.509349 (kubelet)[2487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:39:02.578437 kubelet[2487]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:39:02.578437 kubelet[2487]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:39:02.578437 kubelet[2487]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:39:02.578895 kubelet[2487]: I0113 20:39:02.578535 2487 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:39:03.015481 kubelet[2487]: I0113 20:39:03.015418 2487 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:39:03.015481 kubelet[2487]: I0113 20:39:03.015457 2487 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:39:03.015736 kubelet[2487]: I0113 20:39:03.015706 2487 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:39:03.033376 kubelet[2487]: E0113 20:39:03.033332 2487 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.034480 kubelet[2487]: I0113 20:39:03.034449 2487 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:39:03.052846 kubelet[2487]: I0113 20:39:03.052796 2487 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:39:03.055230 kubelet[2487]: I0113 20:39:03.055183 2487 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:39:03.055476 kubelet[2487]: I0113 20:39:03.055444 2487 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:39:03.055476 kubelet[2487]: I0113 20:39:03.055478 2487 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:39:03.055653 kubelet[2487]: I0113 20:39:03.055490 2487 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:39:03.055680 kubelet[2487]: I0113 20:39:03.055654 2487 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:03.055831 kubelet[2487]: I0113 20:39:03.055798 2487 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:39:03.055831 kubelet[2487]: 
I0113 20:39:03.055824 2487 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:39:03.055905 kubelet[2487]: I0113 20:39:03.055872 2487 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:39:03.055905 kubelet[2487]: I0113 20:39:03.055895 2487 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:39:03.057633 kubelet[2487]: I0113 20:39:03.057602 2487 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:39:03.058136 kubelet[2487]: W0113 20:39:03.058065 2487 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.058136 kubelet[2487]: E0113 20:39:03.058137 2487 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.058994 kubelet[2487]: W0113 20:39:03.058934 2487 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.058994 kubelet[2487]: E0113 20:39:03.058989 2487 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.061645 kubelet[2487]: I0113 20:39:03.061596 2487 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:39:03.062946 kubelet[2487]: W0113 20:39:03.062918 2487 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:39:03.063683 kubelet[2487]: I0113 20:39:03.063666 2487 server.go:1256] "Started kubelet" Jan 13 20:39:03.064979 kubelet[2487]: I0113 20:39:03.063901 2487 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:39:03.064979 kubelet[2487]: I0113 20:39:03.064534 2487 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:39:03.064979 kubelet[2487]: I0113 20:39:03.064592 2487 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:39:03.065394 kubelet[2487]: I0113 20:39:03.065368 2487 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:39:03.065598 kubelet[2487]: I0113 20:39:03.065570 2487 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:39:03.066802 kubelet[2487]: I0113 20:39:03.066782 2487 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:39:03.067118 kubelet[2487]: I0113 20:39:03.066869 2487 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:39:03.067118 kubelet[2487]: I0113 20:39:03.066921 2487 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:39:03.067349 kubelet[2487]: W0113 20:39:03.067305 2487 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.067405 kubelet[2487]: E0113 20:39:03.067358 2487 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.070239 kubelet[2487]: E0113 20:39:03.070052 2487 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" Jan 13 20:39:03.072145 kubelet[2487]: E0113 20:39:03.072119 2487 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:39:03.072232 kubelet[2487]: E0113 20:39:03.072164 2487 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5b12e7f13f58 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:39:03.063633752 +0000 UTC m=+0.549701815,LastTimestamp:2025-01-13 20:39:03.063633752 +0000 UTC m=+0.549701815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:39:03.076437 kubelet[2487]: I0113 20:39:03.076374 2487 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:39:03.076437 kubelet[2487]: I0113 20:39:03.076414 2487 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:39:03.076582 kubelet[2487]: I0113 20:39:03.076525 2487 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:39:03.089402 kubelet[2487]: I0113 20:39:03.089266 2487 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:39:03.092249 kubelet[2487]: I0113 20:39:03.092229 2487 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:39:03.092375 kubelet[2487]: I0113 20:39:03.092363 2487 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:39:03.092450 kubelet[2487]: I0113 20:39:03.092439 2487 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:39:03.093231 kubelet[2487]: E0113 20:39:03.092553 2487 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:39:03.093231 kubelet[2487]: W0113 20:39:03.093149 2487 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.093231 kubelet[2487]: E0113 20:39:03.093205 2487 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.104581 kubelet[2487]: I0113 20:39:03.104538 2487 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:39:03.104581 kubelet[2487]: I0113 20:39:03.104562 2487 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:39:03.104581 kubelet[2487]: I0113 20:39:03.104579 2487 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:03.169358 kubelet[2487]: I0113 20:39:03.169317 2487 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:03.169874 kubelet[2487]: E0113 20:39:03.169852 2487 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 13 20:39:03.193227 kubelet[2487]: E0113 20:39:03.193150 2487 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:39:03.271441 kubelet[2487]: E0113 20:39:03.271326 2487 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" Jan 13 20:39:03.281354 kubelet[2487]: I0113 20:39:03.281284 2487 policy_none.go:49] "None policy: Start" Jan 13 20:39:03.282394 kubelet[2487]: I0113 20:39:03.282372 2487 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:39:03.282474 kubelet[2487]: I0113 20:39:03.282444 2487 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:39:03.293130 kubelet[2487]: I0113 20:39:03.293097 2487 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:39:03.293466 kubelet[2487]: I0113 20:39:03.293437 2487 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:39:03.295118 kubelet[2487]: E0113 20:39:03.295097 2487 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:39:03.372245 kubelet[2487]: I0113 20:39:03.372177 2487 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:03.372816 kubelet[2487]: E0113 20:39:03.372781 2487 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 13 20:39:03.394127 kubelet[2487]: I0113 20:39:03.394042 2487 topology_manager.go:215] "Topology Admit Handler" podUID="ef7ba300d1a7a4a2681d541ebe70c863" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:39:03.395536 kubelet[2487]: I0113 20:39:03.395515 2487 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:39:03.396373 kubelet[2487]: I0113 20:39:03.396338 2487 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:39:03.468812 kubelet[2487]: I0113 20:39:03.468735 2487 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef7ba300d1a7a4a2681d541ebe70c863-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef7ba300d1a7a4a2681d541ebe70c863\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:03.468812 kubelet[2487]: I0113 20:39:03.468814 2487 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef7ba300d1a7a4a2681d541ebe70c863-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef7ba300d1a7a4a2681d541ebe70c863\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:03.468967 kubelet[2487]: I0113 20:39:03.468917 2487 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ef7ba300d1a7a4a2681d541ebe70c863-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ef7ba300d1a7a4a2681d541ebe70c863\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:03.468967 kubelet[2487]: I0113 20:39:03.468954 2487 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:03.469058 kubelet[2487]: I0113 20:39:03.468980 2487 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:03.469058 kubelet[2487]: I0113 20:39:03.469004 2487 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:03.469058 kubelet[2487]: I0113 20:39:03.469044 2487 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:39:03.469229 kubelet[2487]: I0113 20:39:03.469154 2487 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:03.469271 kubelet[2487]: I0113 20:39:03.469238 2487 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:03.672349 kubelet[2487]: E0113 20:39:03.672278 2487 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Jan 13 20:39:03.701614 kubelet[2487]: E0113 20:39:03.701566 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:03.702359 containerd[1618]: time="2025-01-13T20:39:03.702317673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ef7ba300d1a7a4a2681d541ebe70c863,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:03.702776 kubelet[2487]: E0113 20:39:03.702496 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:03.703101 containerd[1618]: time="2025-01-13T20:39:03.703049256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:03.704224 kubelet[2487]: E0113 20:39:03.704189 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:03.704487 containerd[1618]: time="2025-01-13T20:39:03.704462415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:03.774194 kubelet[2487]: I0113 20:39:03.774155 2487 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:03.774527 kubelet[2487]: E0113 20:39:03.774513 2487 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 13 20:39:03.867337 kubelet[2487]: W0113 20:39:03.867253 2487 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.867337 kubelet[2487]: E0113 20:39:03.867329 2487 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.952243 kubelet[2487]: W0113 20:39:03.952049 2487 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:03.952243 kubelet[2487]: E0113 20:39:03.952108 2487 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:04.419550 kubelet[2487]: W0113 20:39:04.419499 2487 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:04.419550 kubelet[2487]: E0113 20:39:04.419551 2487 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:04.473329 kubelet[2487]: E0113 20:39:04.473289 2487 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s" Jan 13 20:39:04.514656 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2573228169.mount: Deactivated successfully. Jan 13 20:39:04.522005 containerd[1618]: time="2025-01-13T20:39:04.521941610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:04.524216 containerd[1618]: time="2025-01-13T20:39:04.524139829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:39:04.527557 containerd[1618]: time="2025-01-13T20:39:04.527507385Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:04.528984 containerd[1618]: time="2025-01-13T20:39:04.528955196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:04.530664 containerd[1618]: time="2025-01-13T20:39:04.530606006Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:39:04.531999 containerd[1618]: time="2025-01-13T20:39:04.531931984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:04.533036 containerd[1618]: time="2025-01-13T20:39:04.532986923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:04.533845 containerd[1618]: time="2025-01-13T20:39:04.533806583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 829.285596ms" Jan 13 20:39:04.534193 containerd[1618]: time="2025-01-13T20:39:04.534132847Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:39:04.537787 containerd[1618]: time="2025-01-13T20:39:04.537719272Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 835.292029ms" Jan 13 20:39:04.541367 containerd[1618]: time="2025-01-13T20:39:04.541320957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 838.170617ms" Jan 13 20:39:04.572535 kubelet[2487]: W0113 20:39:04.572464 2487 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.0.0.71:6443: connect: connection refused Jan 13 20:39:04.572535 kubelet[2487]: E0113 20:39:04.572538 2487 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Jan 13 20:39:04.575979 kubelet[2487]: I0113 20:39:04.575906 2487 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:04.576372 kubelet[2487]: E0113 20:39:04.576319 2487 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 13 20:39:04.701308 containerd[1618]: time="2025-01-13T20:39:04.700801380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:04.701308 containerd[1618]: time="2025-01-13T20:39:04.700912783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:04.701308 containerd[1618]: time="2025-01-13T20:39:04.700949333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:04.701308 containerd[1618]: time="2025-01-13T20:39:04.701070044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:04.702021 containerd[1618]: time="2025-01-13T20:39:04.699866761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:04.702021 containerd[1618]: time="2025-01-13T20:39:04.701669201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:04.702021 containerd[1618]: time="2025-01-13T20:39:04.701683899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:04.702021 containerd[1618]: time="2025-01-13T20:39:04.701791826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:04.706036 containerd[1618]: time="2025-01-13T20:39:04.703542055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:04.706036 containerd[1618]: time="2025-01-13T20:39:04.705858651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:04.706036 containerd[1618]: time="2025-01-13T20:39:04.705874611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:04.706036 containerd[1618]: time="2025-01-13T20:39:04.705965696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:04.790710 containerd[1618]: time="2025-01-13T20:39:04.790666903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d611a1bd3b9bcbb7abe5c5ec7358ab26767b9ed9d8f51fb6a9c46f11fecc513c\"" Jan 13 20:39:04.791680 kubelet[2487]: E0113 20:39:04.791642 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:04.794057 containerd[1618]: time="2025-01-13T20:39:04.793835920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ef7ba300d1a7a4a2681d541ebe70c863,Namespace:kube-system,Attempt:0,} returns sandbox id \"063df52165ebb489f069c969a96ae069376acb1b82ccec40ecfc2bc041be76c7\"" Jan 13 20:39:04.795672 kubelet[2487]: E0113 20:39:04.795524 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:04.796963 containerd[1618]: time="2025-01-13T20:39:04.796929301Z" level=info msg="CreateContainer within sandbox \"d611a1bd3b9bcbb7abe5c5ec7358ab26767b9ed9d8f51fb6a9c46f11fecc513c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:39:04.797638 containerd[1618]: time="2025-01-13T20:39:04.797616416Z" level=info msg="CreateContainer within sandbox \"063df52165ebb489f069c969a96ae069376acb1b82ccec40ecfc2bc041be76c7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:39:04.797825 containerd[1618]: time="2025-01-13T20:39:04.797798625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"f19d1230f1f1279b3c9b22ba14a15329e7c2f7877832a709e5cbc917e67b7acb\"" Jan 13 20:39:04.798433 kubelet[2487]: E0113 20:39:04.798417 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:04.799882 containerd[1618]: time="2025-01-13T20:39:04.799854310Z" level=info msg="CreateContainer within sandbox \"f19d1230f1f1279b3c9b22ba14a15329e7c2f7877832a709e5cbc917e67b7acb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:39:04.829844 containerd[1618]: time="2025-01-13T20:39:04.829785285Z" level=info msg="CreateContainer within sandbox \"d611a1bd3b9bcbb7abe5c5ec7358ab26767b9ed9d8f51fb6a9c46f11fecc513c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ffcd500c9cb04f184c85305b20db505357c24db4a97c0f24055d09e2f1bf545c\"" Jan 13 20:39:04.830697 containerd[1618]: time="2025-01-13T20:39:04.830668726Z" level=info msg="StartContainer for \"ffcd500c9cb04f184c85305b20db505357c24db4a97c0f24055d09e2f1bf545c\"" Jan 13 20:39:04.836546 containerd[1618]: time="2025-01-13T20:39:04.836494067Z" level=info msg="CreateContainer within sandbox \"063df52165ebb489f069c969a96ae069376acb1b82ccec40ecfc2bc041be76c7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a4b6c84099dff7245ccbb413ae6cdcf872820b3196dcafccddcc81f832f6d311\"" Jan 13 20:39:04.837176 containerd[1618]: time="2025-01-13T20:39:04.837124133Z" level=info msg="StartContainer for 
\"a4b6c84099dff7245ccbb413ae6cdcf872820b3196dcafccddcc81f832f6d311\"" Jan 13 20:39:04.840089 containerd[1618]: time="2025-01-13T20:39:04.840053420Z" level=info msg="CreateContainer within sandbox \"f19d1230f1f1279b3c9b22ba14a15329e7c2f7877832a709e5cbc917e67b7acb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b287bc71d664e3c0156d30ffabccec0e66d8527f3ecd98332bbdc3189b44178f\"" Jan 13 20:39:04.840595 containerd[1618]: time="2025-01-13T20:39:04.840566192Z" level=info msg="StartContainer for \"b287bc71d664e3c0156d30ffabccec0e66d8527f3ecd98332bbdc3189b44178f\"" Jan 13 20:39:04.938095 containerd[1618]: time="2025-01-13T20:39:04.936536983Z" level=info msg="StartContainer for \"ffcd500c9cb04f184c85305b20db505357c24db4a97c0f24055d09e2f1bf545c\" returns successfully" Jan 13 20:39:04.984871 containerd[1618]: time="2025-01-13T20:39:04.981287536Z" level=info msg="StartContainer for \"a4b6c84099dff7245ccbb413ae6cdcf872820b3196dcafccddcc81f832f6d311\" returns successfully" Jan 13 20:39:04.997198 containerd[1618]: time="2025-01-13T20:39:04.996406376Z" level=info msg="StartContainer for \"b287bc71d664e3c0156d30ffabccec0e66d8527f3ecd98332bbdc3189b44178f\" returns successfully" Jan 13 20:39:05.105124 kubelet[2487]: E0113 20:39:05.105085 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:05.109176 kubelet[2487]: E0113 20:39:05.109147 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:05.112709 kubelet[2487]: E0113 20:39:05.112685 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:06.133045 kubelet[2487]: E0113 20:39:06.131965 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:06.133045 kubelet[2487]: E0113 20:39:06.132867 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:06.179978 kubelet[2487]: I0113 20:39:06.179956 2487 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:06.326977 kubelet[2487]: E0113 20:39:06.326944 2487 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:39:06.473527 kubelet[2487]: I0113 20:39:06.473391 2487 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:39:07.059914 kubelet[2487]: I0113 20:39:07.059836 2487 apiserver.go:52] "Watching apiserver" Jan 13 20:39:07.067775 kubelet[2487]: I0113 20:39:07.067712 2487 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:39:07.405939 kubelet[2487]: E0113 20:39:07.403601 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:08.133720 kubelet[2487]: E0113 20:39:08.133683 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:09.116866 systemd[1]: Reloading requested from client PID 2765 ('systemctl') (unit session-7.scope)... Jan 13 20:39:09.116883 systemd[1]: Reloading... Jan 13 20:39:09.180799 zram_generator::config[2804]: No configuration found. Jan 13 20:39:09.300663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:39:09.392040 systemd[1]: Reloading finished in 274 ms. Jan 13 20:39:09.432370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:09.455632 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:39:09.456202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:09.476054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:09.621347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:09.626657 (kubelet)[2859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:39:09.681415 kubelet[2859]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:39:09.681415 kubelet[2859]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:39:09.681415 kubelet[2859]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:39:09.681415 kubelet[2859]: I0113 20:39:09.681013 2859 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:39:09.685631 kubelet[2859]: I0113 20:39:09.685595 2859 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:39:09.685631 kubelet[2859]: I0113 20:39:09.685627 2859 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:39:09.685891 kubelet[2859]: I0113 20:39:09.685875 2859 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:39:09.687306 kubelet[2859]: I0113 20:39:09.687267 2859 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:39:09.689295 kubelet[2859]: I0113 20:39:09.689070 2859 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:39:09.698289 kubelet[2859]: I0113 20:39:09.698254 2859 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:39:09.698933 kubelet[2859]: I0113 20:39:09.698897 2859 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:39:09.699073 kubelet[2859]: I0113 20:39:09.699058 2859 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:39:09.699163 kubelet[2859]: I0113 20:39:09.699086 2859 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:39:09.699163 kubelet[2859]: I0113 20:39:09.699096 2859 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:39:09.699163 kubelet[2859]: I0113 20:39:09.699131 2859 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:09.699262 kubelet[2859]: I0113 20:39:09.699233 2859 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:39:09.699262 kubelet[2859]: I0113 20:39:09.699253 2859 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:39:09.699347 kubelet[2859]: I0113 20:39:09.699291 2859 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:39:09.699347 kubelet[2859]: I0113 20:39:09.699308 2859 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:39:09.703029 kubelet[2859]: I0113 20:39:09.701423 2859 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:39:09.703029 kubelet[2859]: I0113 20:39:09.701736 2859 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:39:09.703029 kubelet[2859]: I0113 20:39:09.702408 2859 server.go:1256] "Started kubelet" Jan 13 20:39:09.706613 kubelet[2859]: I0113 20:39:09.706595 2859 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:39:09.707813 kubelet[2859]: I0113 20:39:09.707637 2859 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:39:09.709036 kubelet[2859]: I0113 20:39:09.709022 2859 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:39:09.709301 kubelet[2859]: I0113 20:39:09.709283 2859 server.go:233] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:39:09.709407 kubelet[2859]: I0113 20:39:09.709379 2859 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:39:09.710256 sudo[2874]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:39:09.710799 sudo[2874]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:39:09.713671 kubelet[2859]: I0113 20:39:09.713640 2859 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:39:09.719472 kubelet[2859]: I0113 20:39:09.719445 2859 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:39:09.719669 kubelet[2859]: I0113 20:39:09.719650 2859 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:39:09.721096 kubelet[2859]: I0113 20:39:09.721080 2859 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:39:09.721340 kubelet[2859]: I0113 20:39:09.721323 2859 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:39:09.723655 kubelet[2859]: E0113 20:39:09.723636 2859 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:39:09.724781 kubelet[2859]: I0113 20:39:09.724384 2859 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:39:09.728084 kubelet[2859]: I0113 20:39:09.728057 2859 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:39:09.729526 kubelet[2859]: I0113 20:39:09.729497 2859 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:39:09.729526 kubelet[2859]: I0113 20:39:09.729527 2859 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:39:09.729665 kubelet[2859]: I0113 20:39:09.729545 2859 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:39:09.729665 kubelet[2859]: E0113 20:39:09.729593 2859 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:39:09.782246 kubelet[2859]: I0113 20:39:09.782205 2859 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:39:09.782246 kubelet[2859]: I0113 20:39:09.782234 2859 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:39:09.782246 kubelet[2859]: I0113 20:39:09.782255 2859 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:09.782478 kubelet[2859]: I0113 20:39:09.782440 2859 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:39:09.782478 kubelet[2859]: I0113 20:39:09.782473 2859 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:39:09.782546 kubelet[2859]: I0113 20:39:09.782482 2859 policy_none.go:49] "None policy: Start" Jan 13 20:39:09.783298 kubelet[2859]: I0113 20:39:09.783263 2859 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:39:09.783383 kubelet[2859]: I0113 20:39:09.783306 2859 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:39:09.783548 kubelet[2859]: I0113 20:39:09.783527 2859 state_mem.go:75] "Updated machine memory state" Jan 13 20:39:09.785384 kubelet[2859]: I0113 20:39:09.785356 2859 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:39:09.785655 kubelet[2859]: I0113 20:39:09.785633 2859 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:39:09.818516 kubelet[2859]: I0113 20:39:09.818312 2859 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:39:09.830722 kubelet[2859]: I0113 20:39:09.830687 2859 topology_manager.go:215] "Topology Admit Handler" podUID="ef7ba300d1a7a4a2681d541ebe70c863" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:39:09.830815 kubelet[2859]: I0113 20:39:09.830794 2859 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:39:09.830840 kubelet[2859]: I0113 20:39:09.830830 2859 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:39:09.939101 kubelet[2859]: E0113 20:39:09.939000 2859 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 13 20:39:09.944416 kubelet[2859]: I0113 20:39:09.944168 2859 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:39:09.944416 kubelet[2859]: I0113 20:39:09.944245 2859 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:39:10.021644 kubelet[2859]: I0113 20:39:10.021588 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:10.021644 kubelet[2859]: I0113 20:39:10.021641 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:10.021855 kubelet[2859]: I0113 20:39:10.021670 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:10.021855 kubelet[2859]: I0113 20:39:10.021697 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:39:10.021855 kubelet[2859]: I0113 20:39:10.021720 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef7ba300d1a7a4a2681d541ebe70c863-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef7ba300d1a7a4a2681d541ebe70c863\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:10.021855 kubelet[2859]: I0113 20:39:10.021740 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef7ba300d1a7a4a2681d541ebe70c863-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ef7ba300d1a7a4a2681d541ebe70c863\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:10.021855 kubelet[2859]: I0113 20:39:10.021786 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef7ba300d1a7a4a2681d541ebe70c863-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ef7ba300d1a7a4a2681d541ebe70c863\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:10.021959 kubelet[2859]: I0113 20:39:10.021811 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:10.021959 kubelet[2859]: I0113 20:39:10.021837 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:10.200092 sudo[2874]: pam_unix(sudo:session): session closed for user root Jan 13 20:39:10.238867 kubelet[2859]: E0113 20:39:10.238500 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:10.238867 kubelet[2859]: E0113 20:39:10.238646 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:10.240077 kubelet[2859]: E0113 20:39:10.240011 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:10.700493 kubelet[2859]: I0113 20:39:10.700446 2859 apiserver.go:52] "Watching apiserver" Jan 13 20:39:10.719612 kubelet[2859]: I0113 20:39:10.719588 2859 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:39:10.754335 kubelet[2859]: E0113 20:39:10.753905 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:10.824502 kubelet[2859]: E0113 20:39:10.824464 2859 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 13 20:39:10.825894 kubelet[2859]: E0113 20:39:10.825871 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:10.826168 kubelet[2859]: E0113 20:39:10.826136 2859 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:39:10.826605 kubelet[2859]: E0113 20:39:10.826585 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:10.839521 kubelet[2859]: I0113 20:39:10.839486 2859 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.8394299629999997 podStartE2EDuration="3.839429963s" podCreationTimestamp="2025-01-13 20:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:10.839364729 +0000 UTC m=+1.208464211" watchObservedRunningTime="2025-01-13 20:39:10.839429963 +0000 UTC m=+1.208529445" Jan 13 20:39:10.853970 kubelet[2859]: I0113 20:39:10.853918 2859 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.853872848 podStartE2EDuration="1.853872848s" podCreationTimestamp="2025-01-13 20:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:10.853849954 +0000 UTC m=+1.222949446" watchObservedRunningTime="2025-01-13 20:39:10.853872848 +0000 UTC m=+1.222972330" Jan 13 20:39:10.854152 kubelet[2859]: I0113 20:39:10.854034 2859 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8540111910000001 podStartE2EDuration="1.854011191s" podCreationTimestamp="2025-01-13 20:39:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:10.846945083 +0000 UTC m=+1.216044575" 
watchObservedRunningTime="2025-01-13 20:39:10.854011191 +0000 UTC m=+1.223110693" Jan 13 20:39:11.546027 sudo[1808]: pam_unix(sudo:session): session closed for user root Jan 13 20:39:11.547795 sshd[1807]: Connection closed by 10.0.0.1 port 44410 Jan 13 20:39:11.548632 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:11.552673 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:44410.service: Deactivated successfully. Jan 13 20:39:11.554984 systemd-logind[1594]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:39:11.555193 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:39:11.556045 systemd-logind[1594]: Removed session 7. Jan 13 20:39:11.754960 kubelet[2859]: E0113 20:39:11.754934 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:11.755493 kubelet[2859]: E0113 20:39:11.755043 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:11.885066 update_engine[1603]: I20250113 20:39:11.884928 1603 update_attempter.cc:509] Updating boot flags... Jan 13 20:39:12.755697 kubelet[2859]: E0113 20:39:12.755657 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:13.113952 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2941) Jan 13 20:39:13.138854 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2944) Jan 13 20:39:13.174055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2944) Jan 13 20:39:17.628029 kubelet[2859]: E0113 20:39:17.627992 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:17.763526 kubelet[2859]: E0113 20:39:17.763501 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:18.172450 kubelet[2859]: E0113 20:39:18.172094 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:18.764946 kubelet[2859]: E0113 20:39:18.764906 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:19.766648 kubelet[2859]: E0113 20:39:19.766604 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:19.767723 kubelet[2859]: E0113 20:39:19.767673 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:20.769057 kubelet[2859]: E0113 20:39:20.769018 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 13 20:39:24.930331 kubelet[2859]: I0113 20:39:24.930290 2859 topology_manager.go:215] "Topology Admit Handler" podUID="d735ded6-ff64-46a9-9a32-852d81af5361" podNamespace="kube-system" podName="cilium-operator-5cc964979-jhr6f" Jan 13 20:39:24.971258 kubelet[2859]: I0113 20:39:24.971223 2859 topology_manager.go:215] "Topology Admit Handler" podUID="6bf8a465-8e2b-475f-83fb-4aaae0395d1c" podNamespace="kube-system" podName="cilium-btvtd" Jan 13 20:39:24.977798 kubelet[2859]: I0113 20:39:24.974236 2859 topology_manager.go:215] "Topology Admit Handler" podUID="b4926214-0676-4d2d-a744-4af91c70d4ab" podNamespace="kube-system" podName="kube-proxy-pnq2s" Jan 13 20:39:25.019156 kubelet[2859]: I0113 20:39:25.019071 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b4926214-0676-4d2d-a744-4af91c70d4ab-kube-proxy\") pod \"kube-proxy-pnq2s\" (UID: \"b4926214-0676-4d2d-a744-4af91c70d4ab\") " pod="kube-system/kube-proxy-pnq2s" Jan 13 20:39:25.019311 kubelet[2859]: I0113 20:39:25.019174 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7kqf\" (UniqueName: \"kubernetes.io/projected/b4926214-0676-4d2d-a744-4af91c70d4ab-kube-api-access-f7kqf\") pod \"kube-proxy-pnq2s\" (UID: \"b4926214-0676-4d2d-a744-4af91c70d4ab\") " pod="kube-system/kube-proxy-pnq2s" Jan 13 20:39:25.019311 kubelet[2859]: I0113 20:39:25.019242 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d735ded6-ff64-46a9-9a32-852d81af5361-cilium-config-path\") pod \"cilium-operator-5cc964979-jhr6f\" (UID: \"d735ded6-ff64-46a9-9a32-852d81af5361\") " pod="kube-system/cilium-operator-5cc964979-jhr6f" Jan 13 20:39:25.019311 kubelet[2859]: I0113 20:39:25.019264 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-etc-cni-netd\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019394 kubelet[2859]: I0113 20:39:25.019317 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-hostproc\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019394 kubelet[2859]: I0113 20:39:25.019338 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-config-path\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019394 kubelet[2859]: I0113 20:39:25.019385 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-bpf-maps\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019464 kubelet[2859]: I0113 20:39:25.019407 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-lib-modules\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019464 kubelet[2859]: I0113 20:39:25.019461 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4926214-0676-4d2d-a744-4af91c70d4ab-xtables-lock\") pod \"kube-proxy-pnq2s\" (UID: \"b4926214-0676-4d2d-a744-4af91c70d4ab\") " pod="kube-system/kube-proxy-pnq2s" Jan 13 20:39:25.019508 kubelet[2859]: I0113 20:39:25.019486 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szr4h\" (UniqueName: \"kubernetes.io/projected/d735ded6-ff64-46a9-9a32-852d81af5361-kube-api-access-szr4h\") pod \"cilium-operator-5cc964979-jhr6f\" (UID: \"d735ded6-ff64-46a9-9a32-852d81af5361\") " pod="kube-system/cilium-operator-5cc964979-jhr6f" Jan 13 20:39:25.019535 kubelet[2859]: I0113 20:39:25.019515 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-cgroup\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019558 kubelet[2859]: I0113 20:39:25.019536 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-host-proc-sys-kernel\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019597 kubelet[2859]: I0113 20:39:25.019554 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cni-path\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019597 kubelet[2859]: I0113 20:39:25.019577 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-xtables-lock\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019597 kubelet[2859]: I0113 20:39:25.019595 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4926214-0676-4d2d-a744-4af91c70d4ab-lib-modules\") pod \"kube-proxy-pnq2s\" (UID: \"b4926214-0676-4d2d-a744-4af91c70d4ab\") " pod="kube-system/kube-proxy-pnq2s" Jan 13 20:39:25.019660 kubelet[2859]: I0113 20:39:25.019618 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-798ff\" (UniqueName: \"kubernetes.io/projected/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-kube-api-access-798ff\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019660 kubelet[2859]: I0113 20:39:25.019636 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-clustermesh-secrets\") pod \"cilium-btvtd\" (UID: 
\"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019660 kubelet[2859]: I0113 20:39:25.019653 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-host-proc-sys-net\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019738 kubelet[2859]: I0113 20:39:25.019671 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-hubble-tls\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.019738 kubelet[2859]: I0113 20:39:25.019695 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-run\") pod \"cilium-btvtd\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " pod="kube-system/cilium-btvtd" Jan 13 20:39:25.029732 kubelet[2859]: I0113 20:39:25.029699 2859 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:39:25.030102 containerd[1618]: time="2025-01-13T20:39:25.030066999Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:39:25.030543 kubelet[2859]: I0113 20:39:25.030275 2859 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:39:25.237963 kubelet[2859]: E0113 20:39:25.237013 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:25.238564 containerd[1618]: time="2025-01-13T20:39:25.238437121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-jhr6f,Uid:d735ded6-ff64-46a9-9a32-852d81af5361,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:25.272413 containerd[1618]: time="2025-01-13T20:39:25.272293345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:25.272413 containerd[1618]: time="2025-01-13T20:39:25.272355231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:25.272413 containerd[1618]: time="2025-01-13T20:39:25.272367695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:25.272580 containerd[1618]: time="2025-01-13T20:39:25.272472793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:25.281335 kubelet[2859]: E0113 20:39:25.281312 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:25.281815 kubelet[2859]: E0113 20:39:25.281437 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:25.282582 containerd[1618]: time="2025-01-13T20:39:25.282022215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btvtd,Uid:6bf8a465-8e2b-475f-83fb-4aaae0395d1c,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:25.282582 containerd[1618]: time="2025-01-13T20:39:25.282466913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pnq2s,Uid:b4926214-0676-4d2d-a744-4af91c70d4ab,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:25.317352 containerd[1618]: time="2025-01-13T20:39:25.317209116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:25.317517 containerd[1618]: time="2025-01-13T20:39:25.317445351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:25.317517 containerd[1618]: time="2025-01-13T20:39:25.317488223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:25.317694 containerd[1618]: time="2025-01-13T20:39:25.317660978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:25.321626 containerd[1618]: time="2025-01-13T20:39:25.321539279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:25.321626 containerd[1618]: time="2025-01-13T20:39:25.321587810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:25.321626 containerd[1618]: time="2025-01-13T20:39:25.321602097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:25.321823 containerd[1618]: time="2025-01-13T20:39:25.321684633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:25.338346 containerd[1618]: time="2025-01-13T20:39:25.338293493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-jhr6f,Uid:d735ded6-ff64-46a9-9a32-852d81af5361,Namespace:kube-system,Attempt:0,} returns sandbox id \"84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb\"" Jan 13 20:39:25.339258 kubelet[2859]: E0113 20:39:25.339123 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:25.340907 containerd[1618]: time="2025-01-13T20:39:25.340623986Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:39:25.368061 containerd[1618]: time="2025-01-13T20:39:25.367980397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pnq2s,Uid:b4926214-0676-4d2d-a744-4af91c70d4ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"821c2a92927c04d99472e6b518366062b9eea10e96b2a6d174a0ed59408bb35c\"" Jan 13 20:39:25.368793 kubelet[2859]: E0113 20:39:25.368603 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:25.371467 containerd[1618]: time="2025-01-13T20:39:25.371406755Z" level=info msg="CreateContainer within sandbox \"821c2a92927c04d99472e6b518366062b9eea10e96b2a6d174a0ed59408bb35c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:39:25.375529 containerd[1618]: time="2025-01-13T20:39:25.375499641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btvtd,Uid:6bf8a465-8e2b-475f-83fb-4aaae0395d1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\"" Jan 13 20:39:25.376702 kubelet[2859]: E0113 20:39:25.376682 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:25.393875 containerd[1618]: time="2025-01-13T20:39:25.393820729Z" level=info msg="CreateContainer within sandbox \"821c2a92927c04d99472e6b518366062b9eea10e96b2a6d174a0ed59408bb35c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ecd0eebc8570ec4c766838909ce93b11deb0d6190c2f7fdad4689a0726f72049\"" Jan 13 20:39:25.394601 containerd[1618]: time="2025-01-13T20:39:25.394530627Z" level=info msg="StartContainer for \"ecd0eebc8570ec4c766838909ce93b11deb0d6190c2f7fdad4689a0726f72049\"" Jan 13 20:39:25.465046 containerd[1618]: time="2025-01-13T20:39:25.465000831Z" level=info msg="StartContainer for \"ecd0eebc8570ec4c766838909ce93b11deb0d6190c2f7fdad4689a0726f72049\" returns successfully" Jan 13 20:39:25.780242 kubelet[2859]: E0113 20:39:25.779334 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:25.792015 kubelet[2859]: I0113 20:39:25.791736 2859 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pnq2s" podStartSLOduration=1.79169066 podStartE2EDuration="1.79169066s" podCreationTimestamp="2025-01-13 20:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:25.791334508 +0000 UTC m=+16.160434000" watchObservedRunningTime="2025-01-13 20:39:25.79169066 +0000 UTC m=+16.160790142" Jan 13 20:39:26.504783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430407350.mount: Deactivated successfully. Jan 13 20:39:26.832481 containerd[1618]: time="2025-01-13T20:39:26.832417916Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:26.833366 containerd[1618]: time="2025-01-13T20:39:26.833326538Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907185" Jan 13 20:39:26.834793 containerd[1618]: time="2025-01-13T20:39:26.834714023Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:26.836348 containerd[1618]: time="2025-01-13T20:39:26.836297116Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.495639797s" Jan 13 20:39:26.836391 containerd[1618]: time="2025-01-13T20:39:26.836348614Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:39:26.837401 containerd[1618]: time="2025-01-13T20:39:26.837369517Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:39:26.838648 containerd[1618]: time="2025-01-13T20:39:26.838523962Z" level=info msg="CreateContainer within sandbox \"84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:39:26.849959 containerd[1618]: time="2025-01-13T20:39:26.849910390Z" level=info msg="CreateContainer within sandbox \"84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\"" Jan 13 20:39:26.850552 containerd[1618]: time="2025-01-13T20:39:26.850509920Z" level=info msg="StartContainer for \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\"" Jan 13 20:39:26.914175 containerd[1618]: time="2025-01-13T20:39:26.914124772Z" level=info msg="StartContainer for \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\" returns successfully" Jan 13 20:39:27.786408 kubelet[2859]: E0113 20:39:27.786366 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:28.787369 kubelet[2859]: E0113 20:39:28.787317 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:31.357316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483813966.mount: Deactivated successfully. Jan 13 20:39:35.657037 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:35374.service - OpenSSH per-connection server daemon (10.0.0.1:35374). Jan 13 20:39:35.707531 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 35374 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:35.709436 sshd-session[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:35.713629 systemd-logind[1594]: New session 8 of user core. Jan 13 20:39:35.722216 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:39:35.904070 sshd[3305]: Connection closed by 10.0.0.1 port 35374 Jan 13 20:39:35.904435 sshd-session[3302]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:35.908017 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:35374.service: Deactivated successfully. Jan 13 20:39:35.910296 systemd-logind[1594]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:39:35.910403 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:39:35.911518 systemd-logind[1594]: Removed session 8. Jan 13 20:39:38.243573 containerd[1618]: time="2025-01-13T20:39:38.243505391Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:38.244629 containerd[1618]: time="2025-01-13T20:39:38.244537680Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166733527" Jan 13 20:39:38.245981 containerd[1618]: time="2025-01-13T20:39:38.245934135Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:38.247474 containerd[1618]: time="2025-01-13T20:39:38.247425418Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.41001858s" Jan 13 20:39:38.247474 containerd[1618]: time="2025-01-13T20:39:38.247469010Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:39:38.249521 containerd[1618]: time="2025-01-13T20:39:38.249491972Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:39:38.262866 containerd[1618]: time="2025-01-13T20:39:38.262798479Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6\"" Jan 13 20:39:38.263330 containerd[1618]: time="2025-01-13T20:39:38.263302217Z" level=info msg="StartContainer for 
\"1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6\"" Jan 13 20:39:38.498050 containerd[1618]: time="2025-01-13T20:39:38.497413068Z" level=info msg="StartContainer for \"1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6\" returns successfully" Jan 13 20:39:38.782686 containerd[1618]: time="2025-01-13T20:39:38.782615328Z" level=info msg="shim disconnected" id=1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6 namespace=k8s.io Jan 13 20:39:38.782933 containerd[1618]: time="2025-01-13T20:39:38.782687244Z" level=warning msg="cleaning up after shim disconnected" id=1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6 namespace=k8s.io Jan 13 20:39:38.782933 containerd[1618]: time="2025-01-13T20:39:38.782702763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:38.814411 kubelet[2859]: E0113 20:39:38.814379 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:38.817013 containerd[1618]: time="2025-01-13T20:39:38.816445397Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:39:38.833615 containerd[1618]: time="2025-01-13T20:39:38.833405533Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f73572b2545541f5934227fa54151b4465bc364f0069d128b8ace180673c8dbb\"" Jan 13 20:39:38.836050 containerd[1618]: time="2025-01-13T20:39:38.835695536Z" level=info msg="StartContainer for \"f73572b2545541f5934227fa54151b4465bc364f0069d128b8ace180673c8dbb\"" Jan 13 20:39:38.836148 kubelet[2859]: I0113 20:39:38.836120 2859 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-jhr6f" podStartSLOduration=13.339093644 podStartE2EDuration="14.836015157s" podCreationTimestamp="2025-01-13 20:39:24 +0000 UTC" firstStartedPulling="2025-01-13 20:39:25.339887328 +0000 UTC m=+15.708986810" lastFinishedPulling="2025-01-13 20:39:26.836808841 +0000 UTC m=+17.205908323" observedRunningTime="2025-01-13 20:39:27.80320304 +0000 UTC m=+18.172302522" watchObservedRunningTime="2025-01-13 20:39:38.836015157 +0000 UTC m=+29.205114649" Jan 13 20:39:38.891074 containerd[1618]: time="2025-01-13T20:39:38.891018378Z" level=info msg="StartContainer for \"f73572b2545541f5934227fa54151b4465bc364f0069d128b8ace180673c8dbb\" returns successfully" Jan 13 20:39:38.902558 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:39:38.902900 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:39:38.902980 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:39:38.910231 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:39:38.931526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:39:38.935290 containerd[1618]: time="2025-01-13T20:39:38.935224328Z" level=info msg="shim disconnected" id=f73572b2545541f5934227fa54151b4465bc364f0069d128b8ace180673c8dbb namespace=k8s.io Jan 13 20:39:38.935290 containerd[1618]: time="2025-01-13T20:39:38.935289280Z" level=warning msg="cleaning up after shim disconnected" id=f73572b2545541f5934227fa54151b4465bc364f0069d128b8ace180673c8dbb namespace=k8s.io Jan 13 20:39:38.935430 containerd[1618]: time="2025-01-13T20:39:38.935299158Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:39.259299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6-rootfs.mount: Deactivated successfully. Jan 13 20:39:39.818186 kubelet[2859]: E0113 20:39:39.818156 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:39.820257 containerd[1618]: time="2025-01-13T20:39:39.820185123Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:39:39.849713 containerd[1618]: time="2025-01-13T20:39:39.849537991Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070\"" Jan 13 20:39:39.850576 containerd[1618]: time="2025-01-13T20:39:39.850513614Z" level=info msg="StartContainer for \"6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070\"" Jan 13 20:39:39.923698 containerd[1618]: time="2025-01-13T20:39:39.923530961Z" level=info msg="StartContainer for \"6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070\" returns successfully" Jan 13 20:39:39.956906 containerd[1618]: time="2025-01-13T20:39:39.956791029Z" level=info msg="shim disconnected" id=6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070 namespace=k8s.io Jan 13 20:39:39.956906 containerd[1618]: time="2025-01-13T20:39:39.956870229Z" level=warning msg="cleaning up after shim disconnected" id=6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070 namespace=k8s.io Jan 13 20:39:39.956906 containerd[1618]: time="2025-01-13T20:39:39.956881971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:40.258950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070-rootfs.mount: Deactivated successfully. Jan 13 20:39:40.822415 kubelet[2859]: E0113 20:39:40.822379 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:40.824775 containerd[1618]: time="2025-01-13T20:39:40.824713170Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:39:40.916170 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:35384.service - OpenSSH per-connection server daemon (10.0.0.1:35384). 
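The recurring dns.go:153 "Nameserver limits exceeded" warnings all end with the same applied line of three servers (1.1.1.1 1.0.0.1 8.8.8.8), which matches the common resolver convention of honoring at most three nameserver entries. The following is a small, hypothetical sketch of that trimming behavior for illustration only; it is not the kubelet implementation, and the original resolv.conf contents (which extra servers were dropped) are not visible in this log.

```python
MAX_NAMESERVERS = 3  # conventional resolver limit; assumed here, not read from the log

def apply_nameserver_limit(nameservers):
    """Return the nameservers actually applied, plus a warning if any were dropped."""
    applied = nameservers[:MAX_NAMESERVERS]
    warning = None
    if len(nameservers) > MAX_NAMESERVERS:
        warning = (
            "Nameserver limits were exceeded, some nameservers have been omitted, "
            "the applied nameserver line is: " + " ".join(applied)
        )
    return applied, warning

# Hypothetical input: the fourth server is invented purely for illustration.
applied, warning = apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"])
print(applied)   # ['1.1.1.1', '1.0.0.1', '8.8.8.8'] -- matches the applied line in the log
print(warning)
```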
Jan 13 20:39:40.964985 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 35384 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:40.967163 sshd-session[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:40.974698 systemd-logind[1594]: New session 9 of user core. Jan 13 20:39:40.982040 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:39:41.079847 containerd[1618]: time="2025-01-13T20:39:41.079682048Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8\"" Jan 13 20:39:41.081401 containerd[1618]: time="2025-01-13T20:39:41.080646489Z" level=info msg="StartContainer for \"1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8\"" Jan 13 20:39:41.148715 sshd[3523]: Connection closed by 10.0.0.1 port 35384 Jan 13 20:39:41.149115 sshd-session[3520]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:41.153378 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:35384.service: Deactivated successfully. Jan 13 20:39:41.155415 systemd-logind[1594]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:39:41.155520 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:39:41.156497 systemd-logind[1594]: Removed session 9. Jan 13 20:39:41.202202 containerd[1618]: time="2025-01-13T20:39:41.202135259Z" level=info msg="StartContainer for \"1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8\" returns successfully" Jan 13 20:39:41.258857 systemd[1]: run-containerd-runc-k8s.io-1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8-runc.4Kz8FO.mount: Deactivated successfully. Jan 13 20:39:41.259041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8-rootfs.mount: Deactivated successfully. 
Jan 13 20:39:41.381852 containerd[1618]: time="2025-01-13T20:39:41.381673369Z" level=info msg="shim disconnected" id=1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8 namespace=k8s.io Jan 13 20:39:41.381852 containerd[1618]: time="2025-01-13T20:39:41.381731829Z" level=warning msg="cleaning up after shim disconnected" id=1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8 namespace=k8s.io Jan 13 20:39:41.381852 containerd[1618]: time="2025-01-13T20:39:41.381740505Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:39:41.826423 kubelet[2859]: E0113 20:39:41.826391 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:41.829131 containerd[1618]: time="2025-01-13T20:39:41.829086961Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:39:41.974305 containerd[1618]: time="2025-01-13T20:39:41.974247149Z" level=info msg="CreateContainer within sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876\"" Jan 13 20:39:41.974817 containerd[1618]: time="2025-01-13T20:39:41.974781052Z" level=info msg="StartContainer for \"09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876\"" Jan 13 20:39:42.037666 containerd[1618]: time="2025-01-13T20:39:42.037602720Z" level=info msg="StartContainer for \"09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876\" returns successfully" Jan 13 20:39:42.229691 kubelet[2859]: I0113 20:39:42.229577 2859 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:39:42.312948 kubelet[2859]: I0113 20:39:42.312871 2859 topology_manager.go:215] "Topology Admit Handler" podUID="52d785d5-e2b5-4791-9770-7aefc7851684" podNamespace="kube-system" podName="coredns-76f75df574-xf8df" Jan 13 20:39:42.322323 kubelet[2859]: I0113 20:39:42.322284 2859 topology_manager.go:215] "Topology Admit Handler" podUID="b8791e00-d243-4b34-8645-2cb199a6d891" podNamespace="kube-system" podName="coredns-76f75df574-96tpz" Jan 13 20:39:42.346178 kubelet[2859]: I0113 20:39:42.346147 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52d785d5-e2b5-4791-9770-7aefc7851684-config-volume\") pod \"coredns-76f75df574-xf8df\" (UID: \"52d785d5-e2b5-4791-9770-7aefc7851684\") " pod="kube-system/coredns-76f75df574-xf8df" Jan 13 20:39:42.346394 kubelet[2859]: I0113 20:39:42.346368 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf9t7\" (UniqueName: \"kubernetes.io/projected/52d785d5-e2b5-4791-9770-7aefc7851684-kube-api-access-lf9t7\") pod \"coredns-76f75df574-xf8df\" (UID: \"52d785d5-e2b5-4791-9770-7aefc7851684\") " pod="kube-system/coredns-76f75df574-xf8df" Jan 13 20:39:42.447685 kubelet[2859]: I0113 20:39:42.447535 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8791e00-d243-4b34-8645-2cb199a6d891-config-volume\") pod \"coredns-76f75df574-96tpz\" (UID: \"b8791e00-d243-4b34-8645-2cb199a6d891\") " 
pod="kube-system/coredns-76f75df574-96tpz" Jan 13 20:39:42.447685 kubelet[2859]: I0113 20:39:42.447591 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2fj\" (UniqueName: \"kubernetes.io/projected/b8791e00-d243-4b34-8645-2cb199a6d891-kube-api-access-7l2fj\") pod \"coredns-76f75df574-96tpz\" (UID: \"b8791e00-d243-4b34-8645-2cb199a6d891\") " pod="kube-system/coredns-76f75df574-96tpz" Jan 13 20:39:42.618239 kubelet[2859]: E0113 20:39:42.618102 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:42.624853 containerd[1618]: time="2025-01-13T20:39:42.624788036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xf8df,Uid:52d785d5-e2b5-4791-9770-7aefc7851684,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:42.628223 kubelet[2859]: E0113 20:39:42.628011 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:42.628374 containerd[1618]: time="2025-01-13T20:39:42.628342132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-96tpz,Uid:b8791e00-d243-4b34-8645-2cb199a6d891,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:42.830947 kubelet[2859]: E0113 20:39:42.830918 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:42.870548 kubelet[2859]: I0113 20:39:42.870255 2859 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-btvtd" podStartSLOduration=5.999462598 podStartE2EDuration="18.870186297s" podCreationTimestamp="2025-01-13 20:39:24 +0000 UTC" firstStartedPulling="2025-01-13 20:39:25.377080752 +0000 UTC m=+15.746180234" lastFinishedPulling="2025-01-13 20:39:38.247804451 +0000 UTC m=+28.616903933" observedRunningTime="2025-01-13 20:39:42.86971949 +0000 UTC m=+33.238818972" watchObservedRunningTime="2025-01-13 20:39:42.870186297 +0000 UTC m=+33.239285789" Jan 13 20:39:43.833172 kubelet[2859]: E0113 20:39:43.833128 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:44.210933 systemd-networkd[1244]: cilium_host: Link UP Jan 13 20:39:44.211101 systemd-networkd[1244]: cilium_net: Link UP Jan 13 20:39:44.211279 systemd-networkd[1244]: cilium_net: Gained carrier Jan 13 20:39:44.211458 systemd-networkd[1244]: cilium_host: Gained carrier Jan 13 20:39:44.319248 systemd-networkd[1244]: cilium_vxlan: Link UP Jan 13 20:39:44.319261 systemd-networkd[1244]: cilium_vxlan: Gained carrier Jan 13 20:39:44.569814 kernel: NET: Registered PF_ALG protocol family Jan 13 20:39:44.797962 systemd-networkd[1244]: cilium_host: Gained IPv6LL Jan 13 20:39:44.834829 kubelet[2859]: E0113 20:39:44.834692 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:45.245923 systemd-networkd[1244]: cilium_net: Gained IPv6LL Jan 13 20:39:45.330846 systemd-networkd[1244]: lxc_health: Link UP Jan 13 20:39:45.333510 systemd-networkd[1244]: lxc_health: Gained carrier Jan 13 20:39:45.438016 systemd-networkd[1244]: 
cilium_vxlan: Gained IPv6LL Jan 13 20:39:45.461238 systemd-networkd[1244]: lxc412a90194c85: Link UP Jan 13 20:39:45.470202 systemd-networkd[1244]: lxc49c31fba2b44: Link UP Jan 13 20:39:45.477894 kernel: eth0: renamed from tmp39140 Jan 13 20:39:45.484626 systemd-networkd[1244]: lxc412a90194c85: Gained carrier Jan 13 20:39:45.487428 kernel: eth0: renamed from tmp850fe Jan 13 20:39:45.493931 systemd-networkd[1244]: lxc49c31fba2b44: Gained carrier Jan 13 20:39:46.161191 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:43392.service - OpenSSH per-connection server daemon (10.0.0.1:43392). Jan 13 20:39:46.209744 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 43392 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:46.212107 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:46.217922 systemd-logind[1594]: New session 10 of user core. Jan 13 20:39:46.224016 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:39:46.373487 sshd[4108]: Connection closed by 10.0.0.1 port 43392 Jan 13 20:39:46.373874 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:46.378360 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:43392.service: Deactivated successfully. Jan 13 20:39:46.381496 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:39:46.382879 systemd-logind[1594]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:39:46.383950 systemd-logind[1594]: Removed session 10. Jan 13 20:39:47.229920 systemd-networkd[1244]: lxc412a90194c85: Gained IPv6LL Jan 13 20:39:47.284095 kubelet[2859]: E0113 20:39:47.283915 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:47.293964 systemd-networkd[1244]: lxc_health: Gained IPv6LL Jan 13 20:39:47.485992 systemd-networkd[1244]: lxc49c31fba2b44: Gained IPv6LL Jan 13 20:39:49.306423 containerd[1618]: time="2025-01-13T20:39:49.306310431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:49.306423 containerd[1618]: time="2025-01-13T20:39:49.306375635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:49.306423 containerd[1618]: time="2025-01-13T20:39:49.306390072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:49.307115 containerd[1618]: time="2025-01-13T20:39:49.307064157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:49.318826 containerd[1618]: time="2025-01-13T20:39:49.318675914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:49.319127 containerd[1618]: time="2025-01-13T20:39:49.319012906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:49.319127 containerd[1618]: time="2025-01-13T20:39:49.319036420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:49.319717 containerd[1618]: time="2025-01-13T20:39:49.319255231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:49.335926 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:39:49.346948 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:39:49.365066 containerd[1618]: time="2025-01-13T20:39:49.365018385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-96tpz,Uid:b8791e00-d243-4b34-8645-2cb199a6d891,Namespace:kube-system,Attempt:0,} returns sandbox id \"39140476be9c6368190c40e8d897a74b1f0495427ab1bdb3566b8d87ad9e8f87\"" Jan 13 20:39:49.366842 kubelet[2859]: E0113 20:39:49.366303 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:49.369985 containerd[1618]: time="2025-01-13T20:39:49.369952187Z" level=info msg="CreateContainer within sandbox \"39140476be9c6368190c40e8d897a74b1f0495427ab1bdb3566b8d87ad9e8f87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:39:49.375438 containerd[1618]: time="2025-01-13T20:39:49.375383081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xf8df,Uid:52d785d5-e2b5-4791-9770-7aefc7851684,Namespace:kube-system,Attempt:0,} returns sandbox id \"850fe9f2965560f2da9df16fde2854c0d92e0b0f7f6b560fc3eb488bbb7bd3ed\"" Jan 13 20:39:49.376714 kubelet[2859]: E0113 20:39:49.376684 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:49.379024 containerd[1618]: time="2025-01-13T20:39:49.378990784Z" level=info msg="CreateContainer within sandbox \"850fe9f2965560f2da9df16fde2854c0d92e0b0f7f6b560fc3eb488bbb7bd3ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:39:49.397027 containerd[1618]: time="2025-01-13T20:39:49.396970485Z" level=info msg="CreateContainer within sandbox \"39140476be9c6368190c40e8d897a74b1f0495427ab1bdb3566b8d87ad9e8f87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a27f8d4dddf7004af6306292d568a34f5e0bf68e31ce63d563f2906a787e571\"" Jan 13 20:39:49.397771 containerd[1618]: time="2025-01-13T20:39:49.397538551Z" level=info msg="StartContainer for \"0a27f8d4dddf7004af6306292d568a34f5e0bf68e31ce63d563f2906a787e571\"" Jan 13 20:39:49.405877 containerd[1618]: time="2025-01-13T20:39:49.405832059Z" level=info msg="CreateContainer within sandbox \"850fe9f2965560f2da9df16fde2854c0d92e0b0f7f6b560fc3eb488bbb7bd3ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e9e8e798e87a204b9de889b0fb37d671dc35f779430b720aa6b4a373d4c4b57\"" Jan 13 20:39:49.406354 containerd[1618]: time="2025-01-13T20:39:49.406306990Z" level=info msg="StartContainer for \"1e9e8e798e87a204b9de889b0fb37d671dc35f779430b720aa6b4a373d4c4b57\"" Jan 13 20:39:49.471471 containerd[1618]: time="2025-01-13T20:39:49.471416801Z" level=info msg="StartContainer for \"0a27f8d4dddf7004af6306292d568a34f5e0bf68e31ce63d563f2906a787e571\" returns successfully" Jan 13 20:39:49.471471 containerd[1618]: time="2025-01-13T20:39:49.471417172Z" level=info msg="StartContainer for 
\"1e9e8e798e87a204b9de889b0fb37d671dc35f779430b720aa6b4a373d4c4b57\" returns successfully" Jan 13 20:39:49.845034 kubelet[2859]: E0113 20:39:49.844985 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:49.845522 kubelet[2859]: E0113 20:39:49.845503 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:50.036356 kubelet[2859]: I0113 20:39:50.036301 2859 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-96tpz" podStartSLOduration=26.036256796 podStartE2EDuration="26.036256796s" podCreationTimestamp="2025-01-13 20:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:50.033449756 +0000 UTC m=+40.402549238" watchObservedRunningTime="2025-01-13 20:39:50.036256796 +0000 UTC m=+40.405356278" Jan 13 20:39:50.172141 kubelet[2859]: I0113 20:39:50.171966 2859 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xf8df" podStartSLOduration=26.171910189 podStartE2EDuration="26.171910189s" podCreationTimestamp="2025-01-13 20:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:50.168973657 +0000 UTC m=+40.538073139" watchObservedRunningTime="2025-01-13 20:39:50.171910189 +0000 UTC m=+40.541009671" Jan 13 20:39:50.312823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2061680527.mount: Deactivated successfully. Jan 13 20:39:50.848064 kubelet[2859]: E0113 20:39:50.847747 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:50.848064 kubelet[2859]: E0113 20:39:50.847824 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:50.975000 kubelet[2859]: I0113 20:39:50.974940 2859 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:39:50.975989 kubelet[2859]: E0113 20:39:50.975940 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:51.388136 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:33892.service - OpenSSH per-connection server daemon (10.0.0.1:33892). Jan 13 20:39:51.432610 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 33892 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:51.434781 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:51.439208 systemd-logind[1594]: New session 11 of user core. Jan 13 20:39:51.444034 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:39:51.597048 sshd[4298]: Connection closed by 10.0.0.1 port 33892 Jan 13 20:39:51.597434 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:51.601902 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:33892.service: Deactivated successfully. 
Jan 13 20:39:51.604070 systemd-logind[1594]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:39:51.604130 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:39:51.605241 systemd-logind[1594]: Removed session 11. Jan 13 20:39:51.849580 kubelet[2859]: E0113 20:39:51.849535 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:51.850023 kubelet[2859]: E0113 20:39:51.849817 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:51.850023 kubelet[2859]: E0113 20:39:51.849880 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:39:56.609977 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:33898.service - OpenSSH per-connection server daemon (10.0.0.1:33898). Jan 13 20:39:56.646454 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 33898 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:56.648142 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:56.652240 systemd-logind[1594]: New session 12 of user core. Jan 13 20:39:56.663251 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:39:56.794544 sshd[4320]: Connection closed by 10.0.0.1 port 33898 Jan 13 20:39:56.794979 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:56.805118 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:33904.service - OpenSSH per-connection server daemon (10.0.0.1:33904). Jan 13 20:39:56.805771 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:33898.service: Deactivated successfully. Jan 13 20:39:56.809372 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:39:56.810338 systemd-logind[1594]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:39:56.811429 systemd-logind[1594]: Removed session 12. Jan 13 20:39:56.846835 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 33904 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:56.848486 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:56.853091 systemd-logind[1594]: New session 13 of user core. Jan 13 20:39:56.859086 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:39:57.032502 sshd[4336]: Connection closed by 10.0.0.1 port 33904 Jan 13 20:39:57.032961 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:57.041085 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:33920.service - OpenSSH per-connection server daemon (10.0.0.1:33920). Jan 13 20:39:57.042395 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:33904.service: Deactivated successfully. Jan 13 20:39:57.050281 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:39:57.052812 systemd-logind[1594]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:39:57.056484 systemd-logind[1594]: Removed session 13. 
Jan 13 20:39:57.090168 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 33920 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:39:57.092225 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:39:57.096854 systemd-logind[1594]: New session 14 of user core. Jan 13 20:39:57.113285 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:39:57.240337 sshd[4349]: Connection closed by 10.0.0.1 port 33920 Jan 13 20:39:57.240733 sshd-session[4343]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:57.245522 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:33920.service: Deactivated successfully. Jan 13 20:39:57.248021 systemd-logind[1594]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:39:57.248085 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:39:57.249367 systemd-logind[1594]: Removed session 14. Jan 13 20:40:02.253035 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:37516.service - OpenSSH per-connection server daemon (10.0.0.1:37516). Jan 13 20:40:02.295813 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 37516 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:02.297732 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:02.302451 systemd-logind[1594]: New session 15 of user core. Jan 13 20:40:02.315026 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:40:02.430664 sshd[4364]: Connection closed by 10.0.0.1 port 37516 Jan 13 20:40:02.431032 sshd-session[4361]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:02.435141 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:37516.service: Deactivated successfully. Jan 13 20:40:02.437393 systemd-logind[1594]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:40:02.437477 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:40:02.438588 systemd-logind[1594]: Removed session 15. Jan 13 20:40:07.447266 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:37522.service - OpenSSH per-connection server daemon (10.0.0.1:37522). Jan 13 20:40:07.488158 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 37522 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:07.490361 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:07.496062 systemd-logind[1594]: New session 16 of user core. Jan 13 20:40:07.503279 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:40:07.624370 sshd[4379]: Connection closed by 10.0.0.1 port 37522 Jan 13 20:40:07.624746 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:07.628935 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:37522.service: Deactivated successfully. Jan 13 20:40:07.631298 systemd-logind[1594]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:40:07.631381 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:40:07.632468 systemd-logind[1594]: Removed session 16. Jan 13 20:40:12.634273 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:48190.service - OpenSSH per-connection server daemon (10.0.0.1:48190). 
Jan 13 20:40:12.675524 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 48190 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:12.677550 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:12.682980 systemd-logind[1594]: New session 17 of user core. Jan 13 20:40:12.694291 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:40:12.812183 sshd[4396]: Connection closed by 10.0.0.1 port 48190 Jan 13 20:40:12.812567 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:12.820096 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:48206.service - OpenSSH per-connection server daemon (10.0.0.1:48206). Jan 13 20:40:12.820843 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:48190.service: Deactivated successfully. Jan 13 20:40:12.823648 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:40:12.825722 systemd-logind[1594]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:40:12.826871 systemd-logind[1594]: Removed session 17. Jan 13 20:40:12.859140 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 48206 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:12.860645 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:12.864492 systemd-logind[1594]: New session 18 of user core. Jan 13 20:40:12.872046 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:40:13.113942 sshd[4411]: Connection closed by 10.0.0.1 port 48206 Jan 13 20:40:13.114337 sshd-session[4406]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:13.125132 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:48218.service - OpenSSH per-connection server daemon (10.0.0.1:48218). Jan 13 20:40:13.125735 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:48206.service: Deactivated successfully. Jan 13 20:40:13.128763 systemd-logind[1594]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:40:13.129565 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:40:13.131136 systemd-logind[1594]: Removed session 18. Jan 13 20:40:13.167357 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 48218 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:13.169136 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:13.173805 systemd-logind[1594]: New session 19 of user core. Jan 13 20:40:13.183201 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:40:14.970866 sshd[4424]: Connection closed by 10.0.0.1 port 48218 Jan 13 20:40:14.973398 sshd-session[4418]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:14.983505 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:48228.service - OpenSSH per-connection server daemon (10.0.0.1:48228). Jan 13 20:40:14.986326 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:48218.service: Deactivated successfully. Jan 13 20:40:14.987491 systemd-logind[1594]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:40:14.992855 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:40:14.995906 systemd-logind[1594]: Removed session 19. 
Jan 13 20:40:15.039080 sshd[4454]: Accepted publickey for core from 10.0.0.1 port 48228 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:15.041526 sshd-session[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:15.046352 systemd-logind[1594]: New session 20 of user core. Jan 13 20:40:15.059209 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:40:15.287210 sshd[4460]: Connection closed by 10.0.0.1 port 48228 Jan 13 20:40:15.287649 sshd-session[4454]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:15.300109 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:48240.service - OpenSSH per-connection server daemon (10.0.0.1:48240). Jan 13 20:40:15.300908 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:48228.service: Deactivated successfully. Jan 13 20:40:15.303504 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:40:15.305519 systemd-logind[1594]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:40:15.307172 systemd-logind[1594]: Removed session 20. Jan 13 20:40:15.338259 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 48240 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:15.339788 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:15.343727 systemd-logind[1594]: New session 21 of user core. Jan 13 20:40:15.357023 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:40:15.481987 sshd[4473]: Connection closed by 10.0.0.1 port 48240 Jan 13 20:40:15.482415 sshd-session[4468]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:15.486845 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:48240.service: Deactivated successfully. Jan 13 20:40:15.489293 systemd-logind[1594]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:40:15.489439 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:40:15.490741 systemd-logind[1594]: Removed session 21. Jan 13 20:40:20.501081 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:48246.service - OpenSSH per-connection server daemon (10.0.0.1:48246). Jan 13 20:40:20.538866 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 48246 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:20.540461 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:20.545595 systemd-logind[1594]: New session 22 of user core. Jan 13 20:40:20.554133 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:40:20.672580 sshd[4488]: Connection closed by 10.0.0.1 port 48246 Jan 13 20:40:20.672971 sshd-session[4485]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:20.677429 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:48246.service: Deactivated successfully. Jan 13 20:40:20.680196 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:40:20.681010 systemd-logind[1594]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:40:20.682169 systemd-logind[1594]: Removed session 22. Jan 13 20:40:25.689977 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:54946.service - OpenSSH per-connection server daemon (10.0.0.1:54946). 
Jan 13 20:40:25.726818 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 54946 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:25.728345 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:25.732675 systemd-logind[1594]: New session 23 of user core. Jan 13 20:40:25.748046 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:40:25.868923 sshd[4508]: Connection closed by 10.0.0.1 port 54946 Jan 13 20:40:25.869332 sshd-session[4505]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:25.874404 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:54946.service: Deactivated successfully. Jan 13 20:40:25.877226 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:40:25.878049 systemd-logind[1594]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:40:25.879044 systemd-logind[1594]: Removed session 23. Jan 13 20:40:30.884983 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:54962.service - OpenSSH per-connection server daemon (10.0.0.1:54962). Jan 13 20:40:30.922654 sshd[4520]: Accepted publickey for core from 10.0.0.1 port 54962 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:30.924206 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:30.928071 systemd-logind[1594]: New session 24 of user core. Jan 13 20:40:30.938106 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:40:31.058275 sshd[4523]: Connection closed by 10.0.0.1 port 54962 Jan 13 20:40:31.058650 sshd-session[4520]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:31.062501 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:54962.service: Deactivated successfully. Jan 13 20:40:31.064987 systemd-logind[1594]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:40:31.065044 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:40:31.066212 systemd-logind[1594]: Removed session 24. Jan 13 20:40:32.730432 kubelet[2859]: E0113 20:40:32.730356 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:35.730919 kubelet[2859]: E0113 20:40:35.730870 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:36.068041 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:49274.service - OpenSSH per-connection server daemon (10.0.0.1:49274). Jan 13 20:40:36.106008 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 49274 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:36.107877 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:36.112225 systemd-logind[1594]: New session 25 of user core. Jan 13 20:40:36.122047 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:40:36.233501 sshd[4538]: Connection closed by 10.0.0.1 port 49274 Jan 13 20:40:36.233902 sshd-session[4535]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:36.243184 systemd[1]: Started sshd@25-10.0.0.71:22-10.0.0.1:49288.service - OpenSSH per-connection server daemon (10.0.0.1:49288). Jan 13 20:40:36.243964 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:49274.service: Deactivated successfully. 
Jan 13 20:40:36.246490 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:40:36.248428 systemd-logind[1594]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:40:36.249965 systemd-logind[1594]: Removed session 25. Jan 13 20:40:36.283989 sshd[4548]: Accepted publickey for core from 10.0.0.1 port 49288 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:36.285712 sshd-session[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:36.290278 systemd-logind[1594]: New session 26 of user core. Jan 13 20:40:36.298272 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:40:37.731428 kubelet[2859]: E0113 20:40:37.731367 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:40:37.803005 containerd[1618]: time="2025-01-13T20:40:37.802959184Z" level=info msg="StopContainer for \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\" with timeout 30 (s)" Jan 13 20:40:37.803547 containerd[1618]: time="2025-01-13T20:40:37.803529967Z" level=info msg="Stop container \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\" with signal terminated" Jan 13 20:40:37.853681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7-rootfs.mount: Deactivated successfully. Jan 13 20:40:37.861381 containerd[1618]: time="2025-01-13T20:40:37.861323376Z" level=info msg="StopContainer for \"09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876\" with timeout 2 (s)" Jan 13 20:40:37.861604 containerd[1618]: time="2025-01-13T20:40:37.861577839Z" level=info msg="Stop container \"09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876\" with signal terminated" Jan 13 20:40:37.862301 containerd[1618]: time="2025-01-13T20:40:37.862260575Z" level=info msg="shim disconnected" id=942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7 namespace=k8s.io Jan 13 20:40:37.862346 containerd[1618]: time="2025-01-13T20:40:37.862302244Z" level=warning msg="cleaning up after shim disconnected" id=942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7 namespace=k8s.io Jan 13 20:40:37.862346 containerd[1618]: time="2025-01-13T20:40:37.862310740Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:37.866702 containerd[1618]: time="2025-01-13T20:40:37.866650752Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:40:37.869236 systemd-networkd[1244]: lxc_health: Link DOWN Jan 13 20:40:37.869642 systemd-networkd[1244]: lxc_health: Lost carrier Jan 13 20:40:37.882387 containerd[1618]: time="2025-01-13T20:40:37.882352377Z" level=info msg="StopContainer for \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\" returns successfully" Jan 13 20:40:37.883073 containerd[1618]: time="2025-01-13T20:40:37.883041896Z" level=info msg="StopPodSandbox for \"84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb\"" Jan 13 20:40:37.889315 containerd[1618]: time="2025-01-13T20:40:37.883072092Z" level=info msg="Container to stop \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Jan 13 20:40:37.892016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb-shm.mount: Deactivated successfully. Jan 13 20:40:37.926268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876-rootfs.mount: Deactivated successfully. Jan 13 20:40:37.930024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb-rootfs.mount: Deactivated successfully. Jan 13 20:40:37.930919 containerd[1618]: time="2025-01-13T20:40:37.930865072Z" level=info msg="shim disconnected" id=09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876 namespace=k8s.io Jan 13 20:40:37.931250 containerd[1618]: time="2025-01-13T20:40:37.931091291Z" level=warning msg="cleaning up after shim disconnected" id=09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876 namespace=k8s.io Jan 13 20:40:37.931250 containerd[1618]: time="2025-01-13T20:40:37.931107553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:37.931353 containerd[1618]: time="2025-01-13T20:40:37.931299417Z" level=info msg="shim disconnected" id=84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb namespace=k8s.io Jan 13 20:40:37.931379 containerd[1618]: time="2025-01-13T20:40:37.931361103Z" level=warning msg="cleaning up after shim disconnected" id=84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb namespace=k8s.io Jan 13 20:40:37.931379 containerd[1618]: time="2025-01-13T20:40:37.931374199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:37.948224 containerd[1618]: time="2025-01-13T20:40:37.948168747Z" level=info msg="TearDown network for sandbox \"84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb\" successfully" Jan 13 20:40:37.948224 containerd[1618]: time="2025-01-13T20:40:37.948209393Z" level=info msg="StopPodSandbox for \"84528d6927d9a0ad5e9392cd1fd79fe6d26c5ab4b2c3d8134ddca864f43e8feb\" returns successfully" Jan 13 20:40:37.950544 containerd[1618]: time="2025-01-13T20:40:37.950506982Z" level=info msg="StopContainer for \"09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876\" returns successfully" Jan 13 20:40:37.950833 containerd[1618]: time="2025-01-13T20:40:37.950802282Z" level=info msg="StopPodSandbox for \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\"" Jan 13 20:40:37.951153 containerd[1618]: time="2025-01-13T20:40:37.950988566Z" level=info msg="Container to stop \"1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:40:37.951153 containerd[1618]: time="2025-01-13T20:40:37.951030925Z" level=info msg="Container to stop \"09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:40:37.951153 containerd[1618]: time="2025-01-13T20:40:37.951040514Z" level=info msg="Container to stop \"1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:40:37.951153 containerd[1618]: time="2025-01-13T20:40:37.951052096Z" level=info msg="Container to stop \"f73572b2545541f5934227fa54151b4465bc364f0069d128b8ace180673c8dbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 
20:40:37.951153 containerd[1618]: time="2025-01-13T20:40:37.951064389Z" level=info msg="Container to stop \"6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:40:37.955906 kubelet[2859]: I0113 20:40:37.955814 2859 scope.go:117] "RemoveContainer" containerID="942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7" Jan 13 20:40:37.964170 containerd[1618]: time="2025-01-13T20:40:37.964117009Z" level=info msg="RemoveContainer for \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\"" Jan 13 20:40:37.972594 containerd[1618]: time="2025-01-13T20:40:37.972447882Z" level=info msg="RemoveContainer for \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\" returns successfully" Jan 13 20:40:37.973023 kubelet[2859]: I0113 20:40:37.972999 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d735ded6-ff64-46a9-9a32-852d81af5361-cilium-config-path\") pod \"d735ded6-ff64-46a9-9a32-852d81af5361\" (UID: \"d735ded6-ff64-46a9-9a32-852d81af5361\") " Jan 13 20:40:37.973091 kubelet[2859]: I0113 20:40:37.973036 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szr4h\" (UniqueName: \"kubernetes.io/projected/d735ded6-ff64-46a9-9a32-852d81af5361-kube-api-access-szr4h\") pod \"d735ded6-ff64-46a9-9a32-852d81af5361\" (UID: \"d735ded6-ff64-46a9-9a32-852d81af5361\") " Jan 13 20:40:37.975027 kubelet[2859]: I0113 20:40:37.975006 2859 scope.go:117] "RemoveContainer" containerID="942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7" Jan 13 20:40:37.977821 containerd[1618]: time="2025-01-13T20:40:37.975901173Z" level=error msg="ContainerStatus for \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\": not found" Jan 13 20:40:37.977942 kubelet[2859]: E0113 20:40:37.976447 2859 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\": not found" containerID="942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7" Jan 13 20:40:37.977942 kubelet[2859]: I0113 20:40:37.976554 2859 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7"} err="failed to get container status \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\": rpc error: code = NotFound desc = an error occurred when try to find container \"942e897472eea9a3b26b1d014fccb51331ca3090b603e4f25d8ef354827d1ab7\": not found" Jan 13 20:40:37.977942 kubelet[2859]: I0113 20:40:37.977685 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d735ded6-ff64-46a9-9a32-852d81af5361-kube-api-access-szr4h" (OuterVolumeSpecName: "kube-api-access-szr4h") pod "d735ded6-ff64-46a9-9a32-852d81af5361" (UID: "d735ded6-ff64-46a9-9a32-852d81af5361"). InnerVolumeSpecName "kube-api-access-szr4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:40:37.979178 kubelet[2859]: I0113 20:40:37.979145 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d735ded6-ff64-46a9-9a32-852d81af5361-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d735ded6-ff64-46a9-9a32-852d81af5361" (UID: "d735ded6-ff64-46a9-9a32-852d81af5361"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:40:37.987433 containerd[1618]: time="2025-01-13T20:40:37.987056905Z" level=info msg="shim disconnected" id=e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f namespace=k8s.io Jan 13 20:40:37.987433 containerd[1618]: time="2025-01-13T20:40:37.987242046Z" level=warning msg="cleaning up after shim disconnected" id=e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f namespace=k8s.io Jan 13 20:40:37.987433 containerd[1618]: time="2025-01-13T20:40:37.987258146Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:38.003036 containerd[1618]: time="2025-01-13T20:40:38.002976793Z" level=info msg="TearDown network for sandbox \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" successfully" Jan 13 20:40:38.003188 containerd[1618]: time="2025-01-13T20:40:38.003018742Z" level=info msg="StopPodSandbox for \"e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f\" returns successfully" Jan 13 20:40:38.073529 kubelet[2859]: I0113 20:40:38.073486 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-hostproc\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.073529 kubelet[2859]: I0113 20:40:38.073525 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-host-proc-sys-kernel\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.073733 kubelet[2859]: I0113 20:40:38.073551 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-hubble-tls\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.073733 kubelet[2859]: I0113 20:40:38.073568 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-etc-cni-netd\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.073733 kubelet[2859]: I0113 20:40:38.073587 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-config-path\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.073733 kubelet[2859]: I0113 20:40:38.073602 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-xtables-lock\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: 
\"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.073733 kubelet[2859]: I0113 20:40:38.073602 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-hostproc" (OuterVolumeSpecName: "hostproc") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.073890 kubelet[2859]: I0113 20:40:38.073602 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.073890 kubelet[2859]: I0113 20:40:38.073620 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-host-proc-sys-net\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.073890 kubelet[2859]: I0113 20:40:38.073634 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.073890 kubelet[2859]: I0113 20:40:38.073652 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.073890 kubelet[2859]: I0113 20:40:38.073657 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cni-path\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.074007 kubelet[2859]: I0113 20:40:38.073682 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-798ff\" (UniqueName: \"kubernetes.io/projected/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-kube-api-access-798ff\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.074007 kubelet[2859]: I0113 20:40:38.073699 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-run\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.074007 kubelet[2859]: I0113 20:40:38.073714 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-bpf-maps\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.074007 kubelet[2859]: I0113 20:40:38.073729 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-cgroup\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.074007 kubelet[2859]: I0113 20:40:38.073745 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-lib-modules\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.074007 kubelet[2859]: I0113 20:40:38.073777 2859 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-clustermesh-secrets\") pod \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\" (UID: \"6bf8a465-8e2b-475f-83fb-4aaae0395d1c\") " Jan 13 20:40:38.074142 kubelet[2859]: I0113 20:40:38.073824 2859 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-szr4h\" (UniqueName: \"kubernetes.io/projected/d735ded6-ff64-46a9-9a32-852d81af5361-kube-api-access-szr4h\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.074142 kubelet[2859]: I0113 20:40:38.073840 2859 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.074142 kubelet[2859]: I0113 20:40:38.073855 2859 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.076539 kubelet[2859]: I0113 20:40:38.074845 2859 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.076539 kubelet[2859]: I0113 20:40:38.074869 2859 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d735ded6-ff64-46a9-9a32-852d81af5361-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.076823 kubelet[2859]: I0113 20:40:38.076785 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:40:38.077064 kubelet[2859]: I0113 20:40:38.076992 2859 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.077064 kubelet[2859]: I0113 20:40:38.073909 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.077064 kubelet[2859]: I0113 20:40:38.073937 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.077064 kubelet[2859]: I0113 20:40:38.077025 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.077064 kubelet[2859]: I0113 20:40:38.073969 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cni-path" (OuterVolumeSpecName: "cni-path") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.077197 kubelet[2859]: I0113 20:40:38.077009 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.077197 kubelet[2859]: I0113 20:40:38.077049 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:40:38.077410 kubelet[2859]: I0113 20:40:38.077367 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-kube-api-access-798ff" (OuterVolumeSpecName: "kube-api-access-798ff") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "kube-api-access-798ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:40:38.077655 kubelet[2859]: I0113 20:40:38.077634 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:40:38.077710 kubelet[2859]: I0113 20:40:38.077659 2859 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6bf8a465-8e2b-475f-83fb-4aaae0395d1c" (UID: "6bf8a465-8e2b-475f-83fb-4aaae0395d1c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:40:38.177177 kubelet[2859]: I0113 20:40:38.177116 2859 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.177177 kubelet[2859]: I0113 20:40:38.177154 2859 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.177177 kubelet[2859]: I0113 20:40:38.177167 2859 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.177177 kubelet[2859]: I0113 20:40:38.177178 2859 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-798ff\" (UniqueName: \"kubernetes.io/projected/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-kube-api-access-798ff\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.177177 kubelet[2859]: I0113 20:40:38.177191 2859 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.177177 kubelet[2859]: I0113 20:40:38.177199 2859 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.177494 kubelet[2859]: I0113 20:40:38.177209 2859 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.177494 kubelet[2859]: I0113 20:40:38.177218 2859 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.177494 kubelet[2859]: I0113 20:40:38.177228 2859 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.177494 kubelet[2859]: I0113 20:40:38.177237 2859 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6bf8a465-8e2b-475f-83fb-4aaae0395d1c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 20:40:38.832994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f-rootfs.mount: Deactivated successfully. Jan 13 20:40:38.833198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5a7837c84646b46c7a05ebb5b7a4b7721d8debf133d470d97c9766afafacc9f-shm.mount: Deactivated successfully. Jan 13 20:40:38.833351 systemd[1]: var-lib-kubelet-pods-d735ded6\x2dff64\x2d46a9\x2d9a32\x2d852d81af5361-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dszr4h.mount: Deactivated successfully. Jan 13 20:40:38.833495 systemd[1]: var-lib-kubelet-pods-6bf8a465\x2d8e2b\x2d475f\x2d83fb\x2d4aaae0395d1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d798ff.mount: Deactivated successfully. 
Jan 13 20:40:38.833649 systemd[1]: var-lib-kubelet-pods-6bf8a465\x2d8e2b\x2d475f\x2d83fb\x2d4aaae0395d1c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:40:38.833863 systemd[1]: var-lib-kubelet-pods-6bf8a465\x2d8e2b\x2d475f\x2d83fb\x2d4aaae0395d1c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:40:38.961530 kubelet[2859]: I0113 20:40:38.961492 2859 scope.go:117] "RemoveContainer" containerID="09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876" Jan 13 20:40:38.962660 containerd[1618]: time="2025-01-13T20:40:38.962620436Z" level=info msg="RemoveContainer for \"09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876\"" Jan 13 20:40:38.967527 containerd[1618]: time="2025-01-13T20:40:38.967481213Z" level=info msg="RemoveContainer for \"09c5454296932d96e95dbc615663431fcf51f52283c1e1fd884fd8a115852876\" returns successfully" Jan 13 20:40:38.967724 kubelet[2859]: I0113 20:40:38.967698 2859 scope.go:117] "RemoveContainer" containerID="1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8" Jan 13 20:40:38.968780 containerd[1618]: time="2025-01-13T20:40:38.968733999Z" level=info msg="RemoveContainer for \"1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8\"" Jan 13 20:40:38.972945 containerd[1618]: time="2025-01-13T20:40:38.972886775Z" level=info msg="RemoveContainer for \"1e3da900e4fb2080e33195723f5538f78f8be6dfb14bfb1789efc372e04e66d8\" returns successfully" Jan 13 20:40:38.973206 kubelet[2859]: I0113 20:40:38.973168 2859 scope.go:117] "RemoveContainer" containerID="6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070" Jan 13 20:40:38.974453 containerd[1618]: time="2025-01-13T20:40:38.974424681Z" level=info msg="RemoveContainer for \"6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070\"" Jan 13 20:40:38.979145 containerd[1618]: time="2025-01-13T20:40:38.979085540Z" level=info msg="RemoveContainer for \"6619cbd919b52abe9a9bc58130b54e916427e621cdb0efedcb88e654e53c4070\" returns successfully" Jan 13 20:40:38.979820 kubelet[2859]: I0113 20:40:38.979774 2859 scope.go:117] "RemoveContainer" containerID="f73572b2545541f5934227fa54151b4465bc364f0069d128b8ace180673c8dbb" Jan 13 20:40:38.980813 containerd[1618]: time="2025-01-13T20:40:38.980775705Z" level=info msg="RemoveContainer for \"f73572b2545541f5934227fa54151b4465bc364f0069d128b8ace180673c8dbb\"" Jan 13 20:40:38.984083 containerd[1618]: time="2025-01-13T20:40:38.984057239Z" level=info msg="RemoveContainer for \"f73572b2545541f5934227fa54151b4465bc364f0069d128b8ace180673c8dbb\" returns successfully" Jan 13 20:40:38.984281 kubelet[2859]: I0113 20:40:38.984253 2859 scope.go:117] "RemoveContainer" containerID="1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6" Jan 13 20:40:38.985407 containerd[1618]: time="2025-01-13T20:40:38.985370539Z" level=info msg="RemoveContainer for \"1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6\"" Jan 13 20:40:38.989361 containerd[1618]: time="2025-01-13T20:40:38.989325950Z" level=info msg="RemoveContainer for \"1ad005cb74c67cb2a691ddd938ab79ebd4965c48d387d1bb56a061de1fee37d6\" returns successfully" Jan 13 20:40:39.595393 sshd[4554]: Connection closed by 10.0.0.1 port 49288 Jan 13 20:40:39.595832 sshd-session[4548]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:39.605005 systemd[1]: Started sshd@26-10.0.0.71:22-10.0.0.1:49298.service - OpenSSH per-connection server daemon (10.0.0.1:49298). 
Jan 13 20:40:39.606115 systemd[1]: sshd@25-10.0.0.71:22-10.0.0.1:49288.service: Deactivated successfully. Jan 13 20:40:39.609351 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:40:39.610259 systemd-logind[1594]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:40:39.611276 systemd-logind[1594]: Removed session 26. Jan 13 20:40:39.645446 sshd[4716]: Accepted publickey for core from 10.0.0.1 port 49298 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:39.647216 sshd-session[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:39.651530 systemd-logind[1594]: New session 27 of user core. Jan 13 20:40:39.665053 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:40:39.732551 kubelet[2859]: I0113 20:40:39.732522 2859 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6bf8a465-8e2b-475f-83fb-4aaae0395d1c" path="/var/lib/kubelet/pods/6bf8a465-8e2b-475f-83fb-4aaae0395d1c/volumes" Jan 13 20:40:39.733692 kubelet[2859]: I0113 20:40:39.733678 2859 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d735ded6-ff64-46a9-9a32-852d81af5361" path="/var/lib/kubelet/pods/d735ded6-ff64-46a9-9a32-852d81af5361/volumes" Jan 13 20:40:39.808826 kubelet[2859]: E0113 20:40:39.808798 2859 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:40:40.413992 sshd[4722]: Connection closed by 10.0.0.1 port 49298 Jan 13 20:40:40.414874 sshd-session[4716]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:40.426642 systemd[1]: Started sshd@27-10.0.0.71:22-10.0.0.1:49314.service - OpenSSH per-connection server daemon (10.0.0.1:49314). Jan 13 20:40:40.428528 systemd[1]: sshd@26-10.0.0.71:22-10.0.0.1:49298.service: Deactivated successfully. 
Jan 13 20:40:40.433267 kubelet[2859]: I0113 20:40:40.433221 2859 topology_manager.go:215] "Topology Admit Handler" podUID="92ef3e60-e018-4a43-abba-0fa8eca2c318" podNamespace="kube-system" podName="cilium-2dftn" Jan 13 20:40:40.436563 kubelet[2859]: E0113 20:40:40.433296 2859 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bf8a465-8e2b-475f-83fb-4aaae0395d1c" containerName="cilium-agent" Jan 13 20:40:40.436563 kubelet[2859]: E0113 20:40:40.433309 2859 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bf8a465-8e2b-475f-83fb-4aaae0395d1c" containerName="apply-sysctl-overwrites" Jan 13 20:40:40.436563 kubelet[2859]: E0113 20:40:40.433317 2859 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bf8a465-8e2b-475f-83fb-4aaae0395d1c" containerName="mount-bpf-fs" Jan 13 20:40:40.436563 kubelet[2859]: E0113 20:40:40.433326 2859 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bf8a465-8e2b-475f-83fb-4aaae0395d1c" containerName="clean-cilium-state" Jan 13 20:40:40.436563 kubelet[2859]: E0113 20:40:40.433336 2859 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d735ded6-ff64-46a9-9a32-852d81af5361" containerName="cilium-operator" Jan 13 20:40:40.436563 kubelet[2859]: E0113 20:40:40.433345 2859 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bf8a465-8e2b-475f-83fb-4aaae0395d1c" containerName="mount-cgroup" Jan 13 20:40:40.436563 kubelet[2859]: I0113 20:40:40.433378 2859 memory_manager.go:354] "RemoveStaleState removing state" podUID="d735ded6-ff64-46a9-9a32-852d81af5361" containerName="cilium-operator" Jan 13 20:40:40.436563 kubelet[2859]: I0113 20:40:40.433388 2859 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf8a465-8e2b-475f-83fb-4aaae0395d1c" containerName="cilium-agent" Jan 13 20:40:40.435771 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:40:40.438800 systemd-logind[1594]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:40:40.443804 systemd-logind[1594]: Removed session 27. Jan 13 20:40:40.478743 sshd[4730]: Accepted publickey for core from 10.0.0.1 port 49314 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:40.480318 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:40.484165 systemd-logind[1594]: New session 28 of user core. 
Jan 13 20:40:40.491591 kubelet[2859]: I0113 20:40:40.491561 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-cni-path\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.491680 kubelet[2859]: I0113 20:40:40.491606 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-xtables-lock\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.491680 kubelet[2859]: I0113 20:40:40.491626 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-cilium-run\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.491680 kubelet[2859]: I0113 20:40:40.491642 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-cilium-cgroup\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.491822 kubelet[2859]: I0113 20:40:40.491708 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-lib-modules\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.491822 kubelet[2859]: I0113 20:40:40.491795 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92ef3e60-e018-4a43-abba-0fa8eca2c318-cilium-config-path\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.491908 kubelet[2859]: I0113 20:40:40.491836 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6mxq\" (UniqueName: \"kubernetes.io/projected/92ef3e60-e018-4a43-abba-0fa8eca2c318-kube-api-access-z6mxq\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.491908 kubelet[2859]: I0113 20:40:40.491900 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-bpf-maps\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.491970 kubelet[2859]: I0113 20:40:40.491927 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-hostproc\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.491970 kubelet[2859]: I0113 20:40:40.491953 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/92ef3e60-e018-4a43-abba-0fa8eca2c318-clustermesh-secrets\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.492028 kubelet[2859]: I0113 20:40:40.491978 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92ef3e60-e018-4a43-abba-0fa8eca2c318-cilium-ipsec-secrets\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.492028 kubelet[2859]: I0113 20:40:40.492022 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-host-proc-sys-net\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.492105 kubelet[2859]: I0113 20:40:40.492059 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-host-proc-sys-kernel\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.492130 kubelet[2859]: I0113 20:40:40.492113 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92ef3e60-e018-4a43-abba-0fa8eca2c318-hubble-tls\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.492152 kubelet[2859]: I0113 20:40:40.492146 2859 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92ef3e60-e018-4a43-abba-0fa8eca2c318-etc-cni-netd\") pod \"cilium-2dftn\" (UID: \"92ef3e60-e018-4a43-abba-0fa8eca2c318\") " pod="kube-system/cilium-2dftn" Jan 13 20:40:40.494192 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 20:40:40.545197 sshd[4736]: Connection closed by 10.0.0.1 port 49314 Jan 13 20:40:40.545510 sshd-session[4730]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:40.553009 systemd[1]: Started sshd@28-10.0.0.71:22-10.0.0.1:49320.service - OpenSSH per-connection server daemon (10.0.0.1:49320). Jan 13 20:40:40.553502 systemd[1]: sshd@27-10.0.0.71:22-10.0.0.1:49314.service: Deactivated successfully. Jan 13 20:40:40.556139 systemd-logind[1594]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:40:40.556820 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:40:40.558645 systemd-logind[1594]: Removed session 28. Jan 13 20:40:40.590815 sshd[4740]: Accepted publickey for core from 10.0.0.1 port 49320 ssh2: RSA SHA256:NVvuh3rgEGbzReoHSGwX+StGkhEgwwBzICssYigrFbs Jan 13 20:40:40.592059 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:40.611678 systemd-logind[1594]: New session 29 of user core. Jan 13 20:40:40.620033 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 13 20:40:40.745676 kubelet[2859]: E0113 20:40:40.745545 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:40.746467 containerd[1618]: time="2025-01-13T20:40:40.746383440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2dftn,Uid:92ef3e60-e018-4a43-abba-0fa8eca2c318,Namespace:kube-system,Attempt:0,}"
Jan 13 20:40:40.769012 containerd[1618]: time="2025-01-13T20:40:40.768912433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:40:40.769012 containerd[1618]: time="2025-01-13T20:40:40.768973639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:40:40.769012 containerd[1618]: time="2025-01-13T20:40:40.768984601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:40:40.769204 containerd[1618]: time="2025-01-13T20:40:40.769074722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:40:40.809643 containerd[1618]: time="2025-01-13T20:40:40.809605976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2dftn,Uid:92ef3e60-e018-4a43-abba-0fa8eca2c318,Namespace:kube-system,Attempt:0,} returns sandbox id \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\""
Jan 13 20:40:40.810242 kubelet[2859]: E0113 20:40:40.810221 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:40.812027 containerd[1618]: time="2025-01-13T20:40:40.812003331Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:40:40.824794 containerd[1618]: time="2025-01-13T20:40:40.824740975Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2bcbc9bff32e971bff147d994180d95301ced9957f1b188f595182a030849b9a\""
Jan 13 20:40:40.825160 containerd[1618]: time="2025-01-13T20:40:40.825135382Z" level=info msg="StartContainer for \"2bcbc9bff32e971bff147d994180d95301ced9957f1b188f595182a030849b9a\""
Jan 13 20:40:40.882386 containerd[1618]: time="2025-01-13T20:40:40.882338699Z" level=info msg="StartContainer for \"2bcbc9bff32e971bff147d994180d95301ced9957f1b188f595182a030849b9a\" returns successfully"
Jan 13 20:40:40.924010 containerd[1618]: time="2025-01-13T20:40:40.923943579Z" level=info msg="shim disconnected" id=2bcbc9bff32e971bff147d994180d95301ced9957f1b188f595182a030849b9a namespace=k8s.io
Jan 13 20:40:40.924010 containerd[1618]: time="2025-01-13T20:40:40.924005887Z" level=warning msg="cleaning up after shim disconnected" id=2bcbc9bff32e971bff147d994180d95301ced9957f1b188f595182a030849b9a namespace=k8s.io
Jan 13 20:40:40.924010 containerd[1618]: time="2025-01-13T20:40:40.924017349Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:40:40.973526 kubelet[2859]: E0113 20:40:40.973502 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:40.975160 containerd[1618]: time="2025-01-13T20:40:40.975123529Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:40:41.007144 containerd[1618]: time="2025-01-13T20:40:41.007021678Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f5de550e423a9a22965e34db00d97f44f05dd1ae4ded418f779b61a1f333af54\""
Jan 13 20:40:41.007782 containerd[1618]: time="2025-01-13T20:40:41.007554478Z" level=info msg="StartContainer for \"f5de550e423a9a22965e34db00d97f44f05dd1ae4ded418f779b61a1f333af54\""
Jan 13 20:40:41.062735 containerd[1618]: time="2025-01-13T20:40:41.062694498Z" level=info msg="StartContainer for \"f5de550e423a9a22965e34db00d97f44f05dd1ae4ded418f779b61a1f333af54\" returns successfully"
Jan 13 20:40:41.091680 containerd[1618]: time="2025-01-13T20:40:41.091620352Z" level=info msg="shim disconnected" id=f5de550e423a9a22965e34db00d97f44f05dd1ae4ded418f779b61a1f333af54 namespace=k8s.io
Jan 13 20:40:41.091680 containerd[1618]: time="2025-01-13T20:40:41.091676508Z" level=warning msg="cleaning up after shim disconnected" id=f5de550e423a9a22965e34db00d97f44f05dd1ae4ded418f779b61a1f333af54 namespace=k8s.io
Jan 13 20:40:41.091680 containerd[1618]: time="2025-01-13T20:40:41.091687329Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:40:41.609823 kubelet[2859]: I0113 20:40:41.609776 2859 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:40:41Z","lastTransitionTime":"2025-01-13T20:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:40:41.977516 kubelet[2859]: E0113 20:40:41.977379 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:41.981787 containerd[1618]: time="2025-01-13T20:40:41.980403549Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:40:41.998082 containerd[1618]: time="2025-01-13T20:40:41.998014231Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"026afaad5ea8b6521d26e3295e914e7f428e8b968f7db0b6e9a04d121ef38697\""
Jan 13 20:40:41.998585 containerd[1618]: time="2025-01-13T20:40:41.998552731Z" level=info msg="StartContainer for \"026afaad5ea8b6521d26e3295e914e7f428e8b968f7db0b6e9a04d121ef38697\""
Jan 13 20:40:42.073660 containerd[1618]: time="2025-01-13T20:40:42.073593592Z" level=info msg="StartContainer for \"026afaad5ea8b6521d26e3295e914e7f428e8b968f7db0b6e9a04d121ef38697\" returns successfully"
Jan 13 20:40:42.104019 containerd[1618]: time="2025-01-13T20:40:42.103936768Z" level=info msg="shim disconnected" id=026afaad5ea8b6521d26e3295e914e7f428e8b968f7db0b6e9a04d121ef38697 namespace=k8s.io
Jan 13 20:40:42.104019 containerd[1618]: time="2025-01-13T20:40:42.104008495Z" level=warning msg="cleaning up after shim disconnected" id=026afaad5ea8b6521d26e3295e914e7f428e8b968f7db0b6e9a04d121ef38697 namespace=k8s.io
Jan 13 20:40:42.104019 containerd[1618]: time="2025-01-13T20:40:42.104022050Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:40:42.598369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-026afaad5ea8b6521d26e3295e914e7f428e8b968f7db0b6e9a04d121ef38697-rootfs.mount: Deactivated successfully.
Jan 13 20:40:42.730651 kubelet[2859]: E0113 20:40:42.730598 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:42.980840 kubelet[2859]: E0113 20:40:42.980613 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:42.982903 containerd[1618]: time="2025-01-13T20:40:42.982831285Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:40:43.004386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110527457.mount: Deactivated successfully.
Jan 13 20:40:43.008416 containerd[1618]: time="2025-01-13T20:40:43.008368153Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9a2408728cc0d067ffed02de1bac5857e0f3b1780afe470ab6c85ca8eb43fd87\""
Jan 13 20:40:43.008881 containerd[1618]: time="2025-01-13T20:40:43.008859053Z" level=info msg="StartContainer for \"9a2408728cc0d067ffed02de1bac5857e0f3b1780afe470ab6c85ca8eb43fd87\""
Jan 13 20:40:43.063044 containerd[1618]: time="2025-01-13T20:40:43.062978232Z" level=info msg="StartContainer for \"9a2408728cc0d067ffed02de1bac5857e0f3b1780afe470ab6c85ca8eb43fd87\" returns successfully"
Jan 13 20:40:43.083935 containerd[1618]: time="2025-01-13T20:40:43.083872397Z" level=info msg="shim disconnected" id=9a2408728cc0d067ffed02de1bac5857e0f3b1780afe470ab6c85ca8eb43fd87 namespace=k8s.io
Jan 13 20:40:43.083935 containerd[1618]: time="2025-01-13T20:40:43.083928123Z" level=warning msg="cleaning up after shim disconnected" id=9a2408728cc0d067ffed02de1bac5857e0f3b1780afe470ab6c85ca8eb43fd87 namespace=k8s.io
Jan 13 20:40:43.083935 containerd[1618]: time="2025-01-13T20:40:43.083937110Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:40:43.598930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a2408728cc0d067ffed02de1bac5857e0f3b1780afe470ab6c85ca8eb43fd87-rootfs.mount: Deactivated successfully.
Jan 13 20:40:43.985273 kubelet[2859]: E0113 20:40:43.985148 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:43.990946 containerd[1618]: time="2025-01-13T20:40:43.990895514Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:40:44.006852 containerd[1618]: time="2025-01-13T20:40:44.006810466Z" level=info msg="CreateContainer within sandbox \"24824792b09d293128474f555a58f5df6d1dcd7cb271942ca308bcda42f07c98\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"56186ccdb22fe99ee3d50992c204d7080f5cbd700b1677ff5ed2c4ed40dc79f8\""
Jan 13 20:40:44.007452 containerd[1618]: time="2025-01-13T20:40:44.007426633Z" level=info msg="StartContainer for \"56186ccdb22fe99ee3d50992c204d7080f5cbd700b1677ff5ed2c4ed40dc79f8\""
Jan 13 20:40:44.071624 containerd[1618]: time="2025-01-13T20:40:44.071583136Z" level=info msg="StartContainer for \"56186ccdb22fe99ee3d50992c204d7080f5cbd700b1677ff5ed2c4ed40dc79f8\" returns successfully"
Jan 13 20:40:44.487807 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 20:40:44.990201 kubelet[2859]: E0113 20:40:44.990154 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:46.747391 kubelet[2859]: E0113 20:40:46.747355 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:47.642584 systemd-networkd[1244]: lxc_health: Link UP
Jan 13 20:40:47.648482 systemd-networkd[1244]: lxc_health: Gained carrier
Jan 13 20:40:47.731836 kubelet[2859]: E0113 20:40:47.731791 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:48.747997 kubelet[2859]: E0113 20:40:48.747948 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:48.765834 kubelet[2859]: I0113 20:40:48.765172 2859 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2dftn" podStartSLOduration=8.765121834 podStartE2EDuration="8.765121834s" podCreationTimestamp="2025-01-13 20:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:40:45.006123431 +0000 UTC m=+95.375222923" watchObservedRunningTime="2025-01-13 20:40:48.765121834 +0000 UTC m=+99.134221316"
Jan 13 20:40:48.998500 kubelet[2859]: E0113 20:40:48.998223 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:49.311150 systemd-networkd[1244]: lxc_health: Gained IPv6LL
Jan 13 20:40:50.000867 kubelet[2859]: E0113 20:40:50.000824 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:40:55.391337 sshd[4752]: Connection closed by 10.0.0.1 port 49320
Jan 13 20:40:55.391799 sshd-session[4740]: pam_unix(sshd:session): session closed for user core
Jan 13 20:40:55.395603 systemd[1]: sshd@28-10.0.0.71:22-10.0.0.1:49320.service: Deactivated successfully.
Jan 13 20:40:55.397924 systemd-logind[1594]: Session 29 logged out. Waiting for processes to exit.
Jan 13 20:40:55.397938 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 20:40:55.399083 systemd-logind[1594]: Removed session 29.
Jan 13 20:40:56.731275 kubelet[2859]: E0113 20:40:56.731228 2859 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"