Dec 13 13:32:28.924817 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:32:28.924837 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:32:28.924848 kernel: BIOS-provided physical RAM map:
Dec 13 13:32:28.924855 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 13:32:28.924861 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 13:32:28.924868 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 13:32:28.924875 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 13:32:28.924882 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 13:32:28.924888 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 13 13:32:28.924908 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 13 13:32:28.924915 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Dec 13 13:32:28.924926 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 13 13:32:28.924933 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 13 13:32:28.924940 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 13 13:32:28.924948 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 13 13:32:28.924955 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 13:32:28.924964 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Dec 13 13:32:28.924971 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Dec 13 13:32:28.924978 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Dec 13 13:32:28.924985 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Dec 13 13:32:28.924992 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 13 13:32:28.924999 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 13:32:28.925006 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 13:32:28.925013 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:32:28.925020 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 13 13:32:28.925027 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:32:28.925034 kernel: NX (Execute Disable) protection: active
Dec 13 13:32:28.925043 kernel: APIC: Static calls initialized
Dec 13 13:32:28.925050 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Dec 13 13:32:28.925057 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Dec 13 13:32:28.925064 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Dec 13 13:32:28.925071 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Dec 13 13:32:28.925077 kernel: extended physical RAM map:
Dec 13 13:32:28.925084 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 13:32:28.925091 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 13:32:28.925098 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 13:32:28.925106 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 13:32:28.925112 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 13:32:28.925119 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 13 13:32:28.925129 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 13 13:32:28.925146 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Dec 13 13:32:28.925164 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Dec 13 13:32:28.925171 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Dec 13 13:32:28.925179 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Dec 13 13:32:28.925186 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Dec 13 13:32:28.925196 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 13 13:32:28.925203 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 13 13:32:28.925210 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 13 13:32:28.925218 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 13 13:32:28.925225 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 13:32:28.925232 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Dec 13 13:32:28.925239 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Dec 13 13:32:28.925247 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Dec 13 13:32:28.925254 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Dec 13 13:32:28.925263 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 13 13:32:28.925271 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 13:32:28.925278 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 13:32:28.925285 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:32:28.925293 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 13 13:32:28.925300 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:32:28.925307 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:32:28.925314 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Dec 13 13:32:28.925322 kernel: random: crng init done
Dec 13 13:32:28.925329 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Dec 13 13:32:28.925337 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Dec 13 13:32:28.925344 kernel: secureboot: Secure boot disabled
Dec 13 13:32:28.925353 kernel: SMBIOS 2.8 present.
Dec 13 13:32:28.925361 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 13 13:32:28.925368 kernel: Hypervisor detected: KVM
Dec 13 13:32:28.925375 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:32:28.925383 kernel: kvm-clock: using sched offset of 2645145395 cycles
Dec 13 13:32:28.925390 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:32:28.925398 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 13:32:28.925406 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:32:28.925413 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:32:28.925421 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 13 13:32:28.925430 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 13:32:28.925438 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:32:28.925446 kernel: Using GB pages for direct mapping
Dec 13 13:32:28.925453 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:32:28.925461 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 13:32:28.925468 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:32:28.925476 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:28.925483 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:28.925491 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 13:32:28.925500 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:28.925508 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:28.925515 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:28.925523 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:32:28.925530 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 13:32:28.925538 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 13:32:28.925545 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 13:32:28.925553 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 13:32:28.925560 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 13:32:28.925570 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 13:32:28.925578 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 13:32:28.925585 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 13:32:28.925593 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 13:32:28.925600 kernel: No NUMA configuration found
Dec 13 13:32:28.925608 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Dec 13 13:32:28.925615 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Dec 13 13:32:28.925623 kernel: Zone ranges:
Dec 13 13:32:28.925630 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:32:28.925640 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Dec 13 13:32:28.925647 kernel: Normal empty
Dec 13 13:32:28.925655 kernel: Movable zone start for each node
Dec 13 13:32:28.925662 kernel: Early memory node ranges
Dec 13 13:32:28.925670 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 13:32:28.925677 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 13:32:28.925684 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 13:32:28.925692 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Dec 13 13:32:28.925699 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Dec 13 13:32:28.925706 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Dec 13 13:32:28.925716 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Dec 13 13:32:28.925724 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Dec 13 13:32:28.925731 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Dec 13 13:32:28.925738 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:32:28.925746 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 13:32:28.925761 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 13:32:28.925770 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:32:28.925778 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Dec 13 13:32:28.925786 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Dec 13 13:32:28.925794 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 13:32:28.925801 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 13 13:32:28.925809 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Dec 13 13:32:28.925819 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:32:28.925826 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:32:28.925834 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:32:28.925842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:32:28.925850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:32:28.925860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:32:28.925867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:32:28.925875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:32:28.925883 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:32:28.925913 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 13:32:28.925920 kernel: TSC deadline timer available
Dec 13 13:32:28.925928 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 13:32:28.925936 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:32:28.925944 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 13:32:28.925954 kernel: kvm-guest: setup PV sched yield
Dec 13 13:32:28.925962 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 13 13:32:28.925969 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:32:28.925978 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:32:28.925985 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 13:32:28.925993 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 13:32:28.926001 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 13:32:28.926008 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 13:32:28.926016 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:32:28.926026 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:32:28.926035 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:32:28.926043 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:32:28.926051 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:32:28.926068 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:32:28.926083 kernel: Fallback order for Node 0: 0
Dec 13 13:32:28.926099 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Dec 13 13:32:28.926121 kernel: Policy zone: DMA32
Dec 13 13:32:28.926145 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:32:28.926162 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 177824K reserved, 0K cma-reserved)
Dec 13 13:32:28.926177 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:32:28.926193 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:32:28.926201 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:32:28.926223 kernel: Dynamic Preempt: voluntary
Dec 13 13:32:28.926231 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:32:28.926239 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:32:28.926248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:32:28.926258 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:32:28.926266 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:32:28.926274 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:32:28.926282 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:32:28.926289 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:32:28.926297 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 13:32:28.926305 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:32:28.926312 kernel: Console: colour dummy device 80x25
Dec 13 13:32:28.926320 kernel: printk: console [ttyS0] enabled
Dec 13 13:32:28.926330 kernel: ACPI: Core revision 20230628
Dec 13 13:32:28.926338 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 13:32:28.926345 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:32:28.926353 kernel: x2apic enabled
Dec 13 13:32:28.926361 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:32:28.926368 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 13:32:28.926376 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 13:32:28.926384 kernel: kvm-guest: setup PV IPIs
Dec 13 13:32:28.926392 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 13:32:28.926402 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 13:32:28.926409 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 13:32:28.926417 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:32:28.926425 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 13:32:28.926432 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 13:32:28.926440 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:32:28.926448 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:32:28.926456 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:32:28.926463 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:32:28.926473 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 13:32:28.926481 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 13:32:28.926489 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:32:28.926496 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:32:28.926504 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 13:32:28.926512 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 13:32:28.926520 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 13:32:28.926528 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:32:28.926538 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:32:28.926546 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:32:28.926553 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:32:28.926561 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 13:32:28.926569 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:32:28.926576 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:32:28.926584 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:32:28.926592 kernel: landlock: Up and running.
Dec 13 13:32:28.926599 kernel: SELinux: Initializing.
Dec 13 13:32:28.926609 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:32:28.926617 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:32:28.926625 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 13:32:28.926632 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:32:28.926640 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:32:28.926648 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:32:28.926656 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 13:32:28.926663 kernel: ... version: 0
Dec 13 13:32:28.926671 kernel: ... bit width: 48
Dec 13 13:32:28.926681 kernel: ... generic registers: 6
Dec 13 13:32:28.926688 kernel: ... value mask: 0000ffffffffffff
Dec 13 13:32:28.926696 kernel: ... max period: 00007fffffffffff
Dec 13 13:32:28.926704 kernel: ... fixed-purpose events: 0
Dec 13 13:32:28.926711 kernel: ... event mask: 000000000000003f
Dec 13 13:32:28.926719 kernel: signal: max sigframe size: 1776
Dec 13 13:32:28.926726 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:32:28.926734 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:32:28.926742 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:32:28.926751 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:32:28.926759 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 13:32:28.926767 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:32:28.926774 kernel: smpboot: Max logical packages: 1
Dec 13 13:32:28.926782 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 13:32:28.926790 kernel: devtmpfs: initialized
Dec 13 13:32:28.926797 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:32:28.926805 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 13:32:28.926813 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 13:32:28.926820 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Dec 13 13:32:28.926830 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 13:32:28.926838 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Dec 13 13:32:28.926846 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 13:32:28.926854 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:32:28.926862 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:32:28.926869 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:32:28.926877 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:32:28.926885 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:32:28.926906 kernel: audit: type=2000 audit(1734096748.511:1): state=initialized audit_enabled=0 res=1
Dec 13 13:32:28.926913 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:32:28.926921 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:32:28.926929 kernel: cpuidle: using governor menu
Dec 13 13:32:28.926936 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:32:28.926944 kernel: dca service started, version 1.12.1
Dec 13 13:32:28.926952 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 13:32:28.926959 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:32:28.926967 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:32:28.926977 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:32:28.926985 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:32:28.926992 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:32:28.927000 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:32:28.927008 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:32:28.927015 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:32:28.927023 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:32:28.927031 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:32:28.927038 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:32:28.927048 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:32:28.927056 kernel: ACPI: Interpreter enabled
Dec 13 13:32:28.927064 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 13:32:28.927071 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:32:28.927079 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:32:28.927087 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:32:28.927095 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:32:28.927102 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:32:28.927307 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:32:28.927515 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 13:32:28.927684 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 13:32:28.927696 kernel: PCI host bridge to bus 0000:00
Dec 13 13:32:28.927841 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:32:28.927973 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:32:28.928086 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:32:28.928211 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 13 13:32:28.928323 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 13 13:32:28.928433 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 13 13:32:28.928553 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:32:28.928700 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:32:28.928841 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 13:32:28.929125 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 13:32:28.929277 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 13:32:28.929401 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 13:32:28.929523 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 13:32:28.929644 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:32:28.929775 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:32:28.929919 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 13:32:28.930085 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 13:32:28.930217 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 13 13:32:28.930348 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:32:28.930470 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 13:32:28.930592 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 13:32:28.930713 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 13 13:32:28.930847 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:32:28.930991 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 13:32:28.931116 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 13:32:28.931248 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 13 13:32:28.931369 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 13:32:28.931498 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:32:28.931619 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:32:28.931749 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:32:28.931878 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 13:32:28.932056 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 13:32:28.932219 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:32:28.932342 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 13:32:28.932353 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:32:28.932361 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:32:28.932369 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:32:28.932380 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:32:28.932388 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:32:28.932396 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:32:28.932403 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:32:28.932411 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:32:28.932419 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:32:28.932426 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:32:28.932434 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:32:28.932442 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:32:28.932452 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:32:28.932460 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:32:28.932468 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:32:28.932475 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:32:28.932483 kernel: iommu: Default domain type: Translated
Dec 13 13:32:28.932491 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:32:28.932498 kernel: efivars: Registered efivars operations
Dec 13 13:32:28.932506 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:32:28.932514 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:32:28.932524 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 13:32:28.932531 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Dec 13 13:32:28.932539 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Dec 13 13:32:28.932546 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Dec 13 13:32:28.932554 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Dec 13 13:32:28.932561 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Dec 13 13:32:28.932569 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Dec 13 13:32:28.932576 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Dec 13 13:32:28.932697 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:32:28.932821 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:32:28.933012 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:32:28.933024 kernel: vgaarb: loaded
Dec 13 13:32:28.933032 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 13:32:28.933040 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 13:32:28.933048 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:32:28.933056 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:32:28.933063 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:32:28.933071 kernel: pnp: PnP ACPI init
Dec 13 13:32:28.933226 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 13 13:32:28.933239 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 13:32:28.933247 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:32:28.933255 kernel: NET: Registered PF_INET protocol family
Dec 13 13:32:28.933282 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:32:28.933293 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:32:28.933301 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:32:28.933310 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:32:28.933320 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:32:28.933328 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:32:28.933336 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:32:28.933344 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:32:28.933352 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:32:28.933360 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:32:28.933484 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 13:32:28.933606 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 13:32:28.933721 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:32:28.933830 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:32:28.933955 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:32:28.934067 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 13 13:32:28.934186 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 13 13:32:28.934297 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 13 13:32:28.934307 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:32:28.934315 kernel: Initialise system trusted keyrings
Dec 13 13:32:28.934347 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:32:28.934355 kernel: Key type asymmetric registered
Dec 13 13:32:28.934363 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:32:28.934371 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:32:28.934379 kernel: io scheduler mq-deadline registered
Dec 13 13:32:28.934387 kernel: io scheduler kyber registered
Dec 13 13:32:28.934395 kernel: io scheduler bfq registered
Dec 13 13:32:28.934403 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:32:28.934412 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:32:28.934423 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 13:32:28.934433 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 13:32:28.934441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:32:28.934450 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:32:28.934458 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 13:32:28.934466 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:32:28.934476 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:32:28.934604 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 13:32:28.934719 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 13:32:28.934730 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 13:32:28.934840 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:32:28 UTC (1734096748)
Dec 13 13:32:28.935007 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 13:32:28.935018 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 13:32:28.935026 kernel: efifb: probing for efifb
Dec 13 13:32:28.935038 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 13 13:32:28.935046 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 13 13:32:28.935054 kernel: efifb: scrolling: redraw
Dec 13 13:32:28.935062 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 13:32:28.935071 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 13:32:28.935079 kernel: fb0: EFI VGA frame buffer device
Dec 13 13:32:28.935087 kernel: pstore: Using crash dump compression: deflate
Dec 13 13:32:28.935095 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 13:32:28.935103 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:32:28.935113 kernel: Segment Routing with IPv6
Dec 13 13:32:28.935121 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:32:28.935129 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:32:28.935147 kernel: Key type dns_resolver registered
Dec 13 13:32:28.935155 kernel: IPI shorthand broadcast: enabled
Dec 13 13:32:28.935163 kernel: sched_clock: Marking stable (661002588, 208676846)->(921649051, -51969617)
Dec 13 13:32:28.935172 kernel: registered taskstats version 1
Dec 13 13:32:28.935180 kernel: Loading compiled-in X.509 certificates
Dec 13 13:32:28.935188 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:32:28.935198 kernel: Key type .fscrypt registered
Dec 13 13:32:28.935206 kernel: Key type fscrypt-provisioning registered
Dec 13 13:32:28.935214 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:32:28.935222 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:32:28.935230 kernel: ima: No architecture policies found Dec 13 13:32:28.935238 kernel: clk: Disabling unused clocks Dec 13 13:32:28.935247 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:32:28.935255 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:32:28.935263 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:32:28.935273 kernel: Run /init as init process Dec 13 13:32:28.935281 kernel: with arguments: Dec 13 13:32:28.935289 kernel: /init Dec 13 13:32:28.935297 kernel: with environment: Dec 13 13:32:28.935305 kernel: HOME=/ Dec 13 13:32:28.935313 kernel: TERM=linux Dec 13 13:32:28.935321 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:32:28.935331 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:32:28.935344 systemd[1]: Detected virtualization kvm. Dec 13 13:32:28.935353 systemd[1]: Detected architecture x86-64. Dec 13 13:32:28.935361 systemd[1]: Running in initrd. Dec 13 13:32:28.935370 systemd[1]: No hostname configured, using default hostname. Dec 13 13:32:28.935378 systemd[1]: Hostname set to . Dec 13 13:32:28.935387 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:32:28.935396 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:32:28.935404 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:32:28.935415 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 13:32:28.935425 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:32:28.935433 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:32:28.935442 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:32:28.935451 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:32:28.935461 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:32:28.935472 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:32:28.935481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:32:28.935489 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:32:28.935498 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:32:28.935506 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:32:28.935515 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:32:28.935523 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:32:28.935532 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:32:28.935540 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:32:28.935551 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:32:28.935560 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:32:28.935568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:32:28.935577 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:32:28.935586 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 13:32:28.935594 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:32:28.935603 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:32:28.935612 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:32:28.935620 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:32:28.935631 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:32:28.935640 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:32:28.935648 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:32:28.935657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:32:28.935666 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:32:28.935675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:32:28.935683 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:32:28.935711 systemd-journald[193]: Collecting audit messages is disabled. Dec 13 13:32:28.935734 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:32:28.935743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:32:28.935751 systemd-journald[193]: Journal started Dec 13 13:32:28.935770 systemd-journald[193]: Runtime Journal (/run/log/journal/7d1a085cce2c4b5caff2c21c651dbb96) is 6.0M, max 48.2M, 42.2M free. Dec 13 13:32:28.927804 systemd-modules-load[194]: Inserted module 'overlay' Dec 13 13:32:28.942926 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:32:28.944915 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:32:28.945044 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Dec 13 13:32:28.949372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:32:28.953044 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:32:28.958912 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:32:28.960629 systemd-modules-load[194]: Inserted module 'br_netfilter' Dec 13 13:32:28.960910 kernel: Bridge firewalling registered Dec 13 13:32:28.961723 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:32:28.969720 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:32:28.970468 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:32:28.972228 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:32:28.973794 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:32:28.977835 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:32:29.002076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:32:29.005803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:32:29.013449 dracut-cmdline[228]: dracut-dracut-053 Dec 13 13:32:29.023866 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:32:29.057318 systemd-resolved[233]: Positive Trust Anchors: Dec 13 13:32:29.057332 systemd-resolved[233]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:32:29.057362 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:32:29.067969 systemd-resolved[233]: Defaulting to hostname 'linux'. Dec 13 13:32:29.069852 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:32:29.070031 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:32:29.103925 kernel: SCSI subsystem initialized Dec 13 13:32:29.113920 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:32:29.123920 kernel: iscsi: registered transport (tcp) Dec 13 13:32:29.145921 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:32:29.145986 kernel: QLogic iSCSI HBA Driver Dec 13 13:32:29.196585 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:32:29.225183 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:32:29.252995 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 13:32:29.253069 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:32:29.254245 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:32:29.298941 kernel: raid6: avx2x4 gen() 23945 MB/s Dec 13 13:32:29.315936 kernel: raid6: avx2x2 gen() 22671 MB/s Dec 13 13:32:29.340936 kernel: raid6: avx2x1 gen() 18544 MB/s Dec 13 13:32:29.341025 kernel: raid6: using algorithm avx2x4 gen() 23945 MB/s Dec 13 13:32:29.358355 kernel: raid6: .... xor() 6743 MB/s, rmw enabled Dec 13 13:32:29.358392 kernel: raid6: using avx2x2 recovery algorithm Dec 13 13:32:29.381958 kernel: xor: automatically using best checksumming function avx Dec 13 13:32:29.536932 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:32:29.550811 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:32:29.571127 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:32:29.583474 systemd-udevd[414]: Using default interface naming scheme 'v255'. Dec 13 13:32:29.591390 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:32:29.610060 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:32:29.623309 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Dec 13 13:32:29.659346 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:32:29.674082 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:32:29.736547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:32:29.748390 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:32:29.761042 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:32:29.763026 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 13 13:32:29.764337 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:32:29.769124 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:32:29.775943 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 13:32:29.811873 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:32:29.811909 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 13:32:29.812059 kernel: libata version 3.00 loaded. Dec 13 13:32:29.812071 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:32:29.812082 kernel: GPT:9289727 != 19775487 Dec 13 13:32:29.812093 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:32:29.812103 kernel: GPT:9289727 != 19775487 Dec 13 13:32:29.812121 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:32:29.812132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:32:29.783134 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:32:29.797356 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:32:29.814918 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 13:32:29.833875 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 13:32:29.833910 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 13:32:29.834059 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 13:32:29.834215 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 13:32:29.834227 kernel: scsi host0: ahci Dec 13 13:32:29.834380 kernel: scsi host1: ahci Dec 13 13:32:29.834525 kernel: AES CTR mode by8 optimization enabled Dec 13 13:32:29.834536 kernel: scsi host2: ahci Dec 13 13:32:29.834681 kernel: scsi host3: ahci Dec 13 13:32:29.834824 kernel: scsi host4: ahci Dec 13 13:32:29.835000 kernel: scsi host5: ahci Dec 13 13:32:29.835153 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Dec 13 13:32:29.835165 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Dec 13 13:32:29.835175 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Dec 13 13:32:29.835186 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Dec 13 13:32:29.835196 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Dec 13 13:32:29.835210 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Dec 13 13:32:29.817497 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:32:29.817776 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:32:29.820795 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:32:29.823366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:32:29.823666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:32:29.849702 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (479) Dec 13 13:32:29.849724 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (460) Dec 13 13:32:29.826128 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:32:29.838743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 13:32:29.855387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:32:29.871150 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 13:32:29.886087 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 13:32:29.892164 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:32:29.905203 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 13:32:29.910130 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 13:32:29.925143 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:32:29.928673 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:32:29.943322 disk-uuid[556]: Primary Header is updated. Dec 13 13:32:29.943322 disk-uuid[556]: Secondary Entries is updated. Dec 13 13:32:29.943322 disk-uuid[556]: Secondary Header is updated. Dec 13 13:32:29.947922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:32:29.952732 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:32:30.144653 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 13:32:30.144721 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 13:32:30.144733 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 13:32:30.144743 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 13:32:30.145920 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 13:32:30.146921 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 13:32:30.148041 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 13:32:30.148071 kernel: ata3.00: applying bridge limits Dec 13 13:32:30.148935 kernel: ata3.00: configured for UDMA/100 Dec 13 13:32:30.148993 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 13:32:30.199913 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 13:32:30.213530 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 13:32:30.213548 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 13:32:30.963540 disk-uuid[561]: The operation has completed successfully. Dec 13 13:32:30.964747 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:32:30.998049 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:32:30.998174 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:32:31.021009 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:32:31.024323 sh[592]: Success Dec 13 13:32:31.036937 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 13:32:31.078405 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:32:31.100411 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:32:31.103281 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 13:32:31.115438 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:32:31.115489 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:32:31.115501 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:32:31.116458 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:32:31.117194 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:32:31.122594 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:32:31.124213 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:32:31.129043 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:32:31.131582 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:32:31.140918 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:32:31.140948 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:32:31.140959 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:32:31.143935 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:32:31.152794 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:32:31.154615 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:32:31.164619 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:32:31.172034 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 13:32:31.221640 ignition[682]: Ignition 2.20.0 Dec 13 13:32:31.221651 ignition[682]: Stage: fetch-offline Dec 13 13:32:31.221690 ignition[682]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:32:31.221702 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:32:31.221807 ignition[682]: parsed url from cmdline: "" Dec 13 13:32:31.221812 ignition[682]: no config URL provided Dec 13 13:32:31.221818 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:32:31.221828 ignition[682]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:32:31.221860 ignition[682]: op(1): [started] loading QEMU firmware config module Dec 13 13:32:31.221867 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 13:32:31.230770 ignition[682]: op(1): [finished] loading QEMU firmware config module Dec 13 13:32:31.249283 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:32:31.261024 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:32:31.278463 ignition[682]: parsing config with SHA512: 9b26753b00f3adaa93c7e0d09d7cbd8cc9aef8babd7711b26643ade852555304fd2797de58cd3ffac74ebee55f68a21914bfb91449b819ad6a822c3f126b010d Dec 13 13:32:31.284197 systemd-networkd[780]: lo: Link UP Dec 13 13:32:31.284207 systemd-networkd[780]: lo: Gained carrier Dec 13 13:32:31.285175 unknown[682]: fetched base config from "system" Dec 13 13:32:31.285186 unknown[682]: fetched user config from "qemu" Dec 13 13:32:31.285853 systemd-networkd[780]: Enumeration completed Dec 13 13:32:31.287427 ignition[682]: fetch-offline: fetch-offline passed Dec 13 13:32:31.286300 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 13 13:32:31.289326 ignition[682]: Ignition finished successfully Dec 13 13:32:31.286336 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:32:31.286340 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:32:31.287357 systemd-networkd[780]: eth0: Link UP Dec 13 13:32:31.287361 systemd-networkd[780]: eth0: Gained carrier Dec 13 13:32:31.287368 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:32:31.289400 systemd[1]: Reached target network.target - Network. Dec 13 13:32:31.294037 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:32:31.297526 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 13:32:31.303940 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:32:31.304115 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:32:31.318415 ignition[783]: Ignition 2.20.0 Dec 13 13:32:31.318430 ignition[783]: Stage: kargs Dec 13 13:32:31.318641 ignition[783]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:32:31.318655 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:32:31.319714 ignition[783]: kargs: kargs passed Dec 13 13:32:31.319764 ignition[783]: Ignition finished successfully Dec 13 13:32:31.324008 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:32:31.335236 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 13:32:31.348148 ignition[793]: Ignition 2.20.0 Dec 13 13:32:31.348165 ignition[793]: Stage: disks Dec 13 13:32:31.348402 ignition[793]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:32:31.348418 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:32:31.349587 ignition[793]: disks: disks passed Dec 13 13:32:31.351673 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:32:31.349643 ignition[793]: Ignition finished successfully Dec 13 13:32:31.353614 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:32:31.355742 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:32:31.357913 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:32:31.360141 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:32:31.362512 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:32:31.375322 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:32:31.396136 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 13:32:31.475838 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:32:31.483000 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:32:31.583937 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. Dec 13 13:32:31.584736 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:32:31.589345 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:32:31.603971 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:32:31.605682 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:32:31.606760 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Dec 13 13:32:31.616035 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) Dec 13 13:32:31.616056 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:32:31.616078 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:32:31.616093 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:32:31.606798 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:32:31.620159 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:32:31.606818 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:32:31.614604 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:32:31.616809 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:32:31.621331 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:32:31.657222 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:32:31.661412 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:32:31.665612 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:32:31.669587 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:32:31.758314 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:32:31.772010 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:32:31.773742 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:32:31.780939 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:32:31.799474 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 13 13:32:31.801821 ignition[926]: INFO : Ignition 2.20.0 Dec 13 13:32:31.801821 ignition[926]: INFO : Stage: mount Dec 13 13:32:31.801821 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:32:31.801821 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:32:31.807159 ignition[926]: INFO : mount: mount passed Dec 13 13:32:31.807159 ignition[926]: INFO : Ignition finished successfully Dec 13 13:32:31.804800 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:32:31.817101 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:32:32.114877 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:32:32.124249 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:32:32.130922 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941) Dec 13 13:32:32.130955 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:32:32.132658 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:32:32.132680 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:32:32.135912 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:32:32.136986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:32:32.159271 ignition[958]: INFO : Ignition 2.20.0
Dec 13 13:32:32.159271 ignition[958]: INFO : Stage: files
Dec 13 13:32:32.161477 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:32.161477 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:32:32.161477 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:32:32.161477 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:32:32.161477 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:32:32.168370 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:32:32.168370 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:32:32.168370 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:32:32.168370 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:32:32.168370 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 13:32:32.164452 unknown[958]: wrote ssh authorized keys file for user: core
Dec 13 13:32:32.206068 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:32:32.275823 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:32:32.278465 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:32:32.278465 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 13:32:32.746689 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 13:32:32.796061 systemd-networkd[780]: eth0: Gained IPv6LL
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:32:32.841160 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 13:32:33.271462 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 13:32:33.583595 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:32:33.583595 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 13:32:33.587654 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:32:33.587654 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:32:33.587654 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 13:32:33.587654 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 13:32:33.587654 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:32:33.587654 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:32:33.587654 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 13:32:33.587654 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:32:33.607636 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:32:33.611646 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:32:33.613235 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:32:33.613235 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:32:33.613235 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:32:33.613235 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:32:33.613235 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:32:33.613235 ignition[958]: INFO : files: files passed
Dec 13 13:32:33.613235 ignition[958]: INFO : Ignition finished successfully
Dec 13 13:32:33.614444 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:32:33.626017 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:32:33.627720 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:32:33.629519 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:32:33.629622 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:32:33.636423 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 13:32:33.638853 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:32:33.638853 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:32:33.641936 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:32:33.644711 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:32:33.644879 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:32:33.652026 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:32:33.676964 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:32:33.677087 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:32:33.678173 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:32:33.680339 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:32:33.683229 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:32:33.685686 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:32:33.704298 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:32:33.716002 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:32:33.727131 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:32:33.727264 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:32:33.729450 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:32:33.731621 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:32:33.731724 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:32:33.736352 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:32:33.736478 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:32:33.738356 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:32:33.741072 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:32:33.743230 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:32:33.744336 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:32:33.746393 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:32:33.748300 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:32:33.750592 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:32:33.753439 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:32:33.754373 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:32:33.754478 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:32:33.758734 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:32:33.758864 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:32:33.760855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:32:33.760950 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:32:33.763075 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:32:33.763179 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:32:33.768256 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:32:33.768364 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:32:33.769391 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:32:33.771351 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:32:33.776936 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:32:33.777086 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:32:33.779630 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:32:33.781300 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:32:33.781386 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:32:33.783033 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:32:33.783113 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:32:33.784746 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:32:33.784850 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:32:33.786566 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:32:33.786665 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:32:33.801036 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:32:33.802606 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:32:33.803779 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:32:33.803888 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:32:33.804844 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:32:33.804962 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:32:33.812923 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:32:33.813949 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:32:33.820427 ignition[1012]: INFO : Ignition 2.20.0
Dec 13 13:32:33.820427 ignition[1012]: INFO : Stage: umount
Dec 13 13:32:33.822176 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:32:33.822176 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:32:33.825021 ignition[1012]: INFO : umount: umount passed
Dec 13 13:32:33.825917 ignition[1012]: INFO : Ignition finished successfully
Dec 13 13:32:33.828985 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:32:33.829124 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:32:33.831077 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:32:33.831506 systemd[1]: Stopped target network.target - Network.
Dec 13 13:32:33.831822 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:32:33.831870 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:32:33.834637 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:32:33.834687 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:32:33.835466 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:32:33.835510 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:32:33.838295 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:32:33.838340 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:32:33.839491 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:32:33.842418 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:32:33.846332 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:32:33.846458 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:32:33.847507 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:32:33.847561 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:32:33.856243 systemd-networkd[780]: eth0: DHCPv6 lease lost
Dec 13 13:32:33.859688 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:32:33.859834 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:32:33.862502 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:32:33.862555 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:32:33.873096 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:32:33.874096 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:32:33.874171 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:32:33.876393 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:32:33.876444 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:32:33.878581 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:32:33.878631 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:32:33.881167 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:32:33.894381 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:32:33.894516 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:32:33.903758 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:32:33.918860 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:32:33.921637 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:32:33.921692 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:32:33.924807 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:32:33.924850 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:32:33.927769 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:32:33.927824 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:32:33.930875 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:32:33.931805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:32:33.933927 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:32:33.934906 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:32:33.953034 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:32:33.954191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:32:33.955344 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:32:33.957657 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 13:32:33.957709 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:32:33.962641 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:32:33.962691 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:32:33.965969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:32:33.966971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:32:33.969468 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:32:33.970557 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:32:34.082100 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:32:34.083153 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:32:34.085593 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:32:34.087752 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:32:34.088792 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:32:34.106067 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:32:34.113126 systemd[1]: Switching root.
Dec 13 13:32:34.153155 systemd-journald[193]: Journal stopped
Dec 13 13:32:35.902515 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:32:35.902575 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:32:35.902589 kernel: SELinux: policy capability open_perms=1
Dec 13 13:32:35.902600 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:32:35.902612 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:32:35.902626 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:32:35.902643 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:32:35.902655 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:32:35.902670 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:32:35.902682 kernel: audit: type=1403 audit(1734096755.128:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:32:35.902694 systemd[1]: Successfully loaded SELinux policy in 39.201ms.
Dec 13 13:32:35.902712 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.376ms.
Dec 13 13:32:35.902726 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:32:35.902738 systemd[1]: Detected virtualization kvm.
Dec 13 13:32:35.902753 systemd[1]: Detected architecture x86-64.
Dec 13 13:32:35.902765 systemd[1]: Detected first boot.
Dec 13 13:32:35.902777 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:32:35.902789 zram_generator::config[1058]: No configuration found.
Dec 13 13:32:35.902802 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:32:35.902818 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:32:35.902830 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:32:35.902843 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:32:35.902858 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:32:35.902878 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:32:35.903814 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:32:35.903838 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:32:35.903851 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:32:35.903864 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:32:35.903876 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:32:35.903888 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:32:35.903915 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:32:35.903928 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:32:35.903943 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:32:35.903965 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:32:35.903977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:32:35.903989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:32:35.904002 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:32:35.904014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:32:35.904026 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:32:35.904038 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:32:35.910920 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:32:35.910960 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:32:35.910973 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:32:35.910986 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:32:35.910998 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:32:35.911011 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:32:35.911023 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:32:35.911039 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:32:35.911051 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:32:35.911063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:32:35.911075 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:32:35.911087 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:32:35.911100 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:32:35.911112 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:32:35.911124 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:32:35.911137 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:32:35.911152 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:32:35.911165 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:32:35.911177 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:32:35.911189 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:32:35.911202 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:32:35.911214 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:32:35.911227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:32:35.911239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:32:35.911251 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:32:35.911266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:32:35.911278 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:32:35.911290 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:32:35.911302 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:32:35.911315 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:32:35.911327 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:32:35.911339 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:32:35.911351 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:32:35.911366 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:32:35.911378 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:32:35.911390 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:32:35.911405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:32:35.911417 kernel: loop: module loaded
Dec 13 13:32:35.911430 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:32:35.911442 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:32:35.911456 kernel: fuse: init (API version 7.39)
Dec 13 13:32:35.911467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:32:35.911482 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:32:35.911495 systemd[1]: Stopped verity-setup.service.
Dec 13 13:32:35.911507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:32:35.911550 systemd-journald[1121]: Collecting audit messages is disabled.
Dec 13 13:32:35.911573 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:32:35.911585 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:32:35.911598 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:32:35.911613 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:32:35.911625 systemd-journald[1121]: Journal started
Dec 13 13:32:35.911647 systemd-journald[1121]: Runtime Journal (/run/log/journal/7d1a085cce2c4b5caff2c21c651dbb96) is 6.0M, max 48.2M, 42.2M free.
Dec 13 13:32:35.678519 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:32:35.696558 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 13:32:35.697155 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:32:35.924914 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:32:35.915760 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:32:35.917340 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:32:35.918886 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:32:35.920825 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:32:35.921048 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:32:35.923015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:32:35.923216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:32:35.925010 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:32:35.925209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:32:35.927313 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:32:35.927548 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:32:35.928913 kernel: ACPI: bus type drm_connector registered
Dec 13 13:32:35.929886 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:32:35.930129 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:32:35.932282 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:32:35.932485 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:32:35.934292 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:32:35.936138 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:32:35.938155 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:32:35.953061 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:32:35.966041 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:32:35.968518 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:32:35.969842 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:32:35.969875 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:32:35.972057 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:32:35.974552 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:32:35.982155 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:32:35.983615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:32:35.989025 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:32:35.995066 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:32:35.996551 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:32:35.997776 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:32:35.999190 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:32:36.001020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:32:36.012979 systemd-journald[1121]: Time spent on flushing to /var/log/journal/7d1a085cce2c4b5caff2c21c651dbb96 is 22.292ms for 1040 entries.
Dec 13 13:32:36.012979 systemd-journald[1121]: System Journal (/var/log/journal/7d1a085cce2c4b5caff2c21c651dbb96) is 8.0M, max 195.6M, 187.6M free.
Dec 13 13:32:36.123620 systemd-journald[1121]: Received client request to flush runtime journal.
Dec 13 13:32:36.123675 kernel: loop0: detected capacity change from 0 to 141000
Dec 13 13:32:36.123706 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:32:36.123726 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 13:32:36.005153 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:32:36.007891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:32:36.011499 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:32:36.014694 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:32:36.017218 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:32:36.027408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:32:36.031797 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:32:36.053664 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 13:32:36.071733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:32:36.074812 systemd-tmpfiles[1165]: ACLs are not supported, ignoring.
Dec 13 13:32:36.074829 systemd-tmpfiles[1165]: ACLs are not supported, ignoring.
Dec 13 13:32:36.082431 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:32:36.099383 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:32:36.101264 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:32:36.110173 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:32:36.112104 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:32:36.117104 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:32:36.133397 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:32:36.188291 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:32:36.201319 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:32:36.207371 kernel: loop2: detected capacity change from 0 to 138184
Dec 13 13:32:36.220971 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Dec 13 13:32:36.220993 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Dec 13 13:32:36.226510 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:32:36.272933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:32:36.273753 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:32:36.307923 kernel: loop3: detected capacity change from 0 to 141000
Dec 13 13:32:36.320923 kernel: loop4: detected capacity change from 0 to 211296
Dec 13 13:32:36.330980 kernel: loop5: detected capacity change from 0 to 138184
Dec 13 13:32:36.341522 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 13:32:36.343079 (sd-merge)[1200]: Merged extensions into '/usr'.
Dec 13 13:32:36.347718 systemd[1]: Reloading requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:32:36.347734 systemd[1]: Reloading...
Dec 13 13:32:36.410926 zram_generator::config[1226]: No configuration found.
Dec 13 13:32:36.439987 ldconfig[1154]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:32:36.532287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:32:36.581356 systemd[1]: Reloading finished in 233 ms.
Dec 13 13:32:36.613971 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:32:36.615481 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:32:36.631077 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:32:36.633155 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:32:36.640705 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:32:36.640719 systemd[1]: Reloading...
Dec 13 13:32:36.674014 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:32:36.674655 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:32:36.675668 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:32:36.677488 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Dec 13 13:32:36.677611 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Dec 13 13:32:36.684472 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:32:36.684572 systemd-tmpfiles[1264]: Skipping /boot
Dec 13 13:32:36.700017 zram_generator::config[1294]: No configuration found.
Dec 13 13:32:36.700398 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:32:36.700411 systemd-tmpfiles[1264]: Skipping /boot
Dec 13 13:32:36.807837 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:32:36.856196 systemd[1]: Reloading finished in 215 ms.
Dec 13 13:32:36.875040 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:32:36.887606 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:32:36.905327 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:32:36.908663 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:32:36.911711 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:32:36.915797 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:32:36.919563 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:32:36.922724 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:32:36.925993 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:32:36.926164 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:32:36.928304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:32:36.930871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:32:36.933470 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:32:36.934674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:32:36.939214 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:32:36.940335 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:32:36.942253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:32:36.942456 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:32:36.948493 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:32:36.948693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:32:36.951191 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:32:36.952727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:32:36.953083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:32:36.956392 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:32:36.961216 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:32:36.961408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:32:36.971188 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:32:36.974207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:32:36.976386 systemd-udevd[1340]: Using default interface naming scheme 'v255'.
Dec 13 13:32:36.978620 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:32:36.979731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:32:36.979835 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:32:36.981207 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:32:36.984805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:32:36.984995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:32:36.986663 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:32:36.986827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:32:36.989867 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:32:36.990052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:32:36.991491 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:32:36.999718 augenrules[1370]: No rules
Dec 13 13:32:37.002872 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:32:37.003432 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:32:37.006218 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:32:37.006452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:32:37.012566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:32:37.016637 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:32:37.020142 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:32:37.024758 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:32:37.027021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:32:37.031050 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:32:37.032129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:32:37.032479 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:32:37.034544 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:32:37.035701 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:32:37.045364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:32:37.045557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:32:37.047233 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:32:37.047422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:32:37.049007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:32:37.049179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:32:37.051012 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:32:37.051185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:32:37.058966 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1385)
Dec 13 13:32:37.062601 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1385)
Dec 13 13:32:37.062873 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:32:37.080144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1383)
Dec 13 13:32:37.085165 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:32:37.086380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:32:37.086447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:32:37.091154 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:32:37.094052 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:32:37.094260 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:32:37.098225 systemd-resolved[1337]: Positive Trust Anchors:
Dec 13 13:32:37.098233 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:32:37.098265 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:32:37.104176 systemd-resolved[1337]: Defaulting to hostname 'linux'.
Dec 13 13:32:37.106137 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:32:37.108491 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:32:37.140946 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 13:32:37.144096 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:32:37.151139 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 13 13:32:37.152561 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 13:32:37.152740 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 13:32:37.152963 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 13:32:37.156300 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:32:37.161009 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 13:32:37.164923 kernel: ACPI: button: Power Button [PWRF]
Dec 13 13:32:37.170728 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:32:37.186806 systemd-networkd[1414]: lo: Link UP
Dec 13 13:32:37.186818 systemd-networkd[1414]: lo: Gained carrier
Dec 13 13:32:37.190334 systemd-networkd[1414]: Enumeration completed
Dec 13 13:32:37.190523 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:32:37.190788 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:32:37.190793 systemd-networkd[1414]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:32:37.192043 systemd[1]: Reached target network.target - Network.
Dec 13 13:32:37.192857 systemd-networkd[1414]: eth0: Link UP
Dec 13 13:32:37.192862 systemd-networkd[1414]: eth0: Gained carrier
Dec 13 13:32:37.192875 systemd-networkd[1414]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:32:37.201042 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:32:37.207348 systemd-networkd[1414]: eth0: DHCPv4 address 10.0.0.150/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:32:37.208234 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection.
Dec 13 13:32:37.213072 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 13:32:37.216887 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:32:38.010215 systemd-timesyncd[1416]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 13:32:38.010308 systemd-timesyncd[1416]: Initial clock synchronization to Fri 2024-12-13 13:32:38.010127 UTC.
Dec 13 13:32:38.010655 systemd-resolved[1337]: Clock change detected. Flushing caches.
Dec 13 13:32:38.062290 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:32:38.062379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:32:38.073859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:32:38.074147 kernel: kvm_amd: TSC scaling supported
Dec 13 13:32:38.074168 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 13:32:38.074199 kernel: kvm_amd: Nested Paging enabled
Dec 13 13:32:38.074215 kernel: kvm_amd: LBR virtualization supported
Dec 13 13:32:38.074228 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 13:32:38.074142 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:32:38.075311 kernel: kvm_amd: Virtual GIF supported
Dec 13 13:32:38.093156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:32:38.098017 kernel: EDAC MC: Ver: 3.0.0
Dec 13 13:32:38.130435 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:32:38.143145 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:32:38.144730 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:32:38.152976 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:32:38.183090 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:32:38.184600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:32:38.185720 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:32:38.186878 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:32:38.188132 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:32:38.189557 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:32:38.190713 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:32:38.191937 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:32:38.193160 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:32:38.193183 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:32:38.194072 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:32:38.195737 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:32:38.198368 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:32:38.206247 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:32:38.208483 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:32:38.210044 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:32:38.211162 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:32:38.212112 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:32:38.213060 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:32:38.213090 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:32:38.213979 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:32:38.215973 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:32:38.219034 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:32:38.219381 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:32:38.222128 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:32:38.223675 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:32:38.225905 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:32:38.229139 jq[1451]: false
Dec 13 13:32:38.234091 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 13:32:38.236875 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found loop3
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found loop4
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found loop5
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found sr0
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found vda
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found vda1
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found vda2
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found vda3
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found usr
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found vda4
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found vda6
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found vda7
Dec 13 13:32:38.240824 extend-filesystems[1452]: Found vda9
Dec 13 13:32:38.240824 extend-filesystems[1452]: Checking size of /dev/vda9
Dec 13 13:32:38.240237 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:32:38.255257 dbus-daemon[1450]: [system] SELinux support is enabled
Dec 13 13:32:38.269239 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 13:32:38.269323 extend-filesystems[1452]: Resized partition /dev/vda9
Dec 13 13:32:38.244795 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:32:38.273232 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:32:38.246197 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:32:38.246634 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:32:38.248016 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:32:38.252106 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:32:38.254001 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:32:38.265571 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:32:38.276464 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:32:38.276676 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:32:38.277054 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:32:38.277262 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:32:38.278891 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1389)
Dec 13 13:32:38.278942 jq[1467]: true
Dec 13 13:32:38.280539 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:32:38.280749 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:32:38.287067 update_engine[1465]: I20241213 13:32:38.284931 1465 main.cc:92] Flatcar Update Engine starting
Dec 13 13:32:38.297115 update_engine[1465]: I20241213 13:32:38.297058 1465 update_check_scheduler.cc:74] Next update check in 9m30s
Dec 13 13:32:38.301015 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 13:32:38.304363 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:32:38.326693 jq[1476]: true
Dec 13 13:32:38.326883 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 13:32:38.326883 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 13:32:38.326883 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 13:32:38.331324 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:32:38.332766 extend-filesystems[1452]: Resized filesystem in /dev/vda9
Dec 13 13:32:38.334133 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:32:38.334361 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:32:38.337201 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:32:38.338749 tar[1475]: linux-amd64/helm
Dec 13 13:32:38.337230 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:32:38.338661 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:32:38.338664 systemd-logind[1464]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 13:32:38.338678 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:32:38.338684 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 13:32:38.339133 systemd-logind[1464]: New seat seat0.
Dec 13 13:32:38.350166 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:32:38.352210 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:32:38.376093 bash[1506]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:32:38.377183 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:32:38.380283 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 13:32:38.384252 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:32:38.457334 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:32:38.481886 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:32:38.492331 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:32:38.493594 containerd[1483]: time="2024-12-13T13:32:38.493037347Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:32:38.501969 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:32:38.502293 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:32:38.505362 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:32:38.520497 containerd[1483]: time="2024-12-13T13:32:38.520456708Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:32:38.520513 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522217671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522246255Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522263146Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522420902Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522436081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522499830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522511582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522692071Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522705746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522718991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523505 containerd[1483]: time="2024-12-13T13:32:38.522729100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523763 containerd[1483]: time="2024-12-13T13:32:38.522818939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523763 containerd[1483]: time="2024-12-13T13:32:38.523079297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523763 containerd[1483]: time="2024-12-13T13:32:38.523196166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:32:38.523763 containerd[1483]: time="2024-12-13T13:32:38.523208259Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:32:38.523763 containerd[1483]: time="2024-12-13T13:32:38.523303147Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:32:38.523763 containerd[1483]: time="2024-12-13T13:32:38.523362638Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:32:38.528665 containerd[1483]: time="2024-12-13T13:32:38.528646858Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:32:38.528749 containerd[1483]: time="2024-12-13T13:32:38.528734933Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:32:38.528828 containerd[1483]: time="2024-12-13T13:32:38.528814813Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:32:38.528884 containerd[1483]: time="2024-12-13T13:32:38.528872711Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:32:38.528932 containerd[1483]: time="2024-12-13T13:32:38.528921102Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:32:38.529104 containerd[1483]: time="2024-12-13T13:32:38.529088366Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:32:38.529404 containerd[1483]: time="2024-12-13T13:32:38.529387076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:32:38.529560 containerd[1483]: time="2024-12-13T13:32:38.529544561Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:32:38.529627 containerd[1483]: time="2024-12-13T13:32:38.529613080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:32:38.529677 containerd[1483]: time="2024-12-13T13:32:38.529665538Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:32:38.529722 containerd[1483]: time="2024-12-13T13:32:38.529711585Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:32:38.529767 containerd[1483]: time="2024-12-13T13:32:38.529756729Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:32:38.529811 containerd[1483]: time="2024-12-13T13:32:38.529801173Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:32:38.529856 containerd[1483]: time="2024-12-13T13:32:38.529846047Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:32:38.529919 containerd[1483]: time="2024-12-13T13:32:38.529907091Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:32:38.529970 containerd[1483]: time="2024-12-13T13:32:38.529959580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:32:38.530044 containerd[1483]: time="2024-12-13T13:32:38.530033108Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:32:38.530090 containerd[1483]: time="2024-12-13T13:32:38.530080226Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:32:38.530142 containerd[1483]: time="2024-12-13T13:32:38.530131833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:32:38.530198 containerd[1483]: time="2024-12-13T13:32:38.530187808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..."
type=io.containerd.grpc.v1 Dec 13 13:32:38.530253 containerd[1483]: time="2024-12-13T13:32:38.530241118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530306 containerd[1483]: time="2024-12-13T13:32:38.530294608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530357 containerd[1483]: time="2024-12-13T13:32:38.530343089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530428 containerd[1483]: time="2024-12-13T13:32:38.530412669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530477 containerd[1483]: time="2024-12-13T13:32:38.530466771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530532 containerd[1483]: time="2024-12-13T13:32:38.530519780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530578 containerd[1483]: time="2024-12-13T13:32:38.530568061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530627 containerd[1483]: time="2024-12-13T13:32:38.530616351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530671 containerd[1483]: time="2024-12-13T13:32:38.530661025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530715 containerd[1483]: time="2024-12-13T13:32:38.530704697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530759 containerd[1483]: time="2024-12-13T13:32:38.530749311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 13:32:38.530819 containerd[1483]: time="2024-12-13T13:32:38.530807039Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:32:38.530873 containerd[1483]: time="2024-12-13T13:32:38.530862794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530921 containerd[1483]: time="2024-12-13T13:32:38.530911044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.530965 containerd[1483]: time="2024-12-13T13:32:38.530955477Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:32:38.531075 containerd[1483]: time="2024-12-13T13:32:38.531062648Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:32:38.531189 containerd[1483]: time="2024-12-13T13:32:38.531175129Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:32:38.531235 containerd[1483]: time="2024-12-13T13:32:38.531224682Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:32:38.531295 containerd[1483]: time="2024-12-13T13:32:38.531281880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:32:38.531338 containerd[1483]: time="2024-12-13T13:32:38.531327776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.531383 containerd[1483]: time="2024-12-13T13:32:38.531372860Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Dec 13 13:32:38.532535 containerd[1483]: time="2024-12-13T13:32:38.531415871Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:32:38.532535 containerd[1483]: time="2024-12-13T13:32:38.531428014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:32:38.532595 containerd[1483]: time="2024-12-13T13:32:38.531673153Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:32:38.532595 containerd[1483]: time="2024-12-13T13:32:38.531709181Z" level=info msg="Connect containerd service" Dec 13 13:32:38.532595 containerd[1483]: time="2024-12-13T13:32:38.531728527Z" level=info msg="using legacy CRI server" Dec 13 13:32:38.532595 containerd[1483]: time="2024-12-13T13:32:38.531734218Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:32:38.532595 containerd[1483]: time="2024-12-13T13:32:38.531828655Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:32:38.532595 containerd[1483]: time="2024-12-13T13:32:38.532357306Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:32:38.532926 containerd[1483]: time="2024-12-13T13:32:38.532874356Z" level=info msg="Start subscribing containerd event" Dec 13 
13:32:38.532959 containerd[1483]: time="2024-12-13T13:32:38.532943215Z" level=info msg="Start recovering state" Dec 13 13:32:38.533114 containerd[1483]: time="2024-12-13T13:32:38.533096573Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:32:38.533232 containerd[1483]: time="2024-12-13T13:32:38.533139193Z" level=info msg="Start event monitor" Dec 13 13:32:38.533268 containerd[1483]: time="2024-12-13T13:32:38.533205437Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:32:38.533289 containerd[1483]: time="2024-12-13T13:32:38.533240964Z" level=info msg="Start snapshots syncer" Dec 13 13:32:38.533313 containerd[1483]: time="2024-12-13T13:32:38.533288082Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:32:38.533313 containerd[1483]: time="2024-12-13T13:32:38.533301908Z" level=info msg="Start streaming server" Dec 13 13:32:38.533415 containerd[1483]: time="2024-12-13T13:32:38.533392618Z" level=info msg="containerd successfully booted in 0.041346s" Dec 13 13:32:38.536675 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:32:38.539358 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:32:38.541065 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:32:38.542502 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:32:38.714747 tar[1475]: linux-amd64/LICENSE Dec 13 13:32:38.714797 tar[1475]: linux-amd64/README.md Dec 13 13:32:38.730169 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:32:38.778493 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:32:38.780924 systemd[1]: Started sshd@0-10.0.0.150:22-10.0.0.1:50904.service - OpenSSH per-connection server daemon (10.0.0.1:50904). 
Dec 13 13:32:38.828103 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 50904 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:32:38.829939 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:38.838416 systemd-logind[1464]: New session 1 of user core. Dec 13 13:32:38.839723 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:32:38.851202 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:32:38.863562 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:32:38.875239 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:32:38.878930 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:32:38.988628 systemd[1545]: Queued start job for default target default.target. Dec 13 13:32:38.998234 systemd[1545]: Created slice app.slice - User Application Slice. Dec 13 13:32:38.998260 systemd[1545]: Reached target paths.target - Paths. Dec 13 13:32:38.998274 systemd[1545]: Reached target timers.target - Timers. Dec 13 13:32:38.999811 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:32:39.011223 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:32:39.011348 systemd[1545]: Reached target sockets.target - Sockets. Dec 13 13:32:39.011367 systemd[1545]: Reached target basic.target - Basic System. Dec 13 13:32:39.011405 systemd[1545]: Reached target default.target - Main User Target. Dec 13 13:32:39.011436 systemd[1545]: Startup finished in 126ms. Dec 13 13:32:39.011843 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:32:39.014408 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 13 13:32:39.075875 systemd[1]: Started sshd@1-10.0.0.150:22-10.0.0.1:48788.service - OpenSSH per-connection server daemon (10.0.0.1:48788). Dec 13 13:32:39.112385 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 48788 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:32:39.113849 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:39.117592 systemd-logind[1464]: New session 2 of user core. Dec 13 13:32:39.129115 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:32:39.182686 sshd[1558]: Connection closed by 10.0.0.1 port 48788 Dec 13 13:32:39.182951 sshd-session[1556]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:39.204692 systemd[1]: sshd@1-10.0.0.150:22-10.0.0.1:48788.service: Deactivated successfully. Dec 13 13:32:39.206550 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:32:39.207929 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:32:39.215216 systemd[1]: Started sshd@2-10.0.0.150:22-10.0.0.1:48802.service - OpenSSH per-connection server daemon (10.0.0.1:48802). Dec 13 13:32:39.217402 systemd-logind[1464]: Removed session 2. Dec 13 13:32:39.246902 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 48802 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:32:39.248616 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:39.252535 systemd-logind[1464]: New session 3 of user core. Dec 13 13:32:39.268253 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:32:39.322586 sshd[1565]: Connection closed by 10.0.0.1 port 48802 Dec 13 13:32:39.322995 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:39.326959 systemd[1]: sshd@2-10.0.0.150:22-10.0.0.1:48802.service: Deactivated successfully. 
Dec 13 13:32:39.328633 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:32:39.329245 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:32:39.330005 systemd-logind[1464]: Removed session 3. Dec 13 13:32:39.473105 systemd-networkd[1414]: eth0: Gained IPv6LL Dec 13 13:32:39.476213 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:32:39.478215 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:32:39.492176 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 13:32:39.494641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:32:39.496973 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:32:39.515698 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 13:32:39.516030 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 13:32:39.517740 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:32:39.520830 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:32:40.083802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:32:40.085455 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:32:40.087715 systemd[1]: Startup finished in 809ms (kernel) + 6.411s (initrd) + 4.203s (userspace) = 11.424s. 
Dec 13 13:32:40.088802 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:32:40.096361 agetty[1536]: failed to open credentials directory Dec 13 13:32:40.097230 agetty[1534]: failed to open credentials directory Dec 13 13:32:40.539446 kubelet[1591]: E1213 13:32:40.539358 1591 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:32:40.544097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:32:40.544297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:32:49.334032 systemd[1]: Started sshd@3-10.0.0.150:22-10.0.0.1:35928.service - OpenSSH per-connection server daemon (10.0.0.1:35928). Dec 13 13:32:49.370117 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 35928 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:32:49.371879 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:49.375922 systemd-logind[1464]: New session 4 of user core. Dec 13 13:32:49.385113 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:32:49.438183 sshd[1607]: Connection closed by 10.0.0.1 port 35928 Dec 13 13:32:49.438644 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:49.453833 systemd[1]: sshd@3-10.0.0.150:22-10.0.0.1:35928.service: Deactivated successfully. Dec 13 13:32:49.455592 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:32:49.456949 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. 
Dec 13 13:32:49.469210 systemd[1]: Started sshd@4-10.0.0.150:22-10.0.0.1:35932.service - OpenSSH per-connection server daemon (10.0.0.1:35932). Dec 13 13:32:49.470037 systemd-logind[1464]: Removed session 4. Dec 13 13:32:49.502192 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 35932 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:32:49.503530 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:49.507183 systemd-logind[1464]: New session 5 of user core. Dec 13 13:32:49.519097 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:32:49.568885 sshd[1614]: Connection closed by 10.0.0.1 port 35932 Dec 13 13:32:49.569279 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:49.576630 systemd[1]: sshd@4-10.0.0.150:22-10.0.0.1:35932.service: Deactivated successfully. Dec 13 13:32:49.578274 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:32:49.579688 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:32:49.587345 systemd[1]: Started sshd@5-10.0.0.150:22-10.0.0.1:35934.service - OpenSSH per-connection server daemon (10.0.0.1:35934). Dec 13 13:32:49.588767 systemd-logind[1464]: Removed session 5. Dec 13 13:32:49.619869 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 35934 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:32:49.621315 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:49.625627 systemd-logind[1464]: New session 6 of user core. Dec 13 13:32:49.639159 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 13 13:32:49.692260 sshd[1621]: Connection closed by 10.0.0.1 port 35934 Dec 13 13:32:49.692612 sshd-session[1619]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:49.710873 systemd[1]: sshd@5-10.0.0.150:22-10.0.0.1:35934.service: Deactivated successfully. Dec 13 13:32:49.712766 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:32:49.714476 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:32:49.723217 systemd[1]: Started sshd@6-10.0.0.150:22-10.0.0.1:35940.service - OpenSSH per-connection server daemon (10.0.0.1:35940). Dec 13 13:32:49.724072 systemd-logind[1464]: Removed session 6. Dec 13 13:32:49.755181 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 35940 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:32:49.756625 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:49.760628 systemd-logind[1464]: New session 7 of user core. Dec 13 13:32:49.770184 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:32:49.830897 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:32:49.831366 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:32:49.849092 sudo[1629]: pam_unix(sudo:session): session closed for user root Dec 13 13:32:49.850815 sshd[1628]: Connection closed by 10.0.0.1 port 35940 Dec 13 13:32:49.851261 sshd-session[1626]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:49.869396 systemd[1]: sshd@6-10.0.0.150:22-10.0.0.1:35940.service: Deactivated successfully. Dec 13 13:32:49.871455 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:32:49.873585 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:32:49.874950 systemd[1]: Started sshd@7-10.0.0.150:22-10.0.0.1:35956.service - OpenSSH per-connection server daemon (10.0.0.1:35956). 
Dec 13 13:32:49.875968 systemd-logind[1464]: Removed session 7. Dec 13 13:32:49.913544 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 35956 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:32:49.915030 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:49.919027 systemd-logind[1464]: New session 8 of user core. Dec 13 13:32:49.929098 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:32:49.982722 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:32:49.983074 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:32:49.986765 sudo[1638]: pam_unix(sudo:session): session closed for user root Dec 13 13:32:49.993255 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:32:49.993593 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:32:50.013232 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:32:50.042543 augenrules[1660]: No rules Dec 13 13:32:50.044376 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:32:50.044637 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:32:50.045885 sudo[1637]: pam_unix(sudo:session): session closed for user root Dec 13 13:32:50.047338 sshd[1636]: Connection closed by 10.0.0.1 port 35956 Dec 13 13:32:50.047696 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Dec 13 13:32:50.058781 systemd[1]: sshd@7-10.0.0.150:22-10.0.0.1:35956.service: Deactivated successfully. Dec 13 13:32:50.060617 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:32:50.061927 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit. 
Dec 13 13:32:50.072275 systemd[1]: Started sshd@8-10.0.0.150:22-10.0.0.1:35960.service - OpenSSH per-connection server daemon (10.0.0.1:35960). Dec 13 13:32:50.073076 systemd-logind[1464]: Removed session 8. Dec 13 13:32:50.103274 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 35960 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:32:50.104570 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:32:50.108352 systemd-logind[1464]: New session 9 of user core. Dec 13 13:32:50.118108 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:32:50.171044 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:32:50.171377 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:32:50.456188 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 13:32:50.456396 (dockerd)[1691]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:32:50.712755 dockerd[1691]: time="2024-12-13T13:32:50.712603195Z" level=info msg="Starting up" Dec 13 13:32:50.714878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:32:50.721245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:32:50.991937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:32:50.996521 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:32:51.231930 kubelet[1723]: E1213 13:32:51.231870 1723 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:32:51.239715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:32:51.239935 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:32:51.309363 dockerd[1691]: time="2024-12-13T13:32:51.309317227Z" level=info msg="Loading containers: start." Dec 13 13:32:51.899013 kernel: Initializing XFRM netlink socket Dec 13 13:32:51.980711 systemd-networkd[1414]: docker0: Link UP Dec 13 13:32:52.281381 dockerd[1691]: time="2024-12-13T13:32:52.281339496Z" level=info msg="Loading containers: done." Dec 13 13:32:52.295208 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3725933327-merged.mount: Deactivated successfully. 
Dec 13 13:32:52.365404 dockerd[1691]: time="2024-12-13T13:32:52.365336330Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:32:52.365517 dockerd[1691]: time="2024-12-13T13:32:52.365489076Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:32:52.365667 dockerd[1691]: time="2024-12-13T13:32:52.365639478Z" level=info msg="Daemon has completed initialization" Dec 13 13:32:52.408958 dockerd[1691]: time="2024-12-13T13:32:52.408892523Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:32:52.409162 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:32:53.193846 containerd[1483]: time="2024-12-13T13:32:53.193793763Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 13:32:53.907444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750504589.mount: Deactivated successfully. 
Dec 13 13:32:55.157302 containerd[1483]: time="2024-12-13T13:32:55.157236058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:55.158783 containerd[1483]: time="2024-12-13T13:32:55.158710714Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 13:32:55.160354 containerd[1483]: time="2024-12-13T13:32:55.160292991Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:55.163384 containerd[1483]: time="2024-12-13T13:32:55.163340516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:55.164552 containerd[1483]: time="2024-12-13T13:32:55.164506683Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 1.970669939s" Dec 13 13:32:55.164552 containerd[1483]: time="2024-12-13T13:32:55.164540837Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 13:32:55.187530 containerd[1483]: time="2024-12-13T13:32:55.187486609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 13:32:56.920321 containerd[1483]: time="2024-12-13T13:32:56.920254126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:56.921055 containerd[1483]: time="2024-12-13T13:32:56.921014122Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 13:32:56.922322 containerd[1483]: time="2024-12-13T13:32:56.922275768Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:56.925640 containerd[1483]: time="2024-12-13T13:32:56.925589432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:56.926580 containerd[1483]: time="2024-12-13T13:32:56.926542560Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.739020434s" Dec 13 13:32:56.926580 containerd[1483]: time="2024-12-13T13:32:56.926574730Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 13:32:56.950560 containerd[1483]: time="2024-12-13T13:32:56.950523653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 13:32:59.741212 containerd[1483]: time="2024-12-13T13:32:59.741145174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:59.742043 containerd[1483]: 
time="2024-12-13T13:32:59.741967326Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 13:32:59.743341 containerd[1483]: time="2024-12-13T13:32:59.743289085Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:59.745827 containerd[1483]: time="2024-12-13T13:32:59.745798551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:32:59.746824 containerd[1483]: time="2024-12-13T13:32:59.746787225Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 2.796233085s" Dec 13 13:32:59.746824 containerd[1483]: time="2024-12-13T13:32:59.746818373Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 13:32:59.771533 containerd[1483]: time="2024-12-13T13:32:59.771454004Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:33:00.999250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357105023.mount: Deactivated successfully. Dec 13 13:33:01.277351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:33:01.285136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:01.431125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:33:01.436166 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:33:01.630744 containerd[1483]: time="2024-12-13T13:33:01.630648768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:01.631461 containerd[1483]: time="2024-12-13T13:33:01.631392493Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 13:33:01.632506 containerd[1483]: time="2024-12-13T13:33:01.632453452Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:01.634318 containerd[1483]: time="2024-12-13T13:33:01.634280388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:01.634938 containerd[1483]: time="2024-12-13T13:33:01.634905921Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.86342115s" Dec 13 13:33:01.634938 containerd[1483]: time="2024-12-13T13:33:01.634933513Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 13:33:01.646119 kubelet[2009]: E1213 13:33:01.646071 2009 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet 
config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:33:01.650501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:33:01.650680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:33:01.660801 containerd[1483]: time="2024-12-13T13:33:01.660765778Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:33:02.222307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220257871.mount: Deactivated successfully. Dec 13 13:33:03.290672 containerd[1483]: time="2024-12-13T13:33:03.290606120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:03.291456 containerd[1483]: time="2024-12-13T13:33:03.291395200Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 13:33:03.292810 containerd[1483]: time="2024-12-13T13:33:03.292779216Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:03.298652 containerd[1483]: time="2024-12-13T13:33:03.296607726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:03.299517 containerd[1483]: time="2024-12-13T13:33:03.299475383Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.638669099s" Dec 13 13:33:03.299581 containerd[1483]: time="2024-12-13T13:33:03.299520668Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 13:33:03.324157 containerd[1483]: time="2024-12-13T13:33:03.324109371Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:33:03.866710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3646562940.mount: Deactivated successfully. Dec 13 13:33:03.872907 containerd[1483]: time="2024-12-13T13:33:03.872864593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:03.873587 containerd[1483]: time="2024-12-13T13:33:03.873523358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 13:33:03.874677 containerd[1483]: time="2024-12-13T13:33:03.874649681Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:03.877024 containerd[1483]: time="2024-12-13T13:33:03.876997594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:03.877786 containerd[1483]: time="2024-12-13T13:33:03.877748001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 553.534245ms" Dec 13 
13:33:03.877825 containerd[1483]: time="2024-12-13T13:33:03.877784880Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 13:33:03.908997 containerd[1483]: time="2024-12-13T13:33:03.908958382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 13:33:04.468366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033308171.mount: Deactivated successfully. Dec 13 13:33:06.658296 containerd[1483]: time="2024-12-13T13:33:06.658238623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:06.659169 containerd[1483]: time="2024-12-13T13:33:06.659105108Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 13:33:06.660491 containerd[1483]: time="2024-12-13T13:33:06.660450712Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:06.663213 containerd[1483]: time="2024-12-13T13:33:06.663171003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:06.664374 containerd[1483]: time="2024-12-13T13:33:06.664333884Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.755344745s" Dec 13 13:33:06.664414 containerd[1483]: time="2024-12-13T13:33:06.664374500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image 
reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 13:33:08.776401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:08.790204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:08.807004 systemd[1]: Reloading requested from client PID 2214 ('systemctl') (unit session-9.scope)... Dec 13 13:33:08.807019 systemd[1]: Reloading... Dec 13 13:33:08.891047 zram_generator::config[2259]: No configuration found. Dec 13 13:33:09.152230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:33:09.227498 systemd[1]: Reloading finished in 420 ms. Dec 13 13:33:09.284152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:09.287054 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:33:09.287304 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:09.288844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:09.434202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:09.439396 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:33:09.493713 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:09.493713 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 13:33:09.493713 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:09.494705 kubelet[2303]: I1213 13:33:09.494651 2303 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:33:10.086924 kubelet[2303]: I1213 13:33:10.086878 2303 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:33:10.086924 kubelet[2303]: I1213 13:33:10.086913 2303 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:33:10.087173 kubelet[2303]: I1213 13:33:10.087152 2303 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:33:10.102710 kubelet[2303]: E1213 13:33:10.102680 2303 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:10.103912 kubelet[2303]: I1213 13:33:10.103889 2303 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:33:10.115563 kubelet[2303]: I1213 13:33:10.115523 2303 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:33:10.116911 kubelet[2303]: I1213 13:33:10.116871 2303 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:33:10.117129 kubelet[2303]: I1213 13:33:10.117104 2303 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:33:10.117285 kubelet[2303]: I1213 13:33:10.117137 2303 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:33:10.117285 kubelet[2303]: I1213 13:33:10.117156 2303 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:33:10.117285 kubelet[2303]: I1213 
13:33:10.117252 2303 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:10.117383 kubelet[2303]: I1213 13:33:10.117367 2303 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:33:10.117423 kubelet[2303]: I1213 13:33:10.117385 2303 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:33:10.117449 kubelet[2303]: I1213 13:33:10.117433 2303 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:33:10.117474 kubelet[2303]: I1213 13:33:10.117452 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:33:10.118856 kubelet[2303]: I1213 13:33:10.118810 2303 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:33:10.119625 kubelet[2303]: W1213 13:33:10.119540 2303 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:10.119625 kubelet[2303]: E1213 13:33:10.119596 2303 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:10.119760 kubelet[2303]: W1213 13:33:10.119710 2303 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:10.119839 kubelet[2303]: E1213 13:33:10.119767 2303 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:10.121366 kubelet[2303]: I1213 13:33:10.121340 2303 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:33:10.122395 kubelet[2303]: W1213 13:33:10.122371 2303 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:33:10.123099 kubelet[2303]: I1213 13:33:10.122941 2303 server.go:1256] "Started kubelet" Dec 13 13:33:10.124155 kubelet[2303]: I1213 13:33:10.124124 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:33:10.126276 kubelet[2303]: I1213 13:33:10.124530 2303 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:33:10.126276 kubelet[2303]: I1213 13:33:10.125268 2303 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:33:10.126276 kubelet[2303]: I1213 13:33:10.126081 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:33:10.126276 kubelet[2303]: I1213 13:33:10.126282 2303 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:33:10.129322 kubelet[2303]: I1213 13:33:10.129241 2303 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:33:10.129322 kubelet[2303]: I1213 13:33:10.129321 2303 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:33:10.129416 kubelet[2303]: I1213 13:33:10.129361 2303 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:33:10.130203 kubelet[2303]: W1213 13:33:10.129484 2303 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused 
Dec 13 13:33:10.130203 kubelet[2303]: E1213 13:33:10.129523 2303 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:10.130203 kubelet[2303]: E1213 13:33:10.130179 2303 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bfd7cd6bf1dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:33:10.122918365 +0000 UTC m=+0.675499564,LastTimestamp:2024-12-13 13:33:10.122918365 +0000 UTC m=+0.675499564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:33:10.130562 kubelet[2303]: E1213 13:33:10.130484 2303 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:33:10.130626 kubelet[2303]: I1213 13:33:10.130594 2303 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:33:10.130936 kubelet[2303]: E1213 13:33:10.130766 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="200ms" Dec 13 13:33:10.131455 kubelet[2303]: I1213 13:33:10.131440 2303 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:33:10.131541 kubelet[2303]: I1213 13:33:10.131510 2303 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:33:10.142434 kubelet[2303]: I1213 13:33:10.142397 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:33:10.144004 kubelet[2303]: I1213 13:33:10.143965 2303 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:33:10.144160 kubelet[2303]: I1213 13:33:10.144120 2303 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:33:10.144160 kubelet[2303]: I1213 13:33:10.144148 2303 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:33:10.145038 kubelet[2303]: E1213 13:33:10.144398 2303 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:33:10.146352 kubelet[2303]: W1213 13:33:10.145757 2303 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:10.146352 kubelet[2303]: E1213 13:33:10.145792 2303 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:10.147452 kubelet[2303]: I1213 13:33:10.147427 2303 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:33:10.147452 kubelet[2303]: I1213 13:33:10.147446 2303 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:33:10.147513 kubelet[2303]: I1213 13:33:10.147460 2303 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:10.230212 kubelet[2303]: I1213 13:33:10.230169 2303 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:10.230731 kubelet[2303]: E1213 13:33:10.230691 2303 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Dec 13 13:33:10.244872 kubelet[2303]: E1213 13:33:10.244822 2303 
kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:33:10.331624 kubelet[2303]: E1213 13:33:10.331582 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="400ms" Dec 13 13:33:10.434200 kubelet[2303]: I1213 13:33:10.434101 2303 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:10.434537 kubelet[2303]: E1213 13:33:10.434496 2303 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Dec 13 13:33:10.445019 kubelet[2303]: E1213 13:33:10.444968 2303 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:33:10.451153 kubelet[2303]: I1213 13:33:10.451116 2303 policy_none.go:49] "None policy: Start" Dec 13 13:33:10.451889 kubelet[2303]: I1213 13:33:10.451863 2303 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:33:10.451889 kubelet[2303]: I1213 13:33:10.451891 2303 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:33:10.459937 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:33:10.474752 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:33:10.477879 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 13:33:10.488842 kubelet[2303]: I1213 13:33:10.488805 2303 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:33:10.489110 kubelet[2303]: I1213 13:33:10.489076 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:33:10.490250 kubelet[2303]: E1213 13:33:10.490219 2303 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 13:33:10.696488 kubelet[2303]: E1213 13:33:10.696403 2303 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.150:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.150:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810bfd7cd6bf1dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 13:33:10.122918365 +0000 UTC m=+0.675499564,LastTimestamp:2024-12-13 13:33:10.122918365 +0000 UTC m=+0.675499564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 13:33:10.732964 kubelet[2303]: E1213 13:33:10.732926 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="800ms" Dec 13 13:33:10.836557 kubelet[2303]: I1213 13:33:10.836524 2303 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:10.836886 kubelet[2303]: E1213 13:33:10.836862 2303 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Dec 13 13:33:10.845994 kubelet[2303]: I1213 13:33:10.845958 2303 topology_manager.go:215] "Topology Admit Handler" podUID="53ad465898021844518d14617722dfc5" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 13:33:10.847076 kubelet[2303]: I1213 13:33:10.847040 2303 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 13:33:10.847979 kubelet[2303]: I1213 13:33:10.847952 2303 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 13:33:10.853720 systemd[1]: Created slice kubepods-burstable-pod53ad465898021844518d14617722dfc5.slice - libcontainer container kubepods-burstable-pod53ad465898021844518d14617722dfc5.slice. Dec 13 13:33:10.871298 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 13:33:10.874602 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
Dec 13 13:33:10.934396 kubelet[2303]: I1213 13:33:10.934362 2303 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53ad465898021844518d14617722dfc5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ad465898021844518d14617722dfc5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:10.934459 kubelet[2303]: I1213 13:33:10.934410 2303 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:10.934459 kubelet[2303]: I1213 13:33:10.934435 2303 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:10.934513 kubelet[2303]: I1213 13:33:10.934470 2303 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 13:33:10.934513 kubelet[2303]: I1213 13:33:10.934495 2303 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53ad465898021844518d14617722dfc5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ad465898021844518d14617722dfc5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:10.934562 kubelet[2303]: I1213 
13:33:10.934516 2303 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53ad465898021844518d14617722dfc5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53ad465898021844518d14617722dfc5\") " pod="kube-system/kube-apiserver-localhost" Dec 13 13:33:10.934562 kubelet[2303]: I1213 13:33:10.934554 2303 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:10.934631 kubelet[2303]: I1213 13:33:10.934579 2303 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:10.934631 kubelet[2303]: I1213 13:33:10.934599 2303 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 13:33:11.035253 kubelet[2303]: W1213 13:33:11.035199 2303 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:11.035253 kubelet[2303]: E1213 13:33:11.035256 2303 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.150:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:11.168690 kubelet[2303]: E1213 13:33:11.168657 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:11.169267 containerd[1483]: time="2024-12-13T13:33:11.169220979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53ad465898021844518d14617722dfc5,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:11.173459 kubelet[2303]: E1213 13:33:11.173429 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:11.173820 containerd[1483]: time="2024-12-13T13:33:11.173783960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:11.177196 kubelet[2303]: E1213 13:33:11.177169 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:11.177526 containerd[1483]: time="2024-12-13T13:33:11.177504207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:11.225204 kubelet[2303]: W1213 13:33:11.225144 2303 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:11.225204 
kubelet[2303]: E1213 13:33:11.225197 2303 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:11.534242 kubelet[2303]: E1213 13:33:11.534205 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.150:6443: connect: connection refused" interval="1.6s" Dec 13 13:33:11.602805 kubelet[2303]: W1213 13:33:11.602749 2303 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:11.602805 kubelet[2303]: E1213 13:33:11.602801 2303 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:11.634378 kubelet[2303]: W1213 13:33:11.634332 2303 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:11.634378 kubelet[2303]: E1213 13:33:11.634369 2303 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:11.638353 
kubelet[2303]: I1213 13:33:11.638315 2303 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:11.638656 kubelet[2303]: E1213 13:33:11.638635 2303 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.150:6443/api/v1/nodes\": dial tcp 10.0.0.150:6443: connect: connection refused" node="localhost" Dec 13 13:33:11.955549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933245409.mount: Deactivated successfully. Dec 13 13:33:11.961194 containerd[1483]: time="2024-12-13T13:33:11.961151383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:11.964794 containerd[1483]: time="2024-12-13T13:33:11.964740026Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 13:33:11.965585 containerd[1483]: time="2024-12-13T13:33:11.965531491Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:11.966463 containerd[1483]: time="2024-12-13T13:33:11.966423209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:11.967329 containerd[1483]: time="2024-12-13T13:33:11.967277044Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:11.968092 containerd[1483]: time="2024-12-13T13:33:11.968003493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:33:11.968938 
containerd[1483]: time="2024-12-13T13:33:11.968901974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:33:11.970480 containerd[1483]: time="2024-12-13T13:33:11.970438214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:33:11.973639 containerd[1483]: time="2024-12-13T13:33:11.973605136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 799.762822ms" Dec 13 13:33:11.975464 containerd[1483]: time="2024-12-13T13:33:11.975428428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 806.11956ms" Dec 13 13:33:11.978062 containerd[1483]: time="2024-12-13T13:33:11.978015793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 800.461038ms" Dec 13 13:33:12.208906 containerd[1483]: time="2024-12-13T13:33:12.208497276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:12.208906 containerd[1483]: time="2024-12-13T13:33:12.208556771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:12.208906 containerd[1483]: time="2024-12-13T13:33:12.208586277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:12.208906 containerd[1483]: time="2024-12-13T13:33:12.208673094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:12.209785 containerd[1483]: time="2024-12-13T13:33:12.207905848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:12.209785 containerd[1483]: time="2024-12-13T13:33:12.209748593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:12.209785 containerd[1483]: time="2024-12-13T13:33:12.209763742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:12.210028 containerd[1483]: time="2024-12-13T13:33:12.209831943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:12.220883 containerd[1483]: time="2024-12-13T13:33:12.220519940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:12.220883 containerd[1483]: time="2024-12-13T13:33:12.220583883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:12.220883 containerd[1483]: time="2024-12-13T13:33:12.220599553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:12.220883 containerd[1483]: time="2024-12-13T13:33:12.220691189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:12.235395 systemd[1]: Started cri-containerd-cbf853a847f664ce5cf4dcea9a9f86c7cc96ad0c848fd10d17b7d48d7b54af68.scope - libcontainer container cbf853a847f664ce5cf4dcea9a9f86c7cc96ad0c848fd10d17b7d48d7b54af68. Dec 13 13:33:12.240219 systemd[1]: Started cri-containerd-55aa87ec5df2a1bfdc2129656926240c881bf49ba8e284b80bf28faab67580e4.scope - libcontainer container 55aa87ec5df2a1bfdc2129656926240c881bf49ba8e284b80bf28faab67580e4. Dec 13 13:33:12.258180 systemd[1]: Started cri-containerd-1043b12292f387aed75bad53942fb456c67581c3bdf73bce14d57e8f8ede2902.scope - libcontainer container 1043b12292f387aed75bad53942fb456c67581c3bdf73bce14d57e8f8ede2902. 
Dec 13 13:33:12.275793 kubelet[2303]: E1213 13:33:12.275758 2303 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.150:6443: connect: connection refused Dec 13 13:33:12.300208 containerd[1483]: time="2024-12-13T13:33:12.300166449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1043b12292f387aed75bad53942fb456c67581c3bdf73bce14d57e8f8ede2902\"" Dec 13 13:33:12.301839 kubelet[2303]: E1213 13:33:12.301817 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:12.307081 containerd[1483]: time="2024-12-13T13:33:12.307042353Z" level=info msg="CreateContainer within sandbox \"1043b12292f387aed75bad53942fb456c67581c3bdf73bce14d57e8f8ede2902\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:33:12.310513 containerd[1483]: time="2024-12-13T13:33:12.310489273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53ad465898021844518d14617722dfc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf853a847f664ce5cf4dcea9a9f86c7cc96ad0c848fd10d17b7d48d7b54af68\"" Dec 13 13:33:12.310745 containerd[1483]: time="2024-12-13T13:33:12.310722271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"55aa87ec5df2a1bfdc2129656926240c881bf49ba8e284b80bf28faab67580e4\"" Dec 13 13:33:12.311303 kubelet[2303]: E1213 13:33:12.311259 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:12.311896 kubelet[2303]: E1213 13:33:12.311874 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:12.313848 containerd[1483]: time="2024-12-13T13:33:12.313809159Z" level=info msg="CreateContainer within sandbox \"55aa87ec5df2a1bfdc2129656926240c881bf49ba8e284b80bf28faab67580e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:33:12.314044 containerd[1483]: time="2024-12-13T13:33:12.314012840Z" level=info msg="CreateContainer within sandbox \"cbf853a847f664ce5cf4dcea9a9f86c7cc96ad0c848fd10d17b7d48d7b54af68\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:33:12.328043 containerd[1483]: time="2024-12-13T13:33:12.328009301Z" level=info msg="CreateContainer within sandbox \"1043b12292f387aed75bad53942fb456c67581c3bdf73bce14d57e8f8ede2902\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aed0803c489127e6f379eea3a6c7d6f0b52258d6e73be4ebbb35e82b23b552fb\"" Dec 13 13:33:12.328546 containerd[1483]: time="2024-12-13T13:33:12.328518129Z" level=info msg="StartContainer for \"aed0803c489127e6f379eea3a6c7d6f0b52258d6e73be4ebbb35e82b23b552fb\"" Dec 13 13:33:12.336427 containerd[1483]: time="2024-12-13T13:33:12.336389609Z" level=info msg="CreateContainer within sandbox \"cbf853a847f664ce5cf4dcea9a9f86c7cc96ad0c848fd10d17b7d48d7b54af68\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"40aa9a59205cf374b05a236379999fc8f24dbc9beb4f865a49df787cc9f9fedd\"" Dec 13 13:33:12.337169 containerd[1483]: time="2024-12-13T13:33:12.337055600Z" level=info msg="StartContainer for \"40aa9a59205cf374b05a236379999fc8f24dbc9beb4f865a49df787cc9f9fedd\"" Dec 13 13:33:12.340814 containerd[1483]: 
time="2024-12-13T13:33:12.340333234Z" level=info msg="CreateContainer within sandbox \"55aa87ec5df2a1bfdc2129656926240c881bf49ba8e284b80bf28faab67580e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7d6359b5b4b15f82308efb550ae6ba645453beb4bb5c5e483f21efdd232c7505\"" Dec 13 13:33:12.340861 containerd[1483]: time="2024-12-13T13:33:12.340848705Z" level=info msg="StartContainer for \"7d6359b5b4b15f82308efb550ae6ba645453beb4bb5c5e483f21efdd232c7505\"" Dec 13 13:33:12.355978 systemd[1]: Started cri-containerd-aed0803c489127e6f379eea3a6c7d6f0b52258d6e73be4ebbb35e82b23b552fb.scope - libcontainer container aed0803c489127e6f379eea3a6c7d6f0b52258d6e73be4ebbb35e82b23b552fb. Dec 13 13:33:12.375116 systemd[1]: Started cri-containerd-40aa9a59205cf374b05a236379999fc8f24dbc9beb4f865a49df787cc9f9fedd.scope - libcontainer container 40aa9a59205cf374b05a236379999fc8f24dbc9beb4f865a49df787cc9f9fedd. Dec 13 13:33:12.378204 systemd[1]: Started cri-containerd-7d6359b5b4b15f82308efb550ae6ba645453beb4bb5c5e483f21efdd232c7505.scope - libcontainer container 7d6359b5b4b15f82308efb550ae6ba645453beb4bb5c5e483f21efdd232c7505. 
Dec 13 13:33:12.411262 containerd[1483]: time="2024-12-13T13:33:12.411221508Z" level=info msg="StartContainer for \"aed0803c489127e6f379eea3a6c7d6f0b52258d6e73be4ebbb35e82b23b552fb\" returns successfully" Dec 13 13:33:12.425767 containerd[1483]: time="2024-12-13T13:33:12.425629570Z" level=info msg="StartContainer for \"40aa9a59205cf374b05a236379999fc8f24dbc9beb4f865a49df787cc9f9fedd\" returns successfully" Dec 13 13:33:12.431626 containerd[1483]: time="2024-12-13T13:33:12.431551029Z" level=info msg="StartContainer for \"7d6359b5b4b15f82308efb550ae6ba645453beb4bb5c5e483f21efdd232c7505\" returns successfully" Dec 13 13:33:13.196024 kubelet[2303]: E1213 13:33:13.195979 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:13.198996 kubelet[2303]: E1213 13:33:13.197709 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:13.201012 kubelet[2303]: E1213 13:33:13.200970 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:13.240485 kubelet[2303]: I1213 13:33:13.240443 2303 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 13:33:13.714168 kubelet[2303]: E1213 13:33:13.714123 2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 13:33:13.820278 kubelet[2303]: I1213 13:33:13.820229 2303 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 13:33:13.826082 kubelet[2303]: E1213 13:33:13.826050 2303 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 
13:33:13.926958 kubelet[2303]: E1213 13:33:13.926904 2303 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:14.027410 kubelet[2303]: E1213 13:33:14.027379 2303 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:14.128076 kubelet[2303]: E1213 13:33:14.128040 2303 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:14.202357 kubelet[2303]: E1213 13:33:14.202331 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:14.228144 kubelet[2303]: E1213 13:33:14.228104 2303 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 13:33:15.191068 kubelet[2303]: I1213 13:33:15.191035 2303 apiserver.go:52] "Watching apiserver" Dec 13 13:33:15.229788 kubelet[2303]: I1213 13:33:15.229759 2303 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:33:16.529726 systemd[1]: Reloading requested from client PID 2588 ('systemctl') (unit session-9.scope)... Dec 13 13:33:16.529748 systemd[1]: Reloading... Dec 13 13:33:16.605024 zram_generator::config[2630]: No configuration found. Dec 13 13:33:16.721995 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:33:16.812475 systemd[1]: Reloading finished in 282 ms. Dec 13 13:33:16.858391 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:16.879585 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:33:16.879865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:33:16.879915 systemd[1]: kubelet.service: Consumed 1.148s CPU time, 116.4M memory peak, 0B memory swap peak. Dec 13 13:33:16.894202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:33:17.044757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:33:17.049937 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:33:17.090172 kubelet[2672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:17.090561 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:33:17.090561 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:33:17.090686 kubelet[2672]: I1213 13:33:17.090638 2672 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:33:17.095914 kubelet[2672]: I1213 13:33:17.095883 2672 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:33:17.095914 kubelet[2672]: I1213 13:33:17.095907 2672 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:33:17.096089 kubelet[2672]: I1213 13:33:17.096071 2672 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:33:17.097372 kubelet[2672]: I1213 13:33:17.097355 2672 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 13:33:17.099003 kubelet[2672]: I1213 13:33:17.098961 2672 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:33:17.105781 kubelet[2672]: I1213 13:33:17.105746 2672 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:33:17.106028 kubelet[2672]: I1213 13:33:17.106011 2672 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:33:17.106209 kubelet[2672]: I1213 13:33:17.106178 2672 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null} Dec 13 13:33:17.106345 kubelet[2672]: I1213 13:33:17.106215 2672 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:33:17.106345 kubelet[2672]: I1213 13:33:17.106236 2672 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:33:17.106345 kubelet[2672]: I1213 13:33:17.106283 2672 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:17.106448 kubelet[2672]: I1213 13:33:17.106395 2672 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:33:17.106448 kubelet[2672]: I1213 13:33:17.106410 2672 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:33:17.106448 kubelet[2672]: I1213 13:33:17.106439 2672 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:33:17.106555 kubelet[2672]: I1213 13:33:17.106458 2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:33:17.107573 kubelet[2672]: I1213 13:33:17.106861 2672 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:33:17.107573 kubelet[2672]: I1213 13:33:17.107101 2672 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:33:17.107771 kubelet[2672]: I1213 13:33:17.107756 2672 server.go:1256] "Started kubelet" Dec 13 13:33:17.108525 kubelet[2672]: I1213 13:33:17.108511 2672 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:33:17.115696 kubelet[2672]: I1213 13:33:17.115665 2672 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:33:17.117439 kubelet[2672]: I1213 13:33:17.117413 2672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:33:17.117925 kubelet[2672]: I1213 13:33:17.117861 2672 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:33:17.119199 kubelet[2672]: I1213 
13:33:17.119181 2672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:33:17.120393 kubelet[2672]: E1213 13:33:17.119629 2672 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:33:17.120393 kubelet[2672]: I1213 13:33:17.119798 2672 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:33:17.120393 kubelet[2672]: I1213 13:33:17.119885 2672 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:33:17.120393 kubelet[2672]: I1213 13:33:17.120079 2672 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:33:17.120753 kubelet[2672]: I1213 13:33:17.120734 2672 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:33:17.120832 kubelet[2672]: I1213 13:33:17.120822 2672 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:33:17.121858 kubelet[2672]: I1213 13:33:17.121842 2672 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:33:17.129556 kubelet[2672]: I1213 13:33:17.129525 2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:33:17.130796 kubelet[2672]: I1213 13:33:17.130766 2672 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:33:17.130796 kubelet[2672]: I1213 13:33:17.130797 2672 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:33:17.130912 kubelet[2672]: I1213 13:33:17.130817 2672 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:33:17.130912 kubelet[2672]: E1213 13:33:17.130867 2672 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:33:17.159802 kubelet[2672]: I1213 13:33:17.159756 2672 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:33:17.159802 kubelet[2672]: I1213 13:33:17.159783 2672 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:33:17.159802 kubelet[2672]: I1213 13:33:17.159806 2672 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:33:17.159977 kubelet[2672]: I1213 13:33:17.159964 2672 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:33:17.160033 kubelet[2672]: I1213 13:33:17.160019 2672 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:33:17.160085 kubelet[2672]: I1213 13:33:17.160036 2672 policy_none.go:49] "None policy: Start" Dec 13 13:33:17.160683 kubelet[2672]: I1213 13:33:17.160667 2672 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:33:17.160719 kubelet[2672]: I1213 13:33:17.160696 2672 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:33:17.160945 kubelet[2672]: I1213 13:33:17.160926 2672 state_mem.go:75] "Updated machine memory state" Dec 13 13:33:17.165314 kubelet[2672]: I1213 13:33:17.165297 2672 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:33:17.166179 kubelet[2672]: I1213 13:33:17.166166 2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:33:17.231715 kubelet[2672]: I1213 13:33:17.231681 2672 topology_manager.go:215] "Topology Admit Handler" 
podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 13:33:17.231875 kubelet[2672]: I1213 13:33:17.231763 2672 topology_manager.go:215] "Topology Admit Handler" podUID="53ad465898021844518d14617722dfc5" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 13:33:17.231875 kubelet[2672]: I1213 13:33:17.231791 2672 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 13:33:17.272274 kubelet[2672]: I1213 13:33:17.272247 2672 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 13:33:17.304528 kubelet[2672]: I1213 13:33:17.304413 2672 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 13:33:17.304528 kubelet[2672]: I1213 13:33:17.304496 2672 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 13:33:17.339619 sudo[2709]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 13:33:17.339956 sudo[2709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 13:33:17.421551 kubelet[2672]: I1213 13:33:17.421338 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 13:33:17.421551 kubelet[2672]: I1213 13:33:17.421381 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53ad465898021844518d14617722dfc5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ad465898021844518d14617722dfc5\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:33:17.421551 kubelet[2672]: I1213 13:33:17.421404 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53ad465898021844518d14617722dfc5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53ad465898021844518d14617722dfc5\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:33:17.421551 kubelet[2672]: I1213 13:33:17.421428 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53ad465898021844518d14617722dfc5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53ad465898021844518d14617722dfc5\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 13:33:17.421551 kubelet[2672]: I1213 13:33:17.421453 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:33:17.421826 kubelet[2672]: I1213 13:33:17.421477 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:33:17.421826 kubelet[2672]: I1213 13:33:17.421501 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:33:17.421826 kubelet[2672]: I1213 13:33:17.421528 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:33:17.421826 kubelet[2672]: I1213 13:33:17.421552 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 13:33:17.604969 kubelet[2672]: E1213 13:33:17.604531 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:17.604969 kubelet[2672]: E1213 13:33:17.604885 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:17.604969 kubelet[2672]: E1213 13:33:17.604926 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:17.793179 sudo[2709]: pam_unix(sudo:session): session closed for user root
Dec 13 13:33:18.106949 kubelet[2672]: I1213 13:33:18.106849 2672 apiserver.go:52] "Watching apiserver"
Dec 13 13:33:18.120681 kubelet[2672]: I1213 13:33:18.120652 2672 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 13:33:18.144285 kubelet[2672]: E1213 13:33:18.144244 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:18.144426 kubelet[2672]: E1213 13:33:18.144304 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:18.144426 kubelet[2672]: E1213 13:33:18.144324 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:18.164317 kubelet[2672]: I1213 13:33:18.164288 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.164224678 podStartE2EDuration="1.164224678s" podCreationTimestamp="2024-12-13 13:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:18.159453274 +0000 UTC m=+1.104997157" watchObservedRunningTime="2024-12-13 13:33:18.164224678 +0000 UTC m=+1.109768560"
Dec 13 13:33:18.165034 kubelet[2672]: I1213 13:33:18.164686 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.164664547 podStartE2EDuration="1.164664547s" podCreationTimestamp="2024-12-13 13:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:18.164567582 +0000 UTC m=+1.110111474" watchObservedRunningTime="2024-12-13 13:33:18.164664547 +0000 UTC m=+1.110208429"
Dec 13 13:33:18.170735 kubelet[2672]: I1213 13:33:18.170424 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.170403006 podStartE2EDuration="1.170403006s" podCreationTimestamp="2024-12-13 13:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:18.170337891 +0000 UTC m=+1.115881784" watchObservedRunningTime="2024-12-13 13:33:18.170403006 +0000 UTC m=+1.115946888"
Dec 13 13:33:19.144670 kubelet[2672]: E1213 13:33:19.144633 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:19.204396 sudo[1671]: pam_unix(sudo:session): session closed for user root
Dec 13 13:33:19.206005 sshd[1670]: Connection closed by 10.0.0.1 port 35960
Dec 13 13:33:19.206423 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Dec 13 13:33:19.210547 systemd[1]: sshd@8-10.0.0.150:22-10.0.0.1:35960.service: Deactivated successfully.
Dec 13 13:33:19.212601 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 13:33:19.212782 systemd[1]: session-9.scope: Consumed 4.527s CPU time, 188.0M memory peak, 0B memory swap peak.
Dec 13 13:33:19.213249 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit.
Dec 13 13:33:19.214060 systemd-logind[1464]: Removed session 9.
Dec 13 13:33:20.885137 kubelet[2672]: E1213 13:33:20.885101 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:23.667777 update_engine[1465]: I20241213 13:33:23.667708 1465 update_attempter.cc:509] Updating boot flags...
Dec 13 13:33:23.819787 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2757)
Dec 13 13:33:23.856102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2760)
Dec 13 13:33:23.893117 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2760)
Dec 13 13:33:24.717404 kubelet[2672]: E1213 13:33:24.717376 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:25.151653 kubelet[2672]: E1213 13:33:25.151624 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:26.154279 kubelet[2672]: E1213 13:33:26.154228 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:27.154640 kubelet[2672]: E1213 13:33:27.154592 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:30.889763 kubelet[2672]: E1213 13:33:30.889731 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:31.161484 kubelet[2672]: E1213 13:33:31.161359 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:31.779760 kubelet[2672]: I1213 13:33:31.779724 2672 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 13:33:31.780185 containerd[1483]: time="2024-12-13T13:33:31.780128928Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 13:33:31.780585 kubelet[2672]: I1213 13:33:31.780350 2672 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 13:33:32.775011 kubelet[2672]: I1213 13:33:32.773014 2672 topology_manager.go:215] "Topology Admit Handler" podUID="b23191e4-cf41-4681-8907-b76c9dab0341" podNamespace="kube-system" podName="kube-proxy-k65qn"
Dec 13 13:33:32.779759 kubelet[2672]: I1213 13:33:32.779715 2672 topology_manager.go:215] "Topology Admit Handler" podUID="bfe1746f-38e1-402b-9078-07776488c8e1" podNamespace="kube-system" podName="cilium-kqlw5"
Dec 13 13:33:32.782546 systemd[1]: Created slice kubepods-besteffort-podb23191e4_cf41_4681_8907_b76c9dab0341.slice - libcontainer container kubepods-besteffort-podb23191e4_cf41_4681_8907_b76c9dab0341.slice.
Dec 13 13:33:32.800477 systemd[1]: Created slice kubepods-burstable-podbfe1746f_38e1_402b_9078_07776488c8e1.slice - libcontainer container kubepods-burstable-podbfe1746f_38e1_402b_9078_07776488c8e1.slice.
Dec 13 13:33:32.824719 kubelet[2672]: I1213 13:33:32.824541 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-etc-cni-netd\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.824719 kubelet[2672]: I1213 13:33:32.824617 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b23191e4-cf41-4681-8907-b76c9dab0341-xtables-lock\") pod \"kube-proxy-k65qn\" (UID: \"b23191e4-cf41-4681-8907-b76c9dab0341\") " pod="kube-system/kube-proxy-k65qn"
Dec 13 13:33:32.824719 kubelet[2672]: I1213 13:33:32.824643 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqpf9\" (UniqueName: \"kubernetes.io/projected/bfe1746f-38e1-402b-9078-07776488c8e1-kube-api-access-cqpf9\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.824719 kubelet[2672]: I1213 13:33:32.824701 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-host-proc-sys-net\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.824719 kubelet[2672]: I1213 13:33:32.824722 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b23191e4-cf41-4681-8907-b76c9dab0341-kube-proxy\") pod \"kube-proxy-k65qn\" (UID: \"b23191e4-cf41-4681-8907-b76c9dab0341\") " pod="kube-system/kube-proxy-k65qn"
Dec 13 13:33:32.824966 kubelet[2672]: I1213 13:33:32.824779 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-config-path\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.824966 kubelet[2672]: I1213 13:33:32.824799 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zhcs\" (UniqueName: \"kubernetes.io/projected/b23191e4-cf41-4681-8907-b76c9dab0341-kube-api-access-8zhcs\") pod \"kube-proxy-k65qn\" (UID: \"b23191e4-cf41-4681-8907-b76c9dab0341\") " pod="kube-system/kube-proxy-k65qn"
Dec 13 13:33:32.824966 kubelet[2672]: I1213 13:33:32.824931 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cni-path\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.824966 kubelet[2672]: I1213 13:33:32.824950 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-lib-modules\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.825230 kubelet[2672]: I1213 13:33:32.825041 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfe1746f-38e1-402b-9078-07776488c8e1-hubble-tls\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.825230 kubelet[2672]: I1213 13:33:32.825071 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-run\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.825230 kubelet[2672]: I1213 13:33:32.825163 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-hostproc\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.825230 kubelet[2672]: I1213 13:33:32.825214 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-cgroup\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.825388 kubelet[2672]: I1213 13:33:32.825240 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-host-proc-sys-kernel\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.825388 kubelet[2672]: I1213 13:33:32.825265 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-bpf-maps\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.825388 kubelet[2672]: I1213 13:33:32.825283 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfe1746f-38e1-402b-9078-07776488c8e1-clustermesh-secrets\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.825388 kubelet[2672]: I1213 13:33:32.825339 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-xtables-lock\") pod \"cilium-kqlw5\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") " pod="kube-system/cilium-kqlw5"
Dec 13 13:33:32.825388 kubelet[2672]: I1213 13:33:32.825358 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b23191e4-cf41-4681-8907-b76c9dab0341-lib-modules\") pod \"kube-proxy-k65qn\" (UID: \"b23191e4-cf41-4681-8907-b76c9dab0341\") " pod="kube-system/kube-proxy-k65qn"
Dec 13 13:33:32.922359 kubelet[2672]: I1213 13:33:32.921978 2672 topology_manager.go:215] "Topology Admit Handler" podUID="c744140a-9647-4cca-9ad6-afda8fa04dc7" podNamespace="kube-system" podName="cilium-operator-5cc964979-n9mfc"
Dec 13 13:33:32.952140 systemd[1]: Created slice kubepods-besteffort-podc744140a_9647_4cca_9ad6_afda8fa04dc7.slice - libcontainer container kubepods-besteffort-podc744140a_9647_4cca_9ad6_afda8fa04dc7.slice.
Dec 13 13:33:33.026686 kubelet[2672]: I1213 13:33:33.026643 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c744140a-9647-4cca-9ad6-afda8fa04dc7-cilium-config-path\") pod \"cilium-operator-5cc964979-n9mfc\" (UID: \"c744140a-9647-4cca-9ad6-afda8fa04dc7\") " pod="kube-system/cilium-operator-5cc964979-n9mfc"
Dec 13 13:33:33.026785 kubelet[2672]: I1213 13:33:33.026696 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wt86\" (UniqueName: \"kubernetes.io/projected/c744140a-9647-4cca-9ad6-afda8fa04dc7-kube-api-access-6wt86\") pod \"cilium-operator-5cc964979-n9mfc\" (UID: \"c744140a-9647-4cca-9ad6-afda8fa04dc7\") " pod="kube-system/cilium-operator-5cc964979-n9mfc"
Dec 13 13:33:33.095884 kubelet[2672]: E1213 13:33:33.095857 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:33.096434 containerd[1483]: time="2024-12-13T13:33:33.096399591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k65qn,Uid:b23191e4-cf41-4681-8907-b76c9dab0341,Namespace:kube-system,Attempt:0,}"
Dec 13 13:33:33.105971 kubelet[2672]: E1213 13:33:33.105940 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:33.106560 containerd[1483]: time="2024-12-13T13:33:33.106339686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqlw5,Uid:bfe1746f-38e1-402b-9078-07776488c8e1,Namespace:kube-system,Attempt:0,}"
Dec 13 13:33:33.119382 containerd[1483]: time="2024-12-13T13:33:33.119227034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:33:33.119382 containerd[1483]: time="2024-12-13T13:33:33.119313658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:33:33.119573 containerd[1483]: time="2024-12-13T13:33:33.119368190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:33:33.120300 containerd[1483]: time="2024-12-13T13:33:33.120257369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:33:33.131319 containerd[1483]: time="2024-12-13T13:33:33.131172384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:33:33.131401 containerd[1483]: time="2024-12-13T13:33:33.131273675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:33:33.131541 containerd[1483]: time="2024-12-13T13:33:33.131291328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:33:33.131765 containerd[1483]: time="2024-12-13T13:33:33.131632062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:33:33.142300 systemd[1]: Started cri-containerd-7a72eab2f5babe966a4a69317e4307b88cc64bdb65be5398fc5a2d709f6dca53.scope - libcontainer container 7a72eab2f5babe966a4a69317e4307b88cc64bdb65be5398fc5a2d709f6dca53.
Dec 13 13:33:33.148709 systemd[1]: Started cri-containerd-40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2.scope - libcontainer container 40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2.
Dec 13 13:33:33.166812 containerd[1483]: time="2024-12-13T13:33:33.166777897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k65qn,Uid:b23191e4-cf41-4681-8907-b76c9dab0341,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a72eab2f5babe966a4a69317e4307b88cc64bdb65be5398fc5a2d709f6dca53\""
Dec 13 13:33:33.167414 kubelet[2672]: E1213 13:33:33.167390 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:33.171051 containerd[1483]: time="2024-12-13T13:33:33.170935416Z" level=info msg="CreateContainer within sandbox \"7a72eab2f5babe966a4a69317e4307b88cc64bdb65be5398fc5a2d709f6dca53\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 13:33:33.172615 containerd[1483]: time="2024-12-13T13:33:33.172583146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqlw5,Uid:bfe1746f-38e1-402b-9078-07776488c8e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\""
Dec 13 13:33:33.173381 kubelet[2672]: E1213 13:33:33.173361 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:33.175035 containerd[1483]: time="2024-12-13T13:33:33.175002313Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 13:33:33.188641 containerd[1483]: time="2024-12-13T13:33:33.188606033Z" level=info msg="CreateContainer within sandbox \"7a72eab2f5babe966a4a69317e4307b88cc64bdb65be5398fc5a2d709f6dca53\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f16d621a5ebdc79e4015d2dcdc31987264506390deaff990ce911be6526b42f\""
Dec 13 13:33:33.189083 containerd[1483]: time="2024-12-13T13:33:33.189031306Z" level=info msg="StartContainer for \"5f16d621a5ebdc79e4015d2dcdc31987264506390deaff990ce911be6526b42f\""
Dec 13 13:33:33.217101 systemd[1]: Started cri-containerd-5f16d621a5ebdc79e4015d2dcdc31987264506390deaff990ce911be6526b42f.scope - libcontainer container 5f16d621a5ebdc79e4015d2dcdc31987264506390deaff990ce911be6526b42f.
Dec 13 13:33:33.246293 containerd[1483]: time="2024-12-13T13:33:33.246265259Z" level=info msg="StartContainer for \"5f16d621a5ebdc79e4015d2dcdc31987264506390deaff990ce911be6526b42f\" returns successfully"
Dec 13 13:33:33.254747 kubelet[2672]: E1213 13:33:33.254712 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:33.255248 containerd[1483]: time="2024-12-13T13:33:33.255218780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-n9mfc,Uid:c744140a-9647-4cca-9ad6-afda8fa04dc7,Namespace:kube-system,Attempt:0,}"
Dec 13 13:33:33.279871 containerd[1483]: time="2024-12-13T13:33:33.279610105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:33:33.279871 containerd[1483]: time="2024-12-13T13:33:33.279672112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:33:33.279871 containerd[1483]: time="2024-12-13T13:33:33.279685788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:33:33.279871 containerd[1483]: time="2024-12-13T13:33:33.279774806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:33:33.300157 systemd[1]: Started cri-containerd-f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd.scope - libcontainer container f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd.
Dec 13 13:33:33.337679 containerd[1483]: time="2024-12-13T13:33:33.337625112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-n9mfc,Uid:c744140a-9647-4cca-9ad6-afda8fa04dc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd\""
Dec 13 13:33:33.338480 kubelet[2672]: E1213 13:33:33.338459 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:34.166956 kubelet[2672]: E1213 13:33:34.166923 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:34.174416 kubelet[2672]: I1213 13:33:34.174382 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k65qn" podStartSLOduration=2.174344264 podStartE2EDuration="2.174344264s" podCreationTimestamp="2024-12-13 13:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:34.174195593 +0000 UTC m=+17.119739485" watchObservedRunningTime="2024-12-13 13:33:34.174344264 +0000 UTC m=+17.119888136"
Dec 13 13:33:41.479280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703260777.mount: Deactivated successfully.
Dec 13 13:33:44.003120 containerd[1483]: time="2024-12-13T13:33:44.003075710Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:44.004099 containerd[1483]: time="2024-12-13T13:33:44.004001822Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734747"
Dec 13 13:33:44.005424 containerd[1483]: time="2024-12-13T13:33:44.005392950Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:33:44.006782 containerd[1483]: time="2024-12-13T13:33:44.006757196Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.831720207s"
Dec 13 13:33:44.006828 containerd[1483]: time="2024-12-13T13:33:44.006782333Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 13:33:44.007248 containerd[1483]: time="2024-12-13T13:33:44.007228834Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 13:33:44.009624 containerd[1483]: time="2024-12-13T13:33:44.009590216Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 13:33:44.025461 containerd[1483]: time="2024-12-13T13:33:44.025420200Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\""
Dec 13 13:33:44.025886 containerd[1483]: time="2024-12-13T13:33:44.025863154Z" level=info msg="StartContainer for \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\""
Dec 13 13:33:44.059102 systemd[1]: Started cri-containerd-51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087.scope - libcontainer container 51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087.
Dec 13 13:33:44.081552 containerd[1483]: time="2024-12-13T13:33:44.081507124Z" level=info msg="StartContainer for \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\" returns successfully"
Dec 13 13:33:44.091054 systemd[1]: cri-containerd-51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087.scope: Deactivated successfully.
Dec 13 13:33:44.604127 containerd[1483]: time="2024-12-13T13:33:44.604061611Z" level=info msg="shim disconnected" id=51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087 namespace=k8s.io
Dec 13 13:33:44.604127 containerd[1483]: time="2024-12-13T13:33:44.604120692Z" level=warning msg="cleaning up after shim disconnected" id=51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087 namespace=k8s.io
Dec 13 13:33:44.604127 containerd[1483]: time="2024-12-13T13:33:44.604131793Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:33:44.618867 systemd[1]: Started sshd@9-10.0.0.150:22-10.0.0.1:33194.service - OpenSSH per-connection server daemon (10.0.0.1:33194).
Dec 13 13:33:44.655885 sshd[3144]: Accepted publickey for core from 10.0.0.1 port 33194 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:33:44.657359 sshd-session[3144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:33:44.661291 systemd-logind[1464]: New session 10 of user core.
Dec 13 13:33:44.667125 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:33:44.699518 kubelet[2672]: E1213 13:33:44.699487 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:33:44.701750 containerd[1483]: time="2024-12-13T13:33:44.701442407Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 13:33:44.715219 containerd[1483]: time="2024-12-13T13:33:44.715164284Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\""
Dec 13 13:33:44.715821 containerd[1483]: time="2024-12-13T13:33:44.715685725Z" level=info msg="StartContainer for \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\""
Dec 13 13:33:44.746115 systemd[1]: Started cri-containerd-46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e.scope - libcontainer container 46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e.
Dec 13 13:33:44.776671 containerd[1483]: time="2024-12-13T13:33:44.776593651Z" level=info msg="StartContainer for \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\" returns successfully"
Dec 13 13:33:44.789264 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:33:44.789578 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:33:44.789644 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:33:44.798383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:33:44.798614 systemd[1]: cri-containerd-46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e.scope: Deactivated successfully. Dec 13 13:33:44.801878 sshd[3146]: Connection closed by 10.0.0.1 port 33194 Dec 13 13:33:44.802294 sshd-session[3144]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:44.806190 systemd[1]: sshd@9-10.0.0.150:22-10.0.0.1:33194.service: Deactivated successfully. Dec 13 13:33:44.808310 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:33:44.809899 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:33:44.811804 systemd-logind[1464]: Removed session 10. Dec 13 13:33:44.826748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:33:44.827711 containerd[1483]: time="2024-12-13T13:33:44.827641223Z" level=info msg="shim disconnected" id=46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e namespace=k8s.io Dec 13 13:33:44.827711 containerd[1483]: time="2024-12-13T13:33:44.827688662Z" level=warning msg="cleaning up after shim disconnected" id=46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e namespace=k8s.io Dec 13 13:33:44.827711 containerd[1483]: time="2024-12-13T13:33:44.827697389Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:33:45.020634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087-rootfs.mount: Deactivated successfully. Dec 13 13:33:45.662552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount73493919.mount: Deactivated successfully. 
Dec 13 13:33:45.703208 kubelet[2672]: E1213 13:33:45.703170 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:45.705178 containerd[1483]: time="2024-12-13T13:33:45.705129800Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:33:45.724270 containerd[1483]: time="2024-12-13T13:33:45.724229409Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\"" Dec 13 13:33:45.724833 containerd[1483]: time="2024-12-13T13:33:45.724754767Z" level=info msg="StartContainer for \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\"" Dec 13 13:33:45.751112 systemd[1]: Started cri-containerd-176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358.scope - libcontainer container 176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358. Dec 13 13:33:45.781740 containerd[1483]: time="2024-12-13T13:33:45.781695095Z" level=info msg="StartContainer for \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\" returns successfully" Dec 13 13:33:45.783560 systemd[1]: cri-containerd-176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358.scope: Deactivated successfully. 
Dec 13 13:33:45.888873 containerd[1483]: time="2024-12-13T13:33:45.888807791Z" level=info msg="shim disconnected" id=176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358 namespace=k8s.io Dec 13 13:33:45.888873 containerd[1483]: time="2024-12-13T13:33:45.888879486Z" level=warning msg="cleaning up after shim disconnected" id=176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358 namespace=k8s.io Dec 13 13:33:45.889141 containerd[1483]: time="2024-12-13T13:33:45.888891248Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:33:45.988316 containerd[1483]: time="2024-12-13T13:33:45.988210293Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:45.989097 containerd[1483]: time="2024-12-13T13:33:45.989054461Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907221" Dec 13 13:33:45.990208 containerd[1483]: time="2024-12-13T13:33:45.990167795Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:33:45.991557 containerd[1483]: time="2024-12-13T13:33:45.991524668Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.984269875s" Dec 13 13:33:45.991607 containerd[1483]: time="2024-12-13T13:33:45.991561597Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 13:33:45.993257 containerd[1483]: time="2024-12-13T13:33:45.993229655Z" level=info msg="CreateContainer within sandbox \"f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 13:33:46.006726 containerd[1483]: time="2024-12-13T13:33:46.006687619Z" level=info msg="CreateContainer within sandbox \"f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\"" Dec 13 13:33:46.007278 containerd[1483]: time="2024-12-13T13:33:46.007246069Z" level=info msg="StartContainer for \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\"" Dec 13 13:33:46.030710 systemd[1]: run-containerd-runc-k8s.io-bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8-runc.MxW8VW.mount: Deactivated successfully. Dec 13 13:33:46.039110 systemd[1]: Started cri-containerd-bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8.scope - libcontainer container bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8. 
Dec 13 13:33:46.064382 containerd[1483]: time="2024-12-13T13:33:46.064342941Z" level=info msg="StartContainer for \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\" returns successfully" Dec 13 13:33:46.713007 kubelet[2672]: E1213 13:33:46.710017 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:46.715290 containerd[1483]: time="2024-12-13T13:33:46.715250918Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:33:46.716369 kubelet[2672]: E1213 13:33:46.716315 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:46.836541 kubelet[2672]: I1213 13:33:46.836501 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-n9mfc" podStartSLOduration=2.183546094 podStartE2EDuration="14.836457382s" podCreationTimestamp="2024-12-13 13:33:32 +0000 UTC" firstStartedPulling="2024-12-13 13:33:33.338836488 +0000 UTC m=+16.284380371" lastFinishedPulling="2024-12-13 13:33:45.991747777 +0000 UTC m=+28.937291659" observedRunningTime="2024-12-13 13:33:46.835976387 +0000 UTC m=+29.781520289" watchObservedRunningTime="2024-12-13 13:33:46.836457382 +0000 UTC m=+29.782001264" Dec 13 13:33:46.837251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096142158.mount: Deactivated successfully. 
Dec 13 13:33:46.839271 containerd[1483]: time="2024-12-13T13:33:46.839228113Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\"" Dec 13 13:33:46.839825 containerd[1483]: time="2024-12-13T13:33:46.839767177Z" level=info msg="StartContainer for \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\"" Dec 13 13:33:46.864112 systemd[1]: Started cri-containerd-31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829.scope - libcontainer container 31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829. Dec 13 13:33:46.886973 systemd[1]: cri-containerd-31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829.scope: Deactivated successfully. Dec 13 13:33:46.889290 containerd[1483]: time="2024-12-13T13:33:46.889257323Z" level=info msg="StartContainer for \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\" returns successfully" Dec 13 13:33:46.925834 containerd[1483]: time="2024-12-13T13:33:46.925771466Z" level=info msg="shim disconnected" id=31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829 namespace=k8s.io Dec 13 13:33:46.926086 containerd[1483]: time="2024-12-13T13:33:46.926068594Z" level=warning msg="cleaning up after shim disconnected" id=31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829 namespace=k8s.io Dec 13 13:33:46.926086 containerd[1483]: time="2024-12-13T13:33:46.926081358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:33:47.719411 kubelet[2672]: E1213 13:33:47.719378 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:47.719837 kubelet[2672]: E1213 13:33:47.719811 2672 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:47.722024 containerd[1483]: time="2024-12-13T13:33:47.721911691Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:33:47.739204 containerd[1483]: time="2024-12-13T13:33:47.739158751Z" level=info msg="CreateContainer within sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\"" Dec 13 13:33:47.740565 containerd[1483]: time="2024-12-13T13:33:47.739618056Z" level=info msg="StartContainer for \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\"" Dec 13 13:33:47.781108 systemd[1]: Started cri-containerd-449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb.scope - libcontainer container 449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb. 
Dec 13 13:33:47.808635 containerd[1483]: time="2024-12-13T13:33:47.808598885Z" level=info msg="StartContainer for \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\" returns successfully" Dec 13 13:33:47.915951 kubelet[2672]: I1213 13:33:47.915623 2672 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:33:47.933966 kubelet[2672]: I1213 13:33:47.933466 2672 topology_manager.go:215] "Topology Admit Handler" podUID="3fdfc5d2-6e9f-47f6-a750-671869481bc8" podNamespace="kube-system" podName="coredns-76f75df574-qc94g" Dec 13 13:33:47.934453 kubelet[2672]: I1213 13:33:47.934309 2672 topology_manager.go:215] "Topology Admit Handler" podUID="94d5b1be-2d93-4b66-9d2e-ae994aacc564" podNamespace="kube-system" podName="coredns-76f75df574-4bfc2" Dec 13 13:33:47.943189 systemd[1]: Created slice kubepods-burstable-pod3fdfc5d2_6e9f_47f6_a750_671869481bc8.slice - libcontainer container kubepods-burstable-pod3fdfc5d2_6e9f_47f6_a750_671869481bc8.slice. Dec 13 13:33:47.953308 systemd[1]: Created slice kubepods-burstable-pod94d5b1be_2d93_4b66_9d2e_ae994aacc564.slice - libcontainer container kubepods-burstable-pod94d5b1be_2d93_4b66_9d2e_ae994aacc564.slice. 
Dec 13 13:33:48.022853 kubelet[2672]: I1213 13:33:48.022758 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94d5b1be-2d93-4b66-9d2e-ae994aacc564-config-volume\") pod \"coredns-76f75df574-4bfc2\" (UID: \"94d5b1be-2d93-4b66-9d2e-ae994aacc564\") " pod="kube-system/coredns-76f75df574-4bfc2" Dec 13 13:33:48.022853 kubelet[2672]: I1213 13:33:48.022799 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74w4\" (UniqueName: \"kubernetes.io/projected/94d5b1be-2d93-4b66-9d2e-ae994aacc564-kube-api-access-z74w4\") pod \"coredns-76f75df574-4bfc2\" (UID: \"94d5b1be-2d93-4b66-9d2e-ae994aacc564\") " pod="kube-system/coredns-76f75df574-4bfc2" Dec 13 13:33:48.022853 kubelet[2672]: I1213 13:33:48.022822 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fdfc5d2-6e9f-47f6-a750-671869481bc8-config-volume\") pod \"coredns-76f75df574-qc94g\" (UID: \"3fdfc5d2-6e9f-47f6-a750-671869481bc8\") " pod="kube-system/coredns-76f75df574-qc94g" Dec 13 13:33:48.022853 kubelet[2672]: I1213 13:33:48.022844 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccqwn\" (UniqueName: \"kubernetes.io/projected/3fdfc5d2-6e9f-47f6-a750-671869481bc8-kube-api-access-ccqwn\") pod \"coredns-76f75df574-qc94g\" (UID: \"3fdfc5d2-6e9f-47f6-a750-671869481bc8\") " pod="kube-system/coredns-76f75df574-qc94g" Dec 13 13:33:48.252415 kubelet[2672]: E1213 13:33:48.252358 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:48.253029 containerd[1483]: time="2024-12-13T13:33:48.252970430Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-qc94g,Uid:3fdfc5d2-6e9f-47f6-a750-671869481bc8,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:48.256623 kubelet[2672]: E1213 13:33:48.256595 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:48.257909 containerd[1483]: time="2024-12-13T13:33:48.257809578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4bfc2,Uid:94d5b1be-2d93-4b66-9d2e-ae994aacc564,Namespace:kube-system,Attempt:0,}" Dec 13 13:33:48.724236 kubelet[2672]: E1213 13:33:48.724207 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:48.735558 kubelet[2672]: I1213 13:33:48.735511 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kqlw5" podStartSLOduration=5.902298231 podStartE2EDuration="16.735473514s" podCreationTimestamp="2024-12-13 13:33:32 +0000 UTC" firstStartedPulling="2024-12-13 13:33:33.173914369 +0000 UTC m=+16.119458251" lastFinishedPulling="2024-12-13 13:33:44.007089652 +0000 UTC m=+26.952633534" observedRunningTime="2024-12-13 13:33:48.735107255 +0000 UTC m=+31.680651137" watchObservedRunningTime="2024-12-13 13:33:48.735473514 +0000 UTC m=+31.681017396" Dec 13 13:33:49.725945 kubelet[2672]: E1213 13:33:49.725906 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:49.818517 systemd[1]: Started sshd@10-10.0.0.150:22-10.0.0.1:55786.service - OpenSSH per-connection server daemon (10.0.0.1:55786). 
Dec 13 13:33:49.859534 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 55786 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:33:49.860999 sshd-session[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:49.864762 systemd-logind[1464]: New session 11 of user core. Dec 13 13:33:49.872104 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 13:33:49.955736 systemd-networkd[1414]: cilium_host: Link UP Dec 13 13:33:49.955927 systemd-networkd[1414]: cilium_net: Link UP Dec 13 13:33:49.955930 systemd-networkd[1414]: cilium_net: Gained carrier Dec 13 13:33:49.956170 systemd-networkd[1414]: cilium_host: Gained carrier Dec 13 13:33:50.005627 sshd[3537]: Connection closed by 10.0.0.1 port 55786 Dec 13 13:33:50.005881 sshd-session[3535]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:50.009167 systemd[1]: sshd@10-10.0.0.150:22-10.0.0.1:55786.service: Deactivated successfully. Dec 13 13:33:50.011283 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:33:50.011362 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:33:50.013232 systemd-logind[1464]: Removed session 11. 
Dec 13 13:33:50.058061 systemd-networkd[1414]: cilium_net: Gained IPv6LL Dec 13 13:33:50.083378 systemd-networkd[1414]: cilium_vxlan: Link UP Dec 13 13:33:50.083387 systemd-networkd[1414]: cilium_vxlan: Gained carrier Dec 13 13:33:50.282013 kernel: NET: Registered PF_ALG protocol family Dec 13 13:33:50.409184 systemd-networkd[1414]: cilium_host: Gained IPv6LL Dec 13 13:33:50.727388 kubelet[2672]: E1213 13:33:50.727293 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:50.916011 systemd-networkd[1414]: lxc_health: Link UP Dec 13 13:33:50.919527 systemd-networkd[1414]: lxc_health: Gained carrier Dec 13 13:33:51.333756 systemd-networkd[1414]: lxc38dad750ff2e: Link UP Dec 13 13:33:51.337603 kernel: eth0: renamed from tmp149b0 Dec 13 13:33:51.355568 systemd-networkd[1414]: lxc50b6aaf3c598: Link UP Dec 13 13:33:51.359113 kernel: eth0: renamed from tmp385e7 Dec 13 13:33:51.365043 systemd-networkd[1414]: lxc38dad750ff2e: Gained carrier Dec 13 13:33:51.365480 systemd-networkd[1414]: cilium_vxlan: Gained IPv6LL Dec 13 13:33:51.366691 systemd-networkd[1414]: lxc50b6aaf3c598: Gained carrier Dec 13 13:33:51.728870 kubelet[2672]: E1213 13:33:51.728754 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:52.730248 kubelet[2672]: E1213 13:33:52.730210 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:52.881568 systemd-networkd[1414]: lxc_health: Gained IPv6LL Dec 13 13:33:53.265218 systemd-networkd[1414]: lxc50b6aaf3c598: Gained IPv6LL Dec 13 13:33:53.329244 systemd-networkd[1414]: lxc38dad750ff2e: Gained IPv6LL Dec 13 13:33:53.731941 kubelet[2672]: E1213 13:33:53.731908 
2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:54.564466 containerd[1483]: time="2024-12-13T13:33:54.564282742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:54.564873 containerd[1483]: time="2024-12-13T13:33:54.564446519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:54.564873 containerd[1483]: time="2024-12-13T13:33:54.564472728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:54.564873 containerd[1483]: time="2024-12-13T13:33:54.564543621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:54.565817 containerd[1483]: time="2024-12-13T13:33:54.565712377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:33:54.565817 containerd[1483]: time="2024-12-13T13:33:54.565772359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:33:54.565817 containerd[1483]: time="2024-12-13T13:33:54.565783150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:54.566652 containerd[1483]: time="2024-12-13T13:33:54.566527449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:33:54.587127 systemd[1]: Started cri-containerd-149b07ff6fc0fff436c8737c3c9c6c939ff30028089c4101592f4a1312ee73ac.scope - libcontainer container 149b07ff6fc0fff436c8737c3c9c6c939ff30028089c4101592f4a1312ee73ac. Dec 13 13:33:54.591764 systemd[1]: Started cri-containerd-385e796568856f5dd6630fe318ad6c1b0a206eb107400060d7ddbbc38ce00909.scope - libcontainer container 385e796568856f5dd6630fe318ad6c1b0a206eb107400060d7ddbbc38ce00909. Dec 13 13:33:54.599029 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:33:54.603737 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:33:54.624118 containerd[1483]: time="2024-12-13T13:33:54.624066318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qc94g,Uid:3fdfc5d2-6e9f-47f6-a750-671869481bc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"149b07ff6fc0fff436c8737c3c9c6c939ff30028089c4101592f4a1312ee73ac\"" Dec 13 13:33:54.624573 kubelet[2672]: E1213 13:33:54.624546 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:54.627122 containerd[1483]: time="2024-12-13T13:33:54.626399931Z" level=info msg="CreateContainer within sandbox \"149b07ff6fc0fff436c8737c3c9c6c939ff30028089c4101592f4a1312ee73ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:33:54.627794 containerd[1483]: time="2024-12-13T13:33:54.627758544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4bfc2,Uid:94d5b1be-2d93-4b66-9d2e-ae994aacc564,Namespace:kube-system,Attempt:0,} returns sandbox id \"385e796568856f5dd6630fe318ad6c1b0a206eb107400060d7ddbbc38ce00909\"" Dec 13 13:33:54.628396 kubelet[2672]: E1213 13:33:54.628362 2672 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:54.630247 containerd[1483]: time="2024-12-13T13:33:54.630229156Z" level=info msg="CreateContainer within sandbox \"385e796568856f5dd6630fe318ad6c1b0a206eb107400060d7ddbbc38ce00909\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:33:55.018737 systemd[1]: Started sshd@11-10.0.0.150:22-10.0.0.1:55794.service - OpenSSH per-connection server daemon (10.0.0.1:55794). Dec 13 13:33:55.061212 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 55794 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:33:55.062789 sshd-session[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:33:55.066641 systemd-logind[1464]: New session 12 of user core. Dec 13 13:33:55.075101 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:33:55.146688 containerd[1483]: time="2024-12-13T13:33:55.146634769Z" level=info msg="CreateContainer within sandbox \"149b07ff6fc0fff436c8737c3c9c6c939ff30028089c4101592f4a1312ee73ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44df57c5be86098c83b9bfd221d8c4a83765af8e1f206d6f0d574555829ecbf9\"" Dec 13 13:33:55.147599 containerd[1483]: time="2024-12-13T13:33:55.147173081Z" level=info msg="StartContainer for \"44df57c5be86098c83b9bfd221d8c4a83765af8e1f206d6f0d574555829ecbf9\"" Dec 13 13:33:55.148257 containerd[1483]: time="2024-12-13T13:33:55.148207814Z" level=info msg="CreateContainer within sandbox \"385e796568856f5dd6630fe318ad6c1b0a206eb107400060d7ddbbc38ce00909\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77aeeeda1fa67b15dfd3cfc6ca084b7a9793377c7d640790a3d8156d25ec862f\"" Dec 13 13:33:55.148658 containerd[1483]: time="2024-12-13T13:33:55.148626481Z" level=info msg="StartContainer for 
\"77aeeeda1fa67b15dfd3cfc6ca084b7a9793377c7d640790a3d8156d25ec862f\"" Dec 13 13:33:55.177107 systemd[1]: Started cri-containerd-44df57c5be86098c83b9bfd221d8c4a83765af8e1f206d6f0d574555829ecbf9.scope - libcontainer container 44df57c5be86098c83b9bfd221d8c4a83765af8e1f206d6f0d574555829ecbf9. Dec 13 13:33:55.178280 systemd[1]: Started cri-containerd-77aeeeda1fa67b15dfd3cfc6ca084b7a9793377c7d640790a3d8156d25ec862f.scope - libcontainer container 77aeeeda1fa67b15dfd3cfc6ca084b7a9793377c7d640790a3d8156d25ec862f. Dec 13 13:33:55.218567 sshd[4015]: Connection closed by 10.0.0.1 port 55794 Dec 13 13:33:55.218900 sshd-session[4013]: pam_unix(sshd:session): session closed for user core Dec 13 13:33:55.223132 systemd[1]: sshd@11-10.0.0.150:22-10.0.0.1:55794.service: Deactivated successfully. Dec 13 13:33:55.225278 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 13:33:55.225884 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:33:55.226728 systemd-logind[1464]: Removed session 12. Dec 13 13:33:55.289049 containerd[1483]: time="2024-12-13T13:33:55.288901737Z" level=info msg="StartContainer for \"44df57c5be86098c83b9bfd221d8c4a83765af8e1f206d6f0d574555829ecbf9\" returns successfully" Dec 13 13:33:55.289049 containerd[1483]: time="2024-12-13T13:33:55.288931122Z" level=info msg="StartContainer for \"77aeeeda1fa67b15dfd3cfc6ca084b7a9793377c7d640790a3d8156d25ec862f\" returns successfully" Dec 13 13:33:55.570248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281172585.mount: Deactivated successfully. 
Dec 13 13:33:55.737416 kubelet[2672]: E1213 13:33:55.737380 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:55.740379 kubelet[2672]: E1213 13:33:55.740309 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:55.747282 kubelet[2672]: I1213 13:33:55.747245 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qc94g" podStartSLOduration=23.747205612 podStartE2EDuration="23.747205612s" podCreationTimestamp="2024-12-13 13:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:55.746925646 +0000 UTC m=+38.692469528" watchObservedRunningTime="2024-12-13 13:33:55.747205612 +0000 UTC m=+38.692749494" Dec 13 13:33:56.743754 kubelet[2672]: E1213 13:33:56.743718 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:56.744205 kubelet[2672]: E1213 13:33:56.743769 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:57.745596 kubelet[2672]: E1213 13:33:57.745564 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:33:57.746037 kubelet[2672]: E1213 13:33:57.745865 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
13:34:00.230018 systemd[1]: Started sshd@12-10.0.0.150:22-10.0.0.1:36700.service - OpenSSH per-connection server daemon (10.0.0.1:36700). Dec 13 13:34:00.269340 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 36700 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:34:00.270735 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:00.274518 systemd-logind[1464]: New session 13 of user core. Dec 13 13:34:00.285101 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:34:00.394510 sshd[4121]: Connection closed by 10.0.0.1 port 36700 Dec 13 13:34:00.394944 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:00.404684 systemd[1]: sshd@12-10.0.0.150:22-10.0.0.1:36700.service: Deactivated successfully. Dec 13 13:34:00.406416 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:34:00.408199 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:34:00.417371 systemd[1]: Started sshd@13-10.0.0.150:22-10.0.0.1:36716.service - OpenSSH per-connection server daemon (10.0.0.1:36716). Dec 13 13:34:00.418272 systemd-logind[1464]: Removed session 13. Dec 13 13:34:00.448933 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 36716 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:34:00.450201 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:00.453909 systemd-logind[1464]: New session 14 of user core. Dec 13 13:34:00.463098 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:34:00.601207 sshd[4136]: Connection closed by 10.0.0.1 port 36716 Dec 13 13:34:00.601801 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:00.611332 systemd[1]: sshd@13-10.0.0.150:22-10.0.0.1:36716.service: Deactivated successfully. 
Dec 13 13:34:00.613461 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:34:00.617251 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit. Dec 13 13:34:00.623352 systemd[1]: Started sshd@14-10.0.0.150:22-10.0.0.1:36720.service - OpenSSH per-connection server daemon (10.0.0.1:36720). Dec 13 13:34:00.624392 systemd-logind[1464]: Removed session 14. Dec 13 13:34:00.657842 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 36720 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:34:00.659240 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:00.663069 systemd-logind[1464]: New session 15 of user core. Dec 13 13:34:00.672096 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:34:00.779432 sshd[4148]: Connection closed by 10.0.0.1 port 36720 Dec 13 13:34:00.779790 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:00.784304 systemd[1]: sshd@14-10.0.0.150:22-10.0.0.1:36720.service: Deactivated successfully. Dec 13 13:34:00.786698 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:34:00.787424 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:34:00.788308 systemd-logind[1464]: Removed session 15. Dec 13 13:34:05.798712 systemd[1]: Started sshd@15-10.0.0.150:22-10.0.0.1:36730.service - OpenSSH per-connection server daemon (10.0.0.1:36730). Dec 13 13:34:05.834823 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 36730 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:34:05.836178 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:05.839763 systemd-logind[1464]: New session 16 of user core. Dec 13 13:34:05.849096 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 13:34:05.954782 sshd[4166]: Connection closed by 10.0.0.1 port 36730
Dec 13 13:34:05.955167 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:05.959250 systemd[1]: sshd@15-10.0.0.150:22-10.0.0.1:36730.service: Deactivated successfully.
Dec 13 13:34:05.961398 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 13:34:05.962267 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit.
Dec 13 13:34:05.963176 systemd-logind[1464]: Removed session 16.
Dec 13 13:34:10.970041 systemd[1]: Started sshd@16-10.0.0.150:22-10.0.0.1:34628.service - OpenSSH per-connection server daemon (10.0.0.1:34628).
Dec 13 13:34:11.005474 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 34628 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:11.006861 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:11.010412 systemd-logind[1464]: New session 17 of user core.
Dec 13 13:34:11.019102 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 13:34:11.121569 sshd[4180]: Connection closed by 10.0.0.1 port 34628
Dec 13 13:34:11.121890 sshd-session[4178]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:11.132665 systemd[1]: sshd@16-10.0.0.150:22-10.0.0.1:34628.service: Deactivated successfully.
Dec 13 13:34:11.134377 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 13:34:11.135766 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit.
Dec 13 13:34:11.144522 systemd[1]: Started sshd@17-10.0.0.150:22-10.0.0.1:34634.service - OpenSSH per-connection server daemon (10.0.0.1:34634).
Dec 13 13:34:11.145408 systemd-logind[1464]: Removed session 17.
Dec 13 13:34:11.175674 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 34634 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:11.176916 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:11.180727 systemd-logind[1464]: New session 18 of user core.
Dec 13 13:34:11.191103 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 13:34:11.354113 sshd[4194]: Connection closed by 10.0.0.1 port 34634
Dec 13 13:34:11.354545 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:11.367867 systemd[1]: sshd@17-10.0.0.150:22-10.0.0.1:34634.service: Deactivated successfully.
Dec 13 13:34:11.369822 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 13:34:11.371503 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit.
Dec 13 13:34:11.380209 systemd[1]: Started sshd@18-10.0.0.150:22-10.0.0.1:34636.service - OpenSSH per-connection server daemon (10.0.0.1:34636).
Dec 13 13:34:11.380962 systemd-logind[1464]: Removed session 18.
Dec 13 13:34:11.415663 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 34636 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:11.416933 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:11.420594 systemd-logind[1464]: New session 19 of user core.
Dec 13 13:34:11.426098 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 13:34:12.770163 sshd[4207]: Connection closed by 10.0.0.1 port 34636
Dec 13 13:34:12.771282 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:12.785304 systemd[1]: sshd@18-10.0.0.150:22-10.0.0.1:34636.service: Deactivated successfully.
Dec 13 13:34:12.787934 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 13:34:12.789384 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit.
Dec 13 13:34:12.797296 systemd[1]: Started sshd@19-10.0.0.150:22-10.0.0.1:34640.service - OpenSSH per-connection server daemon (10.0.0.1:34640).
Dec 13 13:34:12.798268 systemd-logind[1464]: Removed session 19.
Dec 13 13:34:12.830528 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 34640 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:12.832145 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:12.836016 systemd-logind[1464]: New session 20 of user core.
Dec 13 13:34:12.845090 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 13:34:13.064932 sshd[4230]: Connection closed by 10.0.0.1 port 34640
Dec 13 13:34:13.066559 sshd-session[4227]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:13.074936 systemd[1]: sshd@19-10.0.0.150:22-10.0.0.1:34640.service: Deactivated successfully.
Dec 13 13:34:13.076882 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 13:34:13.078649 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit.
Dec 13 13:34:13.080009 systemd[1]: Started sshd@20-10.0.0.150:22-10.0.0.1:34650.service - OpenSSH per-connection server daemon (10.0.0.1:34650).
Dec 13 13:34:13.080757 systemd-logind[1464]: Removed session 20.
Dec 13 13:34:13.117294 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 34650 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:13.118812 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:13.123240 systemd-logind[1464]: New session 21 of user core.
Dec 13 13:34:13.133205 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 13:34:13.255892 sshd[4243]: Connection closed by 10.0.0.1 port 34650
Dec 13 13:34:13.256308 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:13.260566 systemd[1]: sshd@20-10.0.0.150:22-10.0.0.1:34650.service: Deactivated successfully.
Dec 13 13:34:13.262769 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 13:34:13.263392 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit.
Dec 13 13:34:13.264305 systemd-logind[1464]: Removed session 21.
Dec 13 13:34:18.271112 systemd[1]: Started sshd@21-10.0.0.150:22-10.0.0.1:42936.service - OpenSSH per-connection server daemon (10.0.0.1:42936).
Dec 13 13:34:18.308320 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 42936 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:18.309812 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:18.313367 systemd-logind[1464]: New session 22 of user core.
Dec 13 13:34:18.326114 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 13:34:18.431233 sshd[4259]: Connection closed by 10.0.0.1 port 42936
Dec 13 13:34:18.431580 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:18.435602 systemd[1]: sshd@21-10.0.0.150:22-10.0.0.1:42936.service: Deactivated successfully.
Dec 13 13:34:18.437679 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 13:34:18.438278 systemd-logind[1464]: Session 22 logged out. Waiting for processes to exit.
Dec 13 13:34:18.439093 systemd-logind[1464]: Removed session 22.
Dec 13 13:34:23.442737 systemd[1]: Started sshd@22-10.0.0.150:22-10.0.0.1:42948.service - OpenSSH per-connection server daemon (10.0.0.1:42948).
Dec 13 13:34:23.478753 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 42948 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:23.480116 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:23.483584 systemd-logind[1464]: New session 23 of user core.
Dec 13 13:34:23.491101 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 13:34:23.596874 sshd[4276]: Connection closed by 10.0.0.1 port 42948
Dec 13 13:34:23.597218 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:23.601197 systemd[1]: sshd@22-10.0.0.150:22-10.0.0.1:42948.service: Deactivated successfully.
Dec 13 13:34:23.603305 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 13:34:23.603876 systemd-logind[1464]: Session 23 logged out. Waiting for processes to exit.
Dec 13 13:34:23.604786 systemd-logind[1464]: Removed session 23.
Dec 13 13:34:28.609046 systemd[1]: Started sshd@23-10.0.0.150:22-10.0.0.1:43824.service - OpenSSH per-connection server daemon (10.0.0.1:43824).
Dec 13 13:34:28.645792 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 43824 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:28.647317 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:28.651017 systemd-logind[1464]: New session 24 of user core.
Dec 13 13:34:28.658127 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 13:34:28.767393 sshd[4290]: Connection closed by 10.0.0.1 port 43824
Dec 13 13:34:28.767764 sshd-session[4288]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:28.771300 systemd[1]: sshd@23-10.0.0.150:22-10.0.0.1:43824.service: Deactivated successfully.
Dec 13 13:34:28.773333 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 13:34:28.774030 systemd-logind[1464]: Session 24 logged out. Waiting for processes to exit.
Dec 13 13:34:28.774889 systemd-logind[1464]: Removed session 24.
Dec 13 13:34:31.133000 kubelet[2672]: E1213 13:34:31.132940 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:33.132374 kubelet[2672]: E1213 13:34:33.132344 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:33.778771 systemd[1]: Started sshd@24-10.0.0.150:22-10.0.0.1:43836.service - OpenSSH per-connection server daemon (10.0.0.1:43836).
Dec 13 13:34:33.815369 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 43836 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:33.816799 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:33.820798 systemd-logind[1464]: New session 25 of user core.
Dec 13 13:34:33.831109 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 13:34:33.937072 sshd[4307]: Connection closed by 10.0.0.1 port 43836
Dec 13 13:34:33.937479 sshd-session[4305]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:33.950568 systemd[1]: sshd@24-10.0.0.150:22-10.0.0.1:43836.service: Deactivated successfully.
Dec 13 13:34:33.952647 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 13:34:33.954358 systemd-logind[1464]: Session 25 logged out. Waiting for processes to exit.
Dec 13 13:34:33.959316 systemd[1]: Started sshd@25-10.0.0.150:22-10.0.0.1:43842.service - OpenSSH per-connection server daemon (10.0.0.1:43842).
Dec 13 13:34:33.960319 systemd-logind[1464]: Removed session 25.
Dec 13 13:34:33.997832 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 43842 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4
Dec 13 13:34:33.999463 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:34:34.003498 systemd-logind[1464]: New session 26 of user core.
Dec 13 13:34:34.019208 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 13:34:35.544523 kubelet[2672]: I1213 13:34:35.544468 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4bfc2" podStartSLOduration=63.544430058 podStartE2EDuration="1m3.544430058s" podCreationTimestamp="2024-12-13 13:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:33:55.765240307 +0000 UTC m=+38.710784189" watchObservedRunningTime="2024-12-13 13:34:35.544430058 +0000 UTC m=+78.489973940"
Dec 13 13:34:35.550007 containerd[1483]: time="2024-12-13T13:34:35.549256549Z" level=info msg="StopContainer for \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\" with timeout 30 (s)"
Dec 13 13:34:35.554628 containerd[1483]: time="2024-12-13T13:34:35.554584840Z" level=info msg="Stop container \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\" with signal terminated"
Dec 13 13:34:35.568049 systemd[1]: cri-containerd-bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8.scope: Deactivated successfully.
Dec 13 13:34:35.581815 containerd[1483]: time="2024-12-13T13:34:35.581775613Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:34:35.582387 containerd[1483]: time="2024-12-13T13:34:35.582357174Z" level=info msg="StopContainer for \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\" with timeout 2 (s)"
Dec 13 13:34:35.582692 containerd[1483]: time="2024-12-13T13:34:35.582662888Z" level=info msg="Stop container \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\" with signal terminated"
Dec 13 13:34:35.588920 systemd-networkd[1414]: lxc_health: Link DOWN
Dec 13 13:34:35.589245 systemd-networkd[1414]: lxc_health: Lost carrier
Dec 13 13:34:35.593150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8-rootfs.mount: Deactivated successfully.
Dec 13 13:34:35.603378 containerd[1483]: time="2024-12-13T13:34:35.603331078Z" level=info msg="shim disconnected" id=bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8 namespace=k8s.io
Dec 13 13:34:35.603378 containerd[1483]: time="2024-12-13T13:34:35.603377026Z" level=warning msg="cleaning up after shim disconnected" id=bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8 namespace=k8s.io
Dec 13 13:34:35.603563 containerd[1483]: time="2024-12-13T13:34:35.603385292Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:34:35.615104 systemd[1]: cri-containerd-449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb.scope: Deactivated successfully.
Dec 13 13:34:35.615388 systemd[1]: cri-containerd-449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb.scope: Consumed 6.370s CPU time.
Dec 13 13:34:35.621117 containerd[1483]: time="2024-12-13T13:34:35.621086633Z" level=info msg="StopContainer for \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\" returns successfully"
Dec 13 13:34:35.628019 containerd[1483]: time="2024-12-13T13:34:35.627968492Z" level=info msg="StopPodSandbox for \"f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd\""
Dec 13 13:34:35.628069 containerd[1483]: time="2024-12-13T13:34:35.628027975Z" level=info msg="Container to stop \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:34:35.630104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd-shm.mount: Deactivated successfully.
Dec 13 13:34:35.634678 systemd[1]: cri-containerd-f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd.scope: Deactivated successfully.
Dec 13 13:34:35.650087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb-rootfs.mount: Deactivated successfully.
Dec 13 13:34:35.654767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd-rootfs.mount: Deactivated successfully.
Dec 13 13:34:35.656905 containerd[1483]: time="2024-12-13T13:34:35.656855135Z" level=info msg="shim disconnected" id=449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb namespace=k8s.io
Dec 13 13:34:35.657040 containerd[1483]: time="2024-12-13T13:34:35.657011905Z" level=warning msg="cleaning up after shim disconnected" id=449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb namespace=k8s.io
Dec 13 13:34:35.657040 containerd[1483]: time="2024-12-13T13:34:35.657027284Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:34:35.657865 containerd[1483]: time="2024-12-13T13:34:35.657781194Z" level=info msg="shim disconnected" id=f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd namespace=k8s.io
Dec 13 13:34:35.657865 containerd[1483]: time="2024-12-13T13:34:35.657832923Z" level=warning msg="cleaning up after shim disconnected" id=f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd namespace=k8s.io
Dec 13 13:34:35.657865 containerd[1483]: time="2024-12-13T13:34:35.657841409Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:34:35.671324 containerd[1483]: time="2024-12-13T13:34:35.671273975Z" level=info msg="TearDown network for sandbox \"f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd\" successfully"
Dec 13 13:34:35.671324 containerd[1483]: time="2024-12-13T13:34:35.671307529Z" level=info msg="StopPodSandbox for \"f046ed1d01401bb8aafb05c4eda671d71e41f314516b36bcd1eb4a697b14d7bd\" returns successfully"
Dec 13 13:34:35.673287 containerd[1483]: time="2024-12-13T13:34:35.673250562Z" level=info msg="StopContainer for \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\" returns successfully"
Dec 13 13:34:35.673705 containerd[1483]: time="2024-12-13T13:34:35.673669431Z" level=info msg="StopPodSandbox for \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\""
Dec 13 13:34:35.681826 containerd[1483]: time="2024-12-13T13:34:35.673716552Z" level=info msg="Container to stop \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:34:35.681826 containerd[1483]: time="2024-12-13T13:34:35.681818482Z" level=info msg="Container to stop \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:34:35.681826 containerd[1483]: time="2024-12-13T13:34:35.681828491Z" level=info msg="Container to stop \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:34:35.682002 containerd[1483]: time="2024-12-13T13:34:35.681837337Z" level=info msg="Container to stop \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:34:35.682002 containerd[1483]: time="2024-12-13T13:34:35.681845714Z" level=info msg="Container to stop \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:34:35.687635 systemd[1]: cri-containerd-40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2.scope: Deactivated successfully.
Dec 13 13:34:35.707951 containerd[1483]: time="2024-12-13T13:34:35.707843256Z" level=info msg="shim disconnected" id=40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2 namespace=k8s.io
Dec 13 13:34:35.707951 containerd[1483]: time="2024-12-13T13:34:35.707891789Z" level=warning msg="cleaning up after shim disconnected" id=40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2 namespace=k8s.io
Dec 13 13:34:35.707951 containerd[1483]: time="2024-12-13T13:34:35.707900115Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:34:35.723649 containerd[1483]: time="2024-12-13T13:34:35.723595214Z" level=info msg="TearDown network for sandbox \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" successfully"
Dec 13 13:34:35.723649 containerd[1483]: time="2024-12-13T13:34:35.723634899Z" level=info msg="StopPodSandbox for \"40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2\" returns successfully"
Dec 13 13:34:35.823257 kubelet[2672]: I1213 13:34:35.823143 2672 scope.go:117] "RemoveContainer" containerID="bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8"
Dec 13 13:34:35.829903 containerd[1483]: time="2024-12-13T13:34:35.829863748Z" level=info msg="RemoveContainer for \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\""
Dec 13 13:34:35.833186 containerd[1483]: time="2024-12-13T13:34:35.833158432Z" level=info msg="RemoveContainer for \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\" returns successfully"
Dec 13 13:34:35.833352 kubelet[2672]: I1213 13:34:35.833325 2672 scope.go:117] "RemoveContainer" containerID="bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8"
Dec 13 13:34:35.833510 containerd[1483]: time="2024-12-13T13:34:35.833468866Z" level=error msg="ContainerStatus for \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\": not found"
Dec 13 13:34:35.839047 kubelet[2672]: E1213 13:34:35.839021 2672 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\": not found" containerID="bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8"
Dec 13 13:34:35.839160 kubelet[2672]: I1213 13:34:35.839098 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8"} err="failed to get container status \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfd67d5e5303d7ec22591df20c56e93e5e152214a7dcf6d9825441cbe843ecd8\": not found"
Dec 13 13:34:35.839160 kubelet[2672]: I1213 13:34:35.839110 2672 scope.go:117] "RemoveContainer" containerID="449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb"
Dec 13 13:34:35.839954 containerd[1483]: time="2024-12-13T13:34:35.839919632Z" level=info msg="RemoveContainer for \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\""
Dec 13 13:34:35.843191 containerd[1483]: time="2024-12-13T13:34:35.843159892Z" level=info msg="RemoveContainer for \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\" returns successfully"
Dec 13 13:34:35.843316 kubelet[2672]: I1213 13:34:35.843294 2672 scope.go:117] "RemoveContainer" containerID="31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829"
Dec 13 13:34:35.844039 containerd[1483]: time="2024-12-13T13:34:35.844017682Z" level=info msg="RemoveContainer for \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\""
Dec 13 13:34:35.846924 containerd[1483]: time="2024-12-13T13:34:35.846897273Z" level=info msg="RemoveContainer for \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\" returns successfully"
Dec 13 13:34:35.847103 kubelet[2672]: I1213 13:34:35.847043 2672 scope.go:117] "RemoveContainer" containerID="176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358"
Dec 13 13:34:35.847774 containerd[1483]: time="2024-12-13T13:34:35.847748098Z" level=info msg="RemoveContainer for \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\""
Dec 13 13:34:35.848354 kubelet[2672]: I1213 13:34:35.847873 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-xtables-lock\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848354 kubelet[2672]: I1213 13:34:35.847903 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-etc-cni-netd\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848354 kubelet[2672]: I1213 13:34:35.847928 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfe1746f-38e1-402b-9078-07776488c8e1-hubble-tls\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848354 kubelet[2672]: I1213 13:34:35.847913 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.848354 kubelet[2672]: I1213 13:34:35.847945 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-lib-modules\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848354 kubelet[2672]: I1213 13:34:35.847953 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.848546 kubelet[2672]: I1213 13:34:35.847967 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-hostproc\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848546 kubelet[2672]: I1213 13:34:35.847995 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.848546 kubelet[2672]: I1213 13:34:35.847999 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-cgroup\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848546 kubelet[2672]: I1213 13:34:35.848025 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.848546 kubelet[2672]: I1213 13:34:35.848039 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c744140a-9647-4cca-9ad6-afda8fa04dc7-cilium-config-path\") pod \"c744140a-9647-4cca-9ad6-afda8fa04dc7\" (UID: \"c744140a-9647-4cca-9ad6-afda8fa04dc7\") "
Dec 13 13:34:35.848672 kubelet[2672]: I1213 13:34:35.848061 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wt86\" (UniqueName: \"kubernetes.io/projected/c744140a-9647-4cca-9ad6-afda8fa04dc7-kube-api-access-6wt86\") pod \"c744140a-9647-4cca-9ad6-afda8fa04dc7\" (UID: \"c744140a-9647-4cca-9ad6-afda8fa04dc7\") "
Dec 13 13:34:35.848672 kubelet[2672]: I1213 13:34:35.848069 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-hostproc" (OuterVolumeSpecName: "hostproc") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.848672 kubelet[2672]: I1213 13:34:35.848082 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-host-proc-sys-net\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848672 kubelet[2672]: I1213 13:34:35.848103 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.848672 kubelet[2672]: I1213 13:34:35.848111 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cni-path\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848790 kubelet[2672]: I1213 13:34:35.848141 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-host-proc-sys-kernel\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848790 kubelet[2672]: I1213 13:34:35.848167 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfe1746f-38e1-402b-9078-07776488c8e1-clustermesh-secrets\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848790 kubelet[2672]: I1213 13:34:35.848185 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-run\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848790 kubelet[2672]: I1213 13:34:35.848216 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqpf9\" (UniqueName: \"kubernetes.io/projected/bfe1746f-38e1-402b-9078-07776488c8e1-kube-api-access-cqpf9\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848790 kubelet[2672]: I1213 13:34:35.848236 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-config-path\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848790 kubelet[2672]: I1213 13:34:35.848254 2672 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-bpf-maps\") pod \"bfe1746f-38e1-402b-9078-07776488c8e1\" (UID: \"bfe1746f-38e1-402b-9078-07776488c8e1\") "
Dec 13 13:34:35.848927 kubelet[2672]: I1213 13:34:35.848293 2672 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 13:34:35.848927 kubelet[2672]: I1213 13:34:35.848305 2672 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 13:34:35.848927 kubelet[2672]: I1213 13:34:35.848316 2672 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 13:34:35.848927 kubelet[2672]: I1213 13:34:35.848325 2672 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 13:34:35.848927 kubelet[2672]: I1213 13:34:35.848335 2672 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 13:34:35.848927 kubelet[2672]: I1213 13:34:35.848345 2672 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 13:34:35.848927 kubelet[2672]: I1213 13:34:35.848362 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.849212 kubelet[2672]: I1213 13:34:35.848384 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cni-path" (OuterVolumeSpecName: "cni-path") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.849212 kubelet[2672]: I1213 13:34:35.848399 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.851092 kubelet[2672]: I1213 13:34:35.850159 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:34:35.851568 kubelet[2672]: I1213 13:34:35.851535 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c744140a-9647-4cca-9ad6-afda8fa04dc7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c744140a-9647-4cca-9ad6-afda8fa04dc7" (UID: "c744140a-9647-4cca-9ad6-afda8fa04dc7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 13:34:35.851608 containerd[1483]: time="2024-12-13T13:34:35.851586132Z" level=info msg="RemoveContainer for \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\" returns successfully"
Dec 13 13:34:35.851754 kubelet[2672]: I1213 13:34:35.851732 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c744140a-9647-4cca-9ad6-afda8fa04dc7-kube-api-access-6wt86" (OuterVolumeSpecName: "kube-api-access-6wt86") pod "c744140a-9647-4cca-9ad6-afda8fa04dc7" (UID: "c744140a-9647-4cca-9ad6-afda8fa04dc7").
InnerVolumeSpecName "kube-api-access-6wt86". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:34:35.852412 kubelet[2672]: I1213 13:34:35.852387 2672 scope.go:117] "RemoveContainer" containerID="46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e" Dec 13 13:34:35.853339 containerd[1483]: time="2024-12-13T13:34:35.853304485Z" level=info msg="RemoveContainer for \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\"" Dec 13 13:34:35.853386 kubelet[2672]: I1213 13:34:35.853318 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfe1746f-38e1-402b-9078-07776488c8e1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 13:34:35.853769 kubelet[2672]: I1213 13:34:35.853738 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:34:35.853905 kubelet[2672]: I1213 13:34:35.853886 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe1746f-38e1-402b-9078-07776488c8e1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:34:35.854738 kubelet[2672]: I1213 13:34:35.854707 2672 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe1746f-38e1-402b-9078-07776488c8e1-kube-api-access-cqpf9" (OuterVolumeSpecName: "kube-api-access-cqpf9") pod "bfe1746f-38e1-402b-9078-07776488c8e1" (UID: "bfe1746f-38e1-402b-9078-07776488c8e1"). InnerVolumeSpecName "kube-api-access-cqpf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:34:35.856453 containerd[1483]: time="2024-12-13T13:34:35.856422392Z" level=info msg="RemoveContainer for \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\" returns successfully" Dec 13 13:34:35.856615 kubelet[2672]: I1213 13:34:35.856566 2672 scope.go:117] "RemoveContainer" containerID="51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087" Dec 13 13:34:35.857405 containerd[1483]: time="2024-12-13T13:34:35.857379731Z" level=info msg="RemoveContainer for \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\"" Dec 13 13:34:35.860391 containerd[1483]: time="2024-12-13T13:34:35.860362291Z" level=info msg="RemoveContainer for \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\" returns successfully" Dec 13 13:34:35.860542 kubelet[2672]: I1213 13:34:35.860504 2672 scope.go:117] "RemoveContainer" containerID="449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb" Dec 13 13:34:35.860685 containerd[1483]: time="2024-12-13T13:34:35.860654599Z" level=error msg="ContainerStatus for \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\": not found" Dec 13 13:34:35.860769 kubelet[2672]: E1213 13:34:35.860750 2672 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
an error occurred when try to find container \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\": not found" containerID="449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb" Dec 13 13:34:35.860825 kubelet[2672]: I1213 13:34:35.860793 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb"} err="failed to get container status \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"449d44838965af8622592e45c03552c76bf0bebf5fde493017fc83d509e3e5cb\": not found" Dec 13 13:34:35.860825 kubelet[2672]: I1213 13:34:35.860803 2672 scope.go:117] "RemoveContainer" containerID="31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829" Dec 13 13:34:35.860965 containerd[1483]: time="2024-12-13T13:34:35.860928833Z" level=error msg="ContainerStatus for \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\": not found" Dec 13 13:34:35.861036 kubelet[2672]: E1213 13:34:35.861024 2672 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\": not found" containerID="31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829" Dec 13 13:34:35.861065 kubelet[2672]: I1213 13:34:35.861043 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829"} err="failed to get container status \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"31dd43521de6125eeaa06075b710a95e31e930e68a287ef12912cbb1cb5cf829\": not found" Dec 13 13:34:35.861065 kubelet[2672]: I1213 13:34:35.861052 2672 scope.go:117] "RemoveContainer" containerID="176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358" Dec 13 13:34:35.861219 containerd[1483]: time="2024-12-13T13:34:35.861188619Z" level=error msg="ContainerStatus for \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\": not found" Dec 13 13:34:35.861306 kubelet[2672]: E1213 13:34:35.861288 2672 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\": not found" containerID="176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358" Dec 13 13:34:35.861448 kubelet[2672]: I1213 13:34:35.861311 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358"} err="failed to get container status \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\": rpc error: code = NotFound desc = an error occurred when try to find container \"176d237315713d5ba02665f2577bab3198ad1bcf96521fdf5e8d01bbe701f358\": not found" Dec 13 13:34:35.861448 kubelet[2672]: I1213 13:34:35.861321 2672 scope.go:117] "RemoveContainer" containerID="46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e" Dec 13 13:34:35.861499 containerd[1483]: time="2024-12-13T13:34:35.861461060Z" level=error msg="ContainerStatus for \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\": not found" Dec 13 13:34:35.861605 kubelet[2672]: E1213 13:34:35.861577 2672 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\": not found" containerID="46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e" Dec 13 13:34:35.861670 kubelet[2672]: I1213 13:34:35.861611 2672 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e"} err="failed to get container status \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\": rpc error: code = NotFound desc = an error occurred when try to find container \"46225d893cbef2fa12134893f22dd516797d86964058b71cda78e3fff133f17e\": not found" Dec 13 13:34:35.861670 kubelet[2672]: I1213 13:34:35.861629 2672 scope.go:117] "RemoveContainer" containerID="51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087" Dec 13 13:34:35.861801 containerd[1483]: time="2024-12-13T13:34:35.861771724Z" level=error msg="ContainerStatus for \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\": not found" Dec 13 13:34:35.861905 kubelet[2672]: E1213 13:34:35.861890 2672 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\": not found" containerID="51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087" Dec 13 13:34:35.861940 kubelet[2672]: I1213 13:34:35.861912 2672 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087"} err="failed to get container status \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\": rpc error: code = NotFound desc = an error occurred when try to find container \"51bd9e2430bbffd112634d4fa440bb7f40a7effc6fe3bf49bd5429385296e087\": not found" Dec 13 13:34:35.949206 kubelet[2672]: I1213 13:34:35.949176 2672 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 13:34:35.949206 kubelet[2672]: I1213 13:34:35.949202 2672 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cqpf9\" (UniqueName: \"kubernetes.io/projected/bfe1746f-38e1-402b-9078-07776488c8e1-kube-api-access-cqpf9\") on node \"localhost\" DevicePath \"\"" Dec 13 13:34:35.949285 kubelet[2672]: I1213 13:34:35.949212 2672 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfe1746f-38e1-402b-9078-07776488c8e1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:34:35.949285 kubelet[2672]: I1213 13:34:35.949223 2672 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 13:34:35.949285 kubelet[2672]: I1213 13:34:35.949232 2672 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bfe1746f-38e1-402b-9078-07776488c8e1-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 13:34:35.949285 kubelet[2672]: I1213 13:34:35.949241 2672 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c744140a-9647-4cca-9ad6-afda8fa04dc7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" 
Dec 13 13:34:35.949285 kubelet[2672]: I1213 13:34:35.949250 2672 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 13:34:35.949285 kubelet[2672]: I1213 13:34:35.949259 2672 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bfe1746f-38e1-402b-9078-07776488c8e1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 13:34:35.949285 kubelet[2672]: I1213 13:34:35.949269 2672 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bfe1746f-38e1-402b-9078-07776488c8e1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 13:34:35.949285 kubelet[2672]: I1213 13:34:35.949278 2672 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6wt86\" (UniqueName: \"kubernetes.io/projected/c744140a-9647-4cca-9ad6-afda8fa04dc7-kube-api-access-6wt86\") on node \"localhost\" DevicePath \"\"" Dec 13 13:34:36.129394 systemd[1]: Removed slice kubepods-besteffort-podc744140a_9647_4cca_9ad6_afda8fa04dc7.slice - libcontainer container kubepods-besteffort-podc744140a_9647_4cca_9ad6_afda8fa04dc7.slice. Dec 13 13:34:36.132918 systemd[1]: Removed slice kubepods-burstable-podbfe1746f_38e1_402b_9078_07776488c8e1.slice - libcontainer container kubepods-burstable-podbfe1746f_38e1_402b_9078_07776488c8e1.slice. Dec 13 13:34:36.133283 systemd[1]: kubepods-burstable-podbfe1746f_38e1_402b_9078_07776488c8e1.slice: Consumed 6.464s CPU time. Dec 13 13:34:36.559824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2-rootfs.mount: Deactivated successfully. 
Dec 13 13:34:36.559950 systemd[1]: var-lib-kubelet-pods-c744140a\x2d9647\x2d4cca\x2d9ad6\x2dafda8fa04dc7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6wt86.mount: Deactivated successfully. Dec 13 13:34:36.560042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40ffab38a786fad013c852b7246de4f0a8959ff192558f49a680ca1f41d561b2-shm.mount: Deactivated successfully. Dec 13 13:34:36.560120 systemd[1]: var-lib-kubelet-pods-bfe1746f\x2d38e1\x2d402b\x2d9078\x2d07776488c8e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqpf9.mount: Deactivated successfully. Dec 13 13:34:36.560201 systemd[1]: var-lib-kubelet-pods-bfe1746f\x2d38e1\x2d402b\x2d9078\x2d07776488c8e1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 13:34:36.560287 systemd[1]: var-lib-kubelet-pods-bfe1746f\x2d38e1\x2d402b\x2d9078\x2d07776488c8e1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 13:34:37.133575 kubelet[2672]: I1213 13:34:37.133531 2672 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bfe1746f-38e1-402b-9078-07776488c8e1" path="/var/lib/kubelet/pods/bfe1746f-38e1-402b-9078-07776488c8e1/volumes" Dec 13 13:34:37.134402 kubelet[2672]: I1213 13:34:37.134380 2672 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c744140a-9647-4cca-9ad6-afda8fa04dc7" path="/var/lib/kubelet/pods/c744140a-9647-4cca-9ad6-afda8fa04dc7/volumes" Dec 13 13:34:37.183913 kubelet[2672]: E1213 13:34:37.183891 2672 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:34:37.519482 sshd[4321]: Connection closed by 10.0.0.1 port 43842 Dec 13 13:34:37.519810 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:37.530893 systemd[1]: sshd@25-10.0.0.150:22-10.0.0.1:43842.service: Deactivated successfully. 
Dec 13 13:34:37.532846 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 13:34:37.534378 systemd-logind[1464]: Session 26 logged out. Waiting for processes to exit. Dec 13 13:34:37.539221 systemd[1]: Started sshd@26-10.0.0.150:22-10.0.0.1:40310.service - OpenSSH per-connection server daemon (10.0.0.1:40310). Dec 13 13:34:37.540038 systemd-logind[1464]: Removed session 26. Dec 13 13:34:37.571312 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 40310 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:34:37.572838 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:37.576622 systemd-logind[1464]: New session 27 of user core. Dec 13 13:34:37.586093 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 13:34:38.005039 sshd[4485]: Connection closed by 10.0.0.1 port 40310 Dec 13 13:34:38.007365 sshd-session[4483]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:38.016971 systemd[1]: sshd@26-10.0.0.150:22-10.0.0.1:40310.service: Deactivated successfully. Dec 13 13:34:38.018831 systemd[1]: session-27.scope: Deactivated successfully. 
Dec 13 13:34:38.021011 kubelet[2672]: I1213 13:34:38.020963 2672 topology_manager.go:215] "Topology Admit Handler" podUID="bcc00bb2-55b7-449a-9af6-dc3c9f0d415a" podNamespace="kube-system" podName="cilium-zbf5p" Dec 13 13:34:38.021083 kubelet[2672]: E1213 13:34:38.021057 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe1746f-38e1-402b-9078-07776488c8e1" containerName="mount-cgroup" Dec 13 13:34:38.021083 kubelet[2672]: E1213 13:34:38.021068 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c744140a-9647-4cca-9ad6-afda8fa04dc7" containerName="cilium-operator" Dec 13 13:34:38.021083 kubelet[2672]: E1213 13:34:38.021075 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe1746f-38e1-402b-9078-07776488c8e1" containerName="clean-cilium-state" Dec 13 13:34:38.021083 kubelet[2672]: E1213 13:34:38.021084 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe1746f-38e1-402b-9078-07776488c8e1" containerName="apply-sysctl-overwrites" Dec 13 13:34:38.021194 kubelet[2672]: E1213 13:34:38.021092 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe1746f-38e1-402b-9078-07776488c8e1" containerName="mount-bpf-fs" Dec 13 13:34:38.021194 kubelet[2672]: E1213 13:34:38.021099 2672 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfe1746f-38e1-402b-9078-07776488c8e1" containerName="cilium-agent" Dec 13 13:34:38.021194 kubelet[2672]: I1213 13:34:38.021118 2672 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe1746f-38e1-402b-9078-07776488c8e1" containerName="cilium-agent" Dec 13 13:34:38.021194 kubelet[2672]: I1213 13:34:38.021124 2672 memory_manager.go:354] "RemoveStaleState removing state" podUID="c744140a-9647-4cca-9ad6-afda8fa04dc7" containerName="cilium-operator" Dec 13 13:34:38.022155 systemd-logind[1464]: Session 27 logged out. Waiting for processes to exit. 
Dec 13 13:34:38.030266 systemd[1]: Started sshd@27-10.0.0.150:22-10.0.0.1:40314.service - OpenSSH per-connection server daemon (10.0.0.1:40314). Dec 13 13:34:38.034552 systemd-logind[1464]: Removed session 27. Dec 13 13:34:38.046804 systemd[1]: Created slice kubepods-burstable-podbcc00bb2_55b7_449a_9af6_dc3c9f0d415a.slice - libcontainer container kubepods-burstable-podbcc00bb2_55b7_449a_9af6_dc3c9f0d415a.slice. Dec 13 13:34:38.061018 kubelet[2672]: I1213 13:34:38.060980 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-bpf-maps\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061249 kubelet[2672]: I1213 13:34:38.061114 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-clustermesh-secrets\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061249 kubelet[2672]: I1213 13:34:38.061137 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-cilium-cgroup\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061249 kubelet[2672]: I1213 13:34:38.061154 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-lib-modules\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061249 kubelet[2672]: I1213 13:34:38.061172 2672 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-cilium-run\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061249 kubelet[2672]: I1213 13:34:38.061213 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-cilium-ipsec-secrets\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061249 kubelet[2672]: I1213 13:34:38.061244 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-hubble-tls\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061403 kubelet[2672]: I1213 13:34:38.061346 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-xtables-lock\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061403 kubelet[2672]: I1213 13:34:38.061390 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-host-proc-sys-net\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061446 kubelet[2672]: I1213 13:34:38.061420 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-host-proc-sys-kernel\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061446 kubelet[2672]: I1213 13:34:38.061440 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-etc-cni-netd\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061493 kubelet[2672]: I1213 13:34:38.061463 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-hostproc\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061493 kubelet[2672]: I1213 13:34:38.061481 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-cni-path\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061534 kubelet[2672]: I1213 13:34:38.061498 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxctg\" (UniqueName: \"kubernetes.io/projected/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-kube-api-access-hxctg\") pod \"cilium-zbf5p\" (UID: \"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.061534 kubelet[2672]: I1213 13:34:38.061531 2672 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcc00bb2-55b7-449a-9af6-dc3c9f0d415a-cilium-config-path\") pod \"cilium-zbf5p\" (UID: 
\"bcc00bb2-55b7-449a-9af6-dc3c9f0d415a\") " pod="kube-system/cilium-zbf5p" Dec 13 13:34:38.073757 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 40314 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:34:38.075126 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:38.078850 systemd-logind[1464]: New session 28 of user core. Dec 13 13:34:38.096105 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 13:34:38.145389 sshd[4498]: Connection closed by 10.0.0.1 port 40314 Dec 13 13:34:38.145876 sshd-session[4496]: pam_unix(sshd:session): session closed for user core Dec 13 13:34:38.154950 systemd[1]: sshd@27-10.0.0.150:22-10.0.0.1:40314.service: Deactivated successfully. Dec 13 13:34:38.156734 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 13:34:38.158294 systemd-logind[1464]: Session 28 logged out. Waiting for processes to exit. Dec 13 13:34:38.159594 systemd[1]: Started sshd@28-10.0.0.150:22-10.0.0.1:40322.service - OpenSSH per-connection server daemon (10.0.0.1:40322). Dec 13 13:34:38.160279 systemd-logind[1464]: Removed session 28. Dec 13 13:34:38.196448 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 40322 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:34:38.197726 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:34:38.201685 systemd-logind[1464]: New session 29 of user core. Dec 13 13:34:38.212099 systemd[1]: Started session-29.scope - Session 29 of User core. 
Dec 13 13:34:38.350692 kubelet[2672]: E1213 13:34:38.350648 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:38.351278 containerd[1483]: time="2024-12-13T13:34:38.351248338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbf5p,Uid:bcc00bb2-55b7-449a-9af6-dc3c9f0d415a,Namespace:kube-system,Attempt:0,}" Dec 13 13:34:38.371261 containerd[1483]: time="2024-12-13T13:34:38.371090039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:34:38.371261 containerd[1483]: time="2024-12-13T13:34:38.371241156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:34:38.371351 containerd[1483]: time="2024-12-13T13:34:38.371254171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:38.371377 containerd[1483]: time="2024-12-13T13:34:38.371333583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:34:38.389107 systemd[1]: Started cri-containerd-7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219.scope - libcontainer container 7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219. 
Dec 13 13:34:38.410457 containerd[1483]: time="2024-12-13T13:34:38.410417318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zbf5p,Uid:bcc00bb2-55b7-449a-9af6-dc3c9f0d415a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\"" Dec 13 13:34:38.411187 kubelet[2672]: E1213 13:34:38.411167 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:34:38.412933 containerd[1483]: time="2024-12-13T13:34:38.412892571Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:34:38.425775 containerd[1483]: time="2024-12-13T13:34:38.425732590Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"edfef4d41660351fa08a7c2b7cef5b6aff738022a8f6ec890b728f3a87fdee17\"" Dec 13 13:34:38.426127 containerd[1483]: time="2024-12-13T13:34:38.426105001Z" level=info msg="StartContainer for \"edfef4d41660351fa08a7c2b7cef5b6aff738022a8f6ec890b728f3a87fdee17\"" Dec 13 13:34:38.458117 systemd[1]: Started cri-containerd-edfef4d41660351fa08a7c2b7cef5b6aff738022a8f6ec890b728f3a87fdee17.scope - libcontainer container edfef4d41660351fa08a7c2b7cef5b6aff738022a8f6ec890b728f3a87fdee17. Dec 13 13:34:38.481677 containerd[1483]: time="2024-12-13T13:34:38.481631499Z" level=info msg="StartContainer for \"edfef4d41660351fa08a7c2b7cef5b6aff738022a8f6ec890b728f3a87fdee17\" returns successfully" Dec 13 13:34:38.490661 systemd[1]: cri-containerd-edfef4d41660351fa08a7c2b7cef5b6aff738022a8f6ec890b728f3a87fdee17.scope: Deactivated successfully. 
Dec 13 13:34:38.520946 containerd[1483]: time="2024-12-13T13:34:38.520882464Z" level=info msg="shim disconnected" id=edfef4d41660351fa08a7c2b7cef5b6aff738022a8f6ec890b728f3a87fdee17 namespace=k8s.io
Dec 13 13:34:38.520946 containerd[1483]: time="2024-12-13T13:34:38.520935004Z" level=warning msg="cleaning up after shim disconnected" id=edfef4d41660351fa08a7c2b7cef5b6aff738022a8f6ec890b728f3a87fdee17 namespace=k8s.io
Dec 13 13:34:38.520946 containerd[1483]: time="2024-12-13T13:34:38.520943861Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:34:38.834214 kubelet[2672]: E1213 13:34:38.834084 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:38.836530 containerd[1483]: time="2024-12-13T13:34:38.836390908Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 13:34:38.848724 containerd[1483]: time="2024-12-13T13:34:38.848675778Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"47f6fe0cd07a344e88f51a490aaf1ff48e466c703883b02feba6996d748ef00b\""
Dec 13 13:34:38.849157 containerd[1483]: time="2024-12-13T13:34:38.849088406Z" level=info msg="StartContainer for \"47f6fe0cd07a344e88f51a490aaf1ff48e466c703883b02feba6996d748ef00b\""
Dec 13 13:34:38.879109 systemd[1]: Started cri-containerd-47f6fe0cd07a344e88f51a490aaf1ff48e466c703883b02feba6996d748ef00b.scope - libcontainer container 47f6fe0cd07a344e88f51a490aaf1ff48e466c703883b02feba6996d748ef00b.
Dec 13 13:34:38.905275 containerd[1483]: time="2024-12-13T13:34:38.905217935Z" level=info msg="StartContainer for \"47f6fe0cd07a344e88f51a490aaf1ff48e466c703883b02feba6996d748ef00b\" returns successfully"
Dec 13 13:34:38.912407 systemd[1]: cri-containerd-47f6fe0cd07a344e88f51a490aaf1ff48e466c703883b02feba6996d748ef00b.scope: Deactivated successfully.
Dec 13 13:34:38.936044 containerd[1483]: time="2024-12-13T13:34:38.935966245Z" level=info msg="shim disconnected" id=47f6fe0cd07a344e88f51a490aaf1ff48e466c703883b02feba6996d748ef00b namespace=k8s.io
Dec 13 13:34:38.936044 containerd[1483]: time="2024-12-13T13:34:38.936037060Z" level=warning msg="cleaning up after shim disconnected" id=47f6fe0cd07a344e88f51a490aaf1ff48e466c703883b02feba6996d748ef00b namespace=k8s.io
Dec 13 13:34:38.936044 containerd[1483]: time="2024-12-13T13:34:38.936045726Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:34:39.100849 kubelet[2672]: I1213 13:34:39.100748 2672 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T13:34:39Z","lastTransitionTime":"2024-12-13T13:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 13:34:39.837437 kubelet[2672]: E1213 13:34:39.837395 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:39.839473 containerd[1483]: time="2024-12-13T13:34:39.839440983Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 13:34:39.855035 containerd[1483]: time="2024-12-13T13:34:39.854979649Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c\""
Dec 13 13:34:39.855476 containerd[1483]: time="2024-12-13T13:34:39.855437312Z" level=info msg="StartContainer for \"b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c\""
Dec 13 13:34:39.883117 systemd[1]: Started cri-containerd-b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c.scope - libcontainer container b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c.
Dec 13 13:34:39.911246 containerd[1483]: time="2024-12-13T13:34:39.910969667Z" level=info msg="StartContainer for \"b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c\" returns successfully"
Dec 13 13:34:39.913286 systemd[1]: cri-containerd-b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c.scope: Deactivated successfully.
Dec 13 13:34:39.935639 containerd[1483]: time="2024-12-13T13:34:39.935574985Z" level=info msg="shim disconnected" id=b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c namespace=k8s.io
Dec 13 13:34:39.935639 containerd[1483]: time="2024-12-13T13:34:39.935623959Z" level=warning msg="cleaning up after shim disconnected" id=b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c namespace=k8s.io
Dec 13 13:34:39.935639 containerd[1483]: time="2024-12-13T13:34:39.935632295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:34:40.168034 systemd[1]: run-containerd-runc-k8s.io-b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c-runc.0O8xyF.mount: Deactivated successfully.
Dec 13 13:34:40.168153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b65db4d05555ead38caf80d5722a24fa55f8d476b2a80afb986fb553419f029c-rootfs.mount: Deactivated successfully.
Dec 13 13:34:40.839891 kubelet[2672]: E1213 13:34:40.839865 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:40.842215 containerd[1483]: time="2024-12-13T13:34:40.842170651Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 13:34:40.856027 containerd[1483]: time="2024-12-13T13:34:40.855962304Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d\""
Dec 13 13:34:40.856450 containerd[1483]: time="2024-12-13T13:34:40.856402303Z" level=info msg="StartContainer for \"9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d\""
Dec 13 13:34:40.893213 systemd[1]: Started cri-containerd-9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d.scope - libcontainer container 9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d.
Dec 13 13:34:40.915378 systemd[1]: cri-containerd-9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d.scope: Deactivated successfully.
Dec 13 13:34:40.917618 containerd[1483]: time="2024-12-13T13:34:40.917564127Z" level=info msg="StartContainer for \"9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d\" returns successfully"
Dec 13 13:34:40.938107 containerd[1483]: time="2024-12-13T13:34:40.938042449Z" level=info msg="shim disconnected" id=9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d namespace=k8s.io
Dec 13 13:34:40.938107 containerd[1483]: time="2024-12-13T13:34:40.938092143Z" level=warning msg="cleaning up after shim disconnected" id=9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d namespace=k8s.io
Dec 13 13:34:40.938107 containerd[1483]: time="2024-12-13T13:34:40.938101521Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:34:41.167298 systemd[1]: run-containerd-runc-k8s.io-9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d-runc.6I2uPV.mount: Deactivated successfully.
Dec 13 13:34:41.167412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9418d26e2fa582945b971936ef116e8ae2c52466b57220608ac10fd20dc4818d-rootfs.mount: Deactivated successfully.
Dec 13 13:34:41.846797 kubelet[2672]: E1213 13:34:41.844476 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:41.850306 containerd[1483]: time="2024-12-13T13:34:41.850260045Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 13:34:41.863899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398039606.mount: Deactivated successfully.
Dec 13 13:34:41.865075 containerd[1483]: time="2024-12-13T13:34:41.865039210Z" level=info msg="CreateContainer within sandbox \"7caf3cb2e16df6bcd7131ff1212b4ed3f77e0ab95d2ca54bf57532be52b03219\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7761d2d5390e371b7661ac5a4da36740ddfff04b28200c83c0eca52c391a01ce\""
Dec 13 13:34:41.865496 containerd[1483]: time="2024-12-13T13:34:41.865465603Z" level=info msg="StartContainer for \"7761d2d5390e371b7661ac5a4da36740ddfff04b28200c83c0eca52c391a01ce\""
Dec 13 13:34:41.897111 systemd[1]: Started cri-containerd-7761d2d5390e371b7661ac5a4da36740ddfff04b28200c83c0eca52c391a01ce.scope - libcontainer container 7761d2d5390e371b7661ac5a4da36740ddfff04b28200c83c0eca52c391a01ce.
Dec 13 13:34:41.925051 containerd[1483]: time="2024-12-13T13:34:41.924979649Z" level=info msg="StartContainer for \"7761d2d5390e371b7661ac5a4da36740ddfff04b28200c83c0eca52c391a01ce\" returns successfully"
Dec 13 13:34:42.167263 systemd[1]: run-containerd-runc-k8s.io-7761d2d5390e371b7661ac5a4da36740ddfff04b28200c83c0eca52c391a01ce-runc.NP7KbN.mount: Deactivated successfully.
Dec 13 13:34:42.320020 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 13:34:42.860802 kubelet[2672]: E1213 13:34:42.860770 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:44.352963 kubelet[2672]: E1213 13:34:44.352930 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:45.266862 systemd-networkd[1414]: lxc_health: Link UP
Dec 13 13:34:45.275153 systemd-networkd[1414]: lxc_health: Gained carrier
Dec 13 13:34:46.353009 kubelet[2672]: E1213 13:34:46.352560 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:46.366421 kubelet[2672]: I1213 13:34:46.366376 2672 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zbf5p" podStartSLOduration=8.366341634 podStartE2EDuration="8.366341634s" podCreationTimestamp="2024-12-13 13:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:34:42.874479101 +0000 UTC m=+85.820022983" watchObservedRunningTime="2024-12-13 13:34:46.366341634 +0000 UTC m=+89.311885516"
Dec 13 13:34:46.707967 systemd-networkd[1414]: lxc_health: Gained IPv6LL
Dec 13 13:34:46.868101 kubelet[2672]: E1213 13:34:46.867969 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:47.869362 kubelet[2672]: E1213 13:34:47.869316 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:50.132227 kubelet[2672]: E1213 13:34:50.132190 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:34:50.849470 sshd[4512]: Connection closed by 10.0.0.1 port 40322
Dec 13 13:34:50.849931 sshd-session[4506]: pam_unix(sshd:session): session closed for user core
Dec 13 13:34:50.854200 systemd[1]: sshd@28-10.0.0.150:22-10.0.0.1:40322.service: Deactivated successfully.
Dec 13 13:34:50.856332 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 13:34:50.856962 systemd-logind[1464]: Session 29 logged out. Waiting for processes to exit.
Dec 13 13:34:50.857858 systemd-logind[1464]: Removed session 29.