Mar 21 12:35:20.968715 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 21 10:52:59 -00 2025
Mar 21 12:35:20.968740 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2
Mar 21 12:35:20.968752 kernel: BIOS-provided physical RAM map:
Mar 21 12:35:20.968759 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 21 12:35:20.968766 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 21 12:35:20.968772 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 21 12:35:20.968780 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 21 12:35:20.968787 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 21 12:35:20.968793 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 21 12:35:20.968800 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 21 12:35:20.968807 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 21 12:35:20.968865 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 21 12:35:20.968872 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 21 12:35:20.968879 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 21 12:35:20.968890 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 21 12:35:20.968898 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 21 12:35:20.968908 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 21 12:35:20.968915 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 21 12:35:20.968922 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 21 12:35:20.968929 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 21 12:35:20.968936 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 21 12:35:20.968943 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 21 12:35:20.968950 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 21 12:35:20.968957 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 21 12:35:20.968964 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 21 12:35:20.968971 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 21 12:35:20.968978 kernel: NX (Execute Disable) protection: active
Mar 21 12:35:20.968988 kernel: APIC: Static calls initialized
Mar 21 12:35:20.968996 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 21 12:35:20.969003 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 21 12:35:20.969010 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 21 12:35:20.969017 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 21 12:35:20.969024 kernel: extended physical RAM map:
Mar 21 12:35:20.969031 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 21 12:35:20.969038 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 21 12:35:20.969045 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 21 12:35:20.969052 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 21 12:35:20.969059 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 21 12:35:20.969066 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 21 12:35:20.969076 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 21 12:35:20.969087 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Mar 21 12:35:20.969094 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Mar 21 12:35:20.969104 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Mar 21 12:35:20.969111 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Mar 21 12:35:20.969119 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Mar 21 12:35:20.969130 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 21 12:35:20.969137 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 21 12:35:20.969144 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 21 12:35:20.969152 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 21 12:35:20.969169 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 21 12:35:20.969176 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 21 12:35:20.969184 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 21 12:35:20.969192 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 21 12:35:20.969200 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 21 12:35:20.969207 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 21 12:35:20.969218 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 21 12:35:20.969225 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 21 12:35:20.969233 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 21 12:35:20.969242 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 21 12:35:20.969250 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 21 12:35:20.969257 kernel: efi: EFI v2.7 by EDK II
Mar 21 12:35:20.969265 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Mar 21 12:35:20.969272 kernel: random: crng init done
Mar 21 12:35:20.969280 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 21 12:35:20.969287 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 21 12:35:20.969297 kernel: secureboot: Secure boot disabled
Mar 21 12:35:20.969306 kernel: SMBIOS 2.8 present.
Mar 21 12:35:20.969314 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 21 12:35:20.969321 kernel: Hypervisor detected: KVM
Mar 21 12:35:20.969329 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 21 12:35:20.969337 kernel: kvm-clock: using sched offset of 3757213606 cycles
Mar 21 12:35:20.969345 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 21 12:35:20.969353 kernel: tsc: Detected 2794.746 MHz processor
Mar 21 12:35:20.969360 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 21 12:35:20.969368 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 21 12:35:20.969376 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 21 12:35:20.969387 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 21 12:35:20.969395 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 21 12:35:20.969402 kernel: Using GB pages for direct mapping
Mar 21 12:35:20.969410 kernel: ACPI: Early table checksum verification disabled
Mar 21 12:35:20.969418 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 21 12:35:20.969425 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 21 12:35:20.969433 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:35:20.969442 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:35:20.969452 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 21 12:35:20.969466 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:35:20.969476 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:35:20.969486 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:35:20.969496 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:35:20.969507 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 21 12:35:20.969517 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 21 12:35:20.969524 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Mar 21 12:35:20.969532 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 21 12:35:20.969540 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 21 12:35:20.969550 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 21 12:35:20.969558 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 21 12:35:20.969566 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 21 12:35:20.969573 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 21 12:35:20.969581 kernel: No NUMA configuration found
Mar 21 12:35:20.969588 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 21 12:35:20.969596 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Mar 21 12:35:20.969604 kernel: Zone ranges:
Mar 21 12:35:20.969611 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 21 12:35:20.969622 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 21 12:35:20.969633 kernel: Normal empty
Mar 21 12:35:20.969641 kernel: Movable zone start for each node
Mar 21 12:35:20.969649 kernel: Early memory node ranges
Mar 21 12:35:20.969656 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 21 12:35:20.969664 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 21 12:35:20.969671 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 21 12:35:20.969679 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 21 12:35:20.969686 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 21 12:35:20.969694 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 21 12:35:20.969704 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Mar 21 12:35:20.969711 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Mar 21 12:35:20.969719 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 21 12:35:20.969727 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 21 12:35:20.969734 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 21 12:35:20.969750 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 21 12:35:20.969760 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 21 12:35:20.969768 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 21 12:35:20.969776 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 21 12:35:20.969796 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 21 12:35:20.969843 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 21 12:35:20.969861 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 21 12:35:20.969873 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 21 12:35:20.969881 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 21 12:35:20.969889 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 21 12:35:20.969897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 21 12:35:20.969908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 21 12:35:20.969916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 21 12:35:20.969924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 21 12:35:20.969932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 21 12:35:20.969945 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 21 12:35:20.969953 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 21 12:35:20.969961 kernel: TSC deadline timer available
Mar 21 12:35:20.969969 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 21 12:35:20.969977 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 21 12:35:20.969988 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 21 12:35:20.969996 kernel: kvm-guest: setup PV sched yield
Mar 21 12:35:20.970004 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 21 12:35:20.970012 kernel: Booting paravirtualized kernel on KVM
Mar 21 12:35:20.970020 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 21 12:35:20.970028 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 21 12:35:20.970036 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 21 12:35:20.970044 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 21 12:35:20.970051 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 21 12:35:20.970061 kernel: kvm-guest: PV spinlocks enabled
Mar 21 12:35:20.970070 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 21 12:35:20.970079 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2
Mar 21 12:35:20.970087 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 21 12:35:20.970098 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 21 12:35:20.970107 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 21 12:35:20.970114 kernel: Fallback order for Node 0: 0
Mar 21 12:35:20.970122 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Mar 21 12:35:20.970130 kernel: Policy zone: DMA32
Mar 21 12:35:20.970141 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 21 12:35:20.970149 kernel: Memory: 2385672K/2565800K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43588K init, 1476K bss, 179872K reserved, 0K cma-reserved)
Mar 21 12:35:20.970164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 21 12:35:20.970172 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 21 12:35:20.970180 kernel: ftrace: allocated 149 pages with 4 groups
Mar 21 12:35:20.970188 kernel: Dynamic Preempt: voluntary
Mar 21 12:35:20.970196 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 21 12:35:20.970205 kernel: rcu: RCU event tracing is enabled.
Mar 21 12:35:20.970213 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 21 12:35:20.970224 kernel: Trampoline variant of Tasks RCU enabled.
Mar 21 12:35:20.970232 kernel: Rude variant of Tasks RCU enabled.
Mar 21 12:35:20.970240 kernel: Tracing variant of Tasks RCU enabled.
Mar 21 12:35:20.970248 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 21 12:35:20.970256 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 21 12:35:20.970264 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 21 12:35:20.970272 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 21 12:35:20.970279 kernel: Console: colour dummy device 80x25
Mar 21 12:35:20.970287 kernel: printk: console [ttyS0] enabled
Mar 21 12:35:20.970298 kernel: ACPI: Core revision 20230628
Mar 21 12:35:20.970306 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 21 12:35:20.970314 kernel: APIC: Switch to symmetric I/O mode setup
Mar 21 12:35:20.970321 kernel: x2apic enabled
Mar 21 12:35:20.970329 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 21 12:35:20.970340 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 21 12:35:20.970348 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 21 12:35:20.970355 kernel: kvm-guest: setup PV IPIs
Mar 21 12:35:20.970363 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 21 12:35:20.970374 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 21 12:35:20.970382 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Mar 21 12:35:20.970390 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 21 12:35:20.970397 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 21 12:35:20.970405 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 21 12:35:20.970413 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 21 12:35:20.970421 kernel: Spectre V2 : Mitigation: Retpolines
Mar 21 12:35:20.970429 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 21 12:35:20.970437 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 21 12:35:20.970448 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 21 12:35:20.970455 kernel: RETBleed: Mitigation: untrained return thunk
Mar 21 12:35:20.970463 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 21 12:35:20.970471 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 21 12:35:20.970479 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 21 12:35:20.970490 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 21 12:35:20.970498 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 21 12:35:20.970507 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 21 12:35:20.970517 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 21 12:35:20.970525 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 21 12:35:20.970533 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 21 12:35:20.970541 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 21 12:35:20.970549 kernel: Freeing SMP alternatives memory: 32K
Mar 21 12:35:20.970557 kernel: pid_max: default: 32768 minimum: 301
Mar 21 12:35:20.970565 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 21 12:35:20.970572 kernel: landlock: Up and running.
Mar 21 12:35:20.970580 kernel: SELinux: Initializing.
Mar 21 12:35:20.970590 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 12:35:20.970598 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 12:35:20.970606 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 21 12:35:20.970614 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:35:20.970622 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:35:20.970630 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:35:20.970638 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 21 12:35:20.970646 kernel: ... version: 0
Mar 21 12:35:20.970654 kernel: ... bit width: 48
Mar 21 12:35:20.970664 kernel: ... generic registers: 6
Mar 21 12:35:20.970672 kernel: ... value mask: 0000ffffffffffff
Mar 21 12:35:20.970680 kernel: ... max period: 00007fffffffffff
Mar 21 12:35:20.970687 kernel: ... fixed-purpose events: 0
Mar 21 12:35:20.970695 kernel: ... event mask: 000000000000003f
Mar 21 12:35:20.970703 kernel: signal: max sigframe size: 1776
Mar 21 12:35:20.970711 kernel: rcu: Hierarchical SRCU implementation.
Mar 21 12:35:20.970728 kernel: rcu: Max phase no-delay instances is 400.
Mar 21 12:35:20.970746 kernel: smp: Bringing up secondary CPUs ...
Mar 21 12:35:20.970766 kernel: smpboot: x86: Booting SMP configuration:
Mar 21 12:35:20.970774 kernel: .... node #0, CPUs: #1 #2 #3
Mar 21 12:35:20.970782 kernel: smp: Brought up 1 node, 4 CPUs
Mar 21 12:35:20.970790 kernel: smpboot: Max logical packages: 1
Mar 21 12:35:20.970798 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Mar 21 12:35:20.970806 kernel: devtmpfs: initialized
Mar 21 12:35:20.970826 kernel: x86/mm: Memory block size: 128MB
Mar 21 12:35:20.970834 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 21 12:35:20.970847 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 21 12:35:20.970859 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 21 12:35:20.970867 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 21 12:35:20.970875 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Mar 21 12:35:20.970883 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 21 12:35:20.970891 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 21 12:35:20.970899 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 21 12:35:20.970907 kernel: pinctrl core: initialized pinctrl subsystem
Mar 21 12:35:20.970915 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 21 12:35:20.970923 kernel: audit: initializing netlink subsys (disabled)
Mar 21 12:35:20.970933 kernel: audit: type=2000 audit(1742560520.485:1): state=initialized audit_enabled=0 res=1
Mar 21 12:35:20.970941 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 21 12:35:20.970949 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 21 12:35:20.970957 kernel: cpuidle: using governor menu
Mar 21 12:35:20.970965 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 21 12:35:20.970973 kernel: dca service started, version 1.12.1
Mar 21 12:35:20.970981 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 21 12:35:20.970989 kernel: PCI: Using configuration type 1 for base access
Mar 21 12:35:20.970997 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 21 12:35:20.971007 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 21 12:35:20.971015 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 21 12:35:20.971023 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 21 12:35:20.971031 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 21 12:35:20.971039 kernel: ACPI: Added _OSI(Module Device)
Mar 21 12:35:20.971046 kernel: ACPI: Added _OSI(Processor Device)
Mar 21 12:35:20.971054 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 21 12:35:20.971062 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 21 12:35:20.971070 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 21 12:35:20.971080 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 21 12:35:20.971088 kernel: ACPI: Interpreter enabled
Mar 21 12:35:20.971096 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 21 12:35:20.971103 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 21 12:35:20.971111 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 21 12:35:20.971119 kernel: PCI: Using E820 reservations for host bridge windows
Mar 21 12:35:20.971127 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 21 12:35:20.971135 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 21 12:35:20.971371 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 21 12:35:20.971518 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 21 12:35:20.971658 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 21 12:35:20.971668 kernel: PCI host bridge to bus 0000:00
Mar 21 12:35:20.971841 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 21 12:35:20.971972 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 21 12:35:20.972096 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 21 12:35:20.972234 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 21 12:35:20.972356 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 21 12:35:20.972485 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 21 12:35:20.972608 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 21 12:35:20.972793 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 21 12:35:20.972975 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 21 12:35:20.973117 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 21 12:35:20.973275 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 21 12:35:20.973410 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 21 12:35:20.973540 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 21 12:35:20.973670 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 21 12:35:20.973832 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 21 12:35:20.973975 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 21 12:35:20.974116 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 21 12:35:20.974259 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 21 12:35:20.974424 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 21 12:35:20.974578 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 21 12:35:20.974744 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 21 12:35:20.974903 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 21 12:35:20.975055 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 21 12:35:20.975206 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 21 12:35:20.975337 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 21 12:35:20.975471 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 21 12:35:20.975622 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 21 12:35:20.975778 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 21 12:35:20.975933 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 21 12:35:20.976085 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 21 12:35:20.976260 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 21 12:35:20.976395 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 21 12:35:20.976547 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 21 12:35:20.976711 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 21 12:35:20.976725 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 21 12:35:20.976734 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 21 12:35:20.976742 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 21 12:35:20.976754 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 21 12:35:20.976762 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 21 12:35:20.976770 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 21 12:35:20.976778 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 21 12:35:20.976786 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 21 12:35:20.976794 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 21 12:35:20.976802 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 21 12:35:20.976826 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 21 12:35:20.976834 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 21 12:35:20.976845 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 21 12:35:20.976853 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 21 12:35:20.976861 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 21 12:35:20.976869 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 21 12:35:20.976876 kernel: iommu: Default domain type: Translated
Mar 21 12:35:20.976884 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 21 12:35:20.976892 kernel: efivars: Registered efivars operations
Mar 21 12:35:20.976900 kernel: PCI: Using ACPI for IRQ routing
Mar 21 12:35:20.976908 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 21 12:35:20.976919 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 21 12:35:20.976926 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 21 12:35:20.976934 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Mar 21 12:35:20.976942 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Mar 21 12:35:20.976950 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 21 12:35:20.976958 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 21 12:35:20.976966 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Mar 21 12:35:20.976974 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 21 12:35:20.977112 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 21 12:35:20.977262 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 21 12:35:20.977395 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 21 12:35:20.977406 kernel: vgaarb: loaded
Mar 21 12:35:20.977415 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 21 12:35:20.977423 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 21 12:35:20.977431 kernel: clocksource: Switched to clocksource kvm-clock
Mar 21 12:35:20.977439 kernel: VFS: Disk quotas dquot_6.6.0
Mar 21 12:35:20.977447 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 21 12:35:20.977459 kernel: pnp: PnP ACPI init
Mar 21 12:35:20.977661 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 21 12:35:20.977675 kernel: pnp: PnP ACPI: found 6 devices
Mar 21 12:35:20.977683 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 21 12:35:20.977691 kernel: NET: Registered PF_INET protocol family
Mar 21 12:35:20.977721 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 21 12:35:20.977732 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 21 12:35:20.977741 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 21 12:35:20.977751 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 21 12:35:20.977760 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 21 12:35:20.977768 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 21 12:35:20.977776 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:35:20.977784 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:35:20.977793 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 21 12:35:20.977802 kernel: NET: Registered PF_XDP protocol family
Mar 21 12:35:20.978037 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 21 12:35:20.978183 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 21 12:35:20.978312 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 21 12:35:20.978432 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 21 12:35:20.978551 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 21 12:35:20.978670 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 21 12:35:20.978788 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 21 12:35:20.978924 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 21 12:35:20.978936 kernel: PCI: CLS 0 bytes, default 64
Mar 21 12:35:20.978945 kernel: Initialise system trusted keyrings
Mar 21 12:35:20.978958 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 21 12:35:20.978966 kernel: Key type asymmetric registered
Mar 21 12:35:20.978975 kernel: Asymmetric key parser 'x509' registered
Mar 21 12:35:20.978983 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 21 12:35:20.978991 kernel: io scheduler mq-deadline registered
Mar 21 12:35:20.979000 kernel: io scheduler kyber registered
Mar 21 12:35:20.979008 kernel: io scheduler bfq registered
Mar 21 12:35:20.979016 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 21 12:35:20.979025 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 21 12:35:20.979037 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 21 12:35:20.979050 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 21 12:35:20.979059 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 21 12:35:20.979067 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 21 12:35:20.979076 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 21 12:35:20.979087 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 21 12:35:20.979095 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 21 12:35:20.979250 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 21 12:35:20.979263 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 21 12:35:20.979385 kernel: rtc_cmos 00:04: registered as rtc0
Mar 21 12:35:20.979507 kernel: rtc_cmos 00:04: setting system clock to 2025-03-21T12:35:20 UTC (1742560520)
Mar 21 12:35:20.979631 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 21 12:35:20.979643 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 21 12:35:20.979656 kernel: efifb: probing for efifb
Mar 21 12:35:20.979664 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 21 12:35:20.979672 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 21 12:35:20.979681 kernel: efifb: scrolling: redraw
Mar 21 12:35:20.979689 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 21 12:35:20.979697 kernel: Console: switching to colour frame buffer device 160x50
Mar 21 12:35:20.979705 kernel: fb0: EFI VGA frame buffer device
Mar 21 12:35:20.979714 kernel: pstore: Using crash dump compression: deflate
Mar 21 12:35:20.979722 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 21 12:35:20.979733 kernel: NET: Registered PF_INET6 protocol family
Mar 21 12:35:20.979742 kernel: Segment Routing with IPv6
Mar 21 12:35:20.979750 kernel: In-situ OAM (IOAM) with IPv6
Mar 21 12:35:20.979758 kernel: NET: Registered PF_PACKET protocol family
Mar 21 12:35:20.979766 kernel: Key type dns_resolver registered
Mar 21 12:35:20.979774 kernel: IPI shorthand broadcast: enabled
Mar 21 12:35:20.979783 kernel: sched_clock: Marking stable (1037002967, 166453605)->(1312586897, -109130325)
Mar 21 12:35:20.979792 kernel: registered taskstats version 1
Mar 21 12:35:20.979800 kernel: Loading compiled-in X.509 certificates
Mar 21 12:35:20.979821 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: d76f2258ffed89096a9428010e5ac0a0babcea9e'
Mar 21 12:35:20.979833 kernel: Key type .fscrypt registered
Mar 21 12:35:20.979841 kernel: Key type fscrypt-provisioning registered
Mar 21 12:35:20.979850 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 21 12:35:20.979858 kernel: ima: Allocated hash algorithm: sha1 Mar 21 12:35:20.979866 kernel: ima: No architecture policies found Mar 21 12:35:20.979875 kernel: clk: Disabling unused clocks Mar 21 12:35:20.979883 kernel: Freeing unused kernel image (initmem) memory: 43588K Mar 21 12:35:20.979892 kernel: Write protecting the kernel read-only data: 40960k Mar 21 12:35:20.979903 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 21 12:35:20.979911 kernel: Run /init as init process Mar 21 12:35:20.979920 kernel: with arguments: Mar 21 12:35:20.979928 kernel: /init Mar 21 12:35:20.979936 kernel: with environment: Mar 21 12:35:20.979945 kernel: HOME=/ Mar 21 12:35:20.979953 kernel: TERM=linux Mar 21 12:35:20.979961 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 21 12:35:20.979970 systemd[1]: Successfully made /usr/ read-only. Mar 21 12:35:20.979984 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 21 12:35:20.979994 systemd[1]: Detected virtualization kvm. Mar 21 12:35:20.980003 systemd[1]: Detected architecture x86-64. Mar 21 12:35:20.980011 systemd[1]: Running in initrd. Mar 21 12:35:20.980020 systemd[1]: No hostname configured, using default hostname. Mar 21 12:35:20.980029 systemd[1]: Hostname set to . Mar 21 12:35:20.980037 systemd[1]: Initializing machine ID from VM UUID. Mar 21 12:35:20.980049 systemd[1]: Queued start job for default target initrd.target. Mar 21 12:35:20.980058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 21 12:35:20.980067 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 21 12:35:20.980076 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 21 12:35:20.980085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 21 12:35:20.980094 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 21 12:35:20.980105 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 21 12:35:20.980118 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 21 12:35:20.980127 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 21 12:35:20.980135 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 21 12:35:20.980144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 21 12:35:20.980160 systemd[1]: Reached target paths.target - Path Units. Mar 21 12:35:20.980169 systemd[1]: Reached target slices.target - Slice Units. Mar 21 12:35:20.980178 systemd[1]: Reached target swap.target - Swaps. Mar 21 12:35:20.980187 systemd[1]: Reached target timers.target - Timer Units. Mar 21 12:35:20.980196 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 21 12:35:20.980207 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 21 12:35:20.980216 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 21 12:35:20.980225 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 21 12:35:20.980234 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 21 12:35:20.980243 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 21 12:35:20.980252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 21 12:35:20.980260 systemd[1]: Reached target sockets.target - Socket Units. Mar 21 12:35:20.980269 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 21 12:35:20.980281 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 21 12:35:20.980290 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 21 12:35:20.980298 systemd[1]: Starting systemd-fsck-usr.service... Mar 21 12:35:20.980307 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 21 12:35:20.980316 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 21 12:35:20.980325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:35:20.980334 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 21 12:35:20.980343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 21 12:35:20.980354 systemd[1]: Finished systemd-fsck-usr.service. Mar 21 12:35:20.980364 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 21 12:35:20.980398 systemd-journald[192]: Collecting audit messages is disabled. Mar 21 12:35:20.980422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:35:20.980431 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 21 12:35:20.980441 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 21 12:35:20.980450 systemd-journald[192]: Journal started Mar 21 12:35:20.980472 systemd-journald[192]: Runtime Journal (/run/log/journal/8d03ac7777f34971abcdd31d22c45cea) is 6M, max 48.2M, 42.2M free. Mar 21 12:35:20.968679 systemd-modules-load[193]: Inserted module 'overlay' Mar 21 12:35:20.999844 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 21 12:35:20.999908 systemd[1]: Started systemd-journald.service - Journal Service. Mar 21 12:35:21.002855 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 21 12:35:21.003001 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 21 12:35:21.006167 systemd-modules-load[193]: Inserted module 'br_netfilter' Mar 21 12:35:21.007242 kernel: Bridge firewalling registered Mar 21 12:35:21.007495 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 21 12:35:21.009439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 21 12:35:21.011170 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 21 12:35:21.014150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 21 12:35:21.019594 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 21 12:35:21.038855 dracut-cmdline[226]: dracut-dracut-053 Mar 21 12:35:21.094170 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2 Mar 21 12:35:21.099353 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 21 12:35:21.102276 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 21 12:35:21.107363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 21 12:35:21.152876 kernel: SCSI subsystem initialized Mar 21 12:35:21.159993 systemd-resolved[288]: Positive Trust Anchors: Mar 21 12:35:21.160014 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 21 12:35:21.160046 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 21 12:35:21.175289 kernel: Loading iSCSI transport class v2.0-870. Mar 21 12:35:21.177078 systemd-resolved[288]: Defaulting to hostname 'linux'. Mar 21 12:35:21.179731 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 21 12:35:21.181023 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 21 12:35:21.192858 kernel: iscsi: registered transport (tcp) Mar 21 12:35:21.218312 kernel: iscsi: registered transport (qla4xxx) Mar 21 12:35:21.218423 kernel: QLogic iSCSI HBA Driver Mar 21 12:35:21.277744 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 21 12:35:21.280892 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 21 12:35:21.328247 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 21 12:35:21.328345 kernel: device-mapper: uevent: version 1.0.3 Mar 21 12:35:21.329315 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 21 12:35:21.371887 kernel: raid6: avx2x4 gen() 22098 MB/s Mar 21 12:35:21.388872 kernel: raid6: avx2x2 gen() 25656 MB/s Mar 21 12:35:21.406118 kernel: raid6: avx2x1 gen() 23009 MB/s Mar 21 12:35:21.406221 kernel: raid6: using algorithm avx2x2 gen() 25656 MB/s Mar 21 12:35:21.424173 kernel: raid6: .... xor() 19464 MB/s, rmw enabled Mar 21 12:35:21.424276 kernel: raid6: using avx2x2 recovery algorithm Mar 21 12:35:21.444860 kernel: xor: automatically using best checksumming function avx Mar 21 12:35:21.602860 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 21 12:35:21.619170 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 21 12:35:21.623391 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 21 12:35:21.651062 systemd-udevd[415]: Using default interface naming scheme 'v255'. Mar 21 12:35:21.657112 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 21 12:35:21.661344 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 21 12:35:21.687910 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Mar 21 12:35:21.725072 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 21 12:35:21.729131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 21 12:35:21.814008 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 21 12:35:21.818309 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 21 12:35:21.842948 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 21 12:35:21.848669 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 21 12:35:21.851794 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 21 12:35:21.853088 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 21 12:35:21.857929 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 21 12:35:21.864695 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 21 12:35:21.880452 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 21 12:35:21.881010 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 21 12:35:21.881024 kernel: GPT:9289727 != 19775487 Mar 21 12:35:21.881035 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 21 12:35:21.881045 kernel: GPT:9289727 != 19775487 Mar 21 12:35:21.881055 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 21 12:35:21.881065 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:35:21.881076 kernel: cryptd: max_cpu_qlen set to 1000 Mar 21 12:35:21.884838 kernel: libata version 3.00 loaded. Mar 21 12:35:21.887077 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 21 12:35:21.895992 kernel: AVX2 version of gcm_enc/dec engaged. Mar 21 12:35:21.896044 kernel: AES CTR mode by8 optimization enabled Mar 21 12:35:21.896238 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 21 12:35:21.897467 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 21 12:35:21.900694 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 21 12:35:21.905402 kernel: ahci 0000:00:1f.2: version 3.0 Mar 21 12:35:21.925781 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 21 12:35:21.925802 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 21 12:35:21.925989 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 21 12:35:21.926157 kernel: scsi host0: ahci Mar 21 12:35:21.926335 kernel: scsi host1: ahci Mar 21 12:35:21.926491 kernel: scsi host2: ahci Mar 21 12:35:21.926667 kernel: scsi host3: ahci Mar 21 12:35:21.926930 kernel: scsi host4: ahci Mar 21 12:35:21.927092 kernel: scsi host5: ahci Mar 21 12:35:21.927269 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 21 12:35:21.927281 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 21 12:35:21.927292 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (467) Mar 21 12:35:21.927303 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 21 12:35:21.927314 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 21 12:35:21.927324 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 21 12:35:21.927335 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 21 12:35:21.927346 kernel: BTRFS: device fsid c99b4410-5d95-4377-8189-88a588aa2514 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (478) Mar 21 12:35:21.904231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 21 12:35:21.904434 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:35:21.908339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:35:21.915207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:35:21.951395 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 21 12:35:21.967899 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 21 12:35:21.985210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 21 12:35:21.994176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 21 12:35:21.994257 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 21 12:35:21.995497 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 21 12:35:22.000735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 21 12:35:22.000797 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:35:22.003226 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:35:22.010583 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:35:22.012271 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 21 12:35:22.020228 disk-uuid[558]: Primary Header is updated. Mar 21 12:35:22.020228 disk-uuid[558]: Secondary Entries is updated. Mar 21 12:35:22.020228 disk-uuid[558]: Secondary Header is updated. Mar 21 12:35:22.024839 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:35:22.028931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:35:22.029160 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:35:22.030450 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 21 12:35:22.062370 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 21 12:35:22.231865 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 21 12:35:22.231954 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 21 12:35:22.238832 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 21 12:35:22.238855 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 21 12:35:22.239847 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 21 12:35:22.240851 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 21 12:35:22.241848 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 21 12:35:22.241865 kernel: ata3.00: applying bridge limits Mar 21 12:35:22.243070 kernel: ata3.00: configured for UDMA/100 Mar 21 12:35:22.243854 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 21 12:35:22.309867 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 21 12:35:22.328780 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 21 12:35:22.328800 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 21 12:35:23.030872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:35:23.031304 disk-uuid[560]: The operation has completed successfully. Mar 21 12:35:23.072675 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 21 12:35:23.072829 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 21 12:35:23.097167 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 21 12:35:23.117507 sh[598]: Success Mar 21 12:35:23.131849 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 21 12:35:23.170925 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 21 12:35:23.175495 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 21 12:35:23.189645 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 21 12:35:23.197443 kernel: BTRFS info (device dm-0): first mount of filesystem c99b4410-5d95-4377-8189-88a588aa2514 Mar 21 12:35:23.197473 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 21 12:35:23.197485 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 21 12:35:23.199427 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 21 12:35:23.199444 kernel: BTRFS info (device dm-0): using free space tree Mar 21 12:35:23.204371 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 21 12:35:23.205122 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 21 12:35:23.208027 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 21 12:35:23.208778 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 21 12:35:23.249370 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46 Mar 21 12:35:23.249402 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 21 12:35:23.249417 kernel: BTRFS info (device vda6): using free space tree Mar 21 12:35:23.252848 kernel: BTRFS info (device vda6): auto enabling async discard Mar 21 12:35:23.256902 kernel: BTRFS info (device vda6): last unmount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46 Mar 21 12:35:23.263274 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 21 12:35:23.266548 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 21 12:35:23.452128 ignition[691]: Ignition 2.20.0 Mar 21 12:35:23.453244 ignition[691]: Stage: fetch-offline Mar 21 12:35:23.453325 ignition[691]: no configs at "/usr/lib/ignition/base.d" Mar 21 12:35:23.453341 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:35:23.456434 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 21 12:35:23.453510 ignition[691]: parsed url from cmdline: "" Mar 21 12:35:23.453516 ignition[691]: no config URL provided Mar 21 12:35:23.453525 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Mar 21 12:35:23.460653 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 21 12:35:23.453539 ignition[691]: no config at "/usr/lib/ignition/user.ign" Mar 21 12:35:23.453582 ignition[691]: op(1): [started] loading QEMU firmware config module Mar 21 12:35:23.453590 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 21 12:35:23.462774 ignition[691]: op(1): [finished] loading QEMU firmware config module Mar 21 12:35:23.504067 ignition[691]: parsing config with SHA512: d576c85e131f60f2eb8d4dbc9d5a5084cd0dee4d76555386f4f2cfeffa68b769c5412b61500651a972ab68caca7008bb6f1a1db0a399ed41546a45e90d78b875 Mar 21 12:35:23.507712 systemd-networkd[786]: lo: Link UP Mar 21 12:35:23.507724 systemd-networkd[786]: lo: Gained carrier Mar 21 12:35:23.509561 systemd-networkd[786]: Enumeration completed Mar 21 12:35:23.509938 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 12:35:23.509943 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 21 12:35:23.510350 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 21 12:35:23.515414 ignition[691]: fetch-offline: fetch-offline passed Mar 21 12:35:23.511006 systemd-networkd[786]: eth0: Link UP Mar 21 12:35:23.515557 ignition[691]: Ignition finished successfully Mar 21 12:35:23.511010 systemd-networkd[786]: eth0: Gained carrier Mar 21 12:35:23.511018 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 12:35:23.512192 systemd[1]: Reached target network.target - Network. Mar 21 12:35:23.513411 unknown[691]: fetched base config from "system" Mar 21 12:35:23.513424 unknown[691]: fetched user config from "qemu" Mar 21 12:35:23.518489 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 21 12:35:23.520551 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 21 12:35:23.521509 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 21 12:35:23.526863 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 21 12:35:23.692703 ignition[790]: Ignition 2.20.0 Mar 21 12:35:23.692717 ignition[790]: Stage: kargs Mar 21 12:35:23.692950 ignition[790]: no configs at "/usr/lib/ignition/base.d" Mar 21 12:35:23.692967 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:35:23.694078 ignition[790]: kargs: kargs passed Mar 21 12:35:23.694150 ignition[790]: Ignition finished successfully Mar 21 12:35:23.697603 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 21 12:35:23.700424 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 21 12:35:23.727462 ignition[800]: Ignition 2.20.0 Mar 21 12:35:23.727473 ignition[800]: Stage: disks Mar 21 12:35:23.727625 ignition[800]: no configs at "/usr/lib/ignition/base.d" Mar 21 12:35:23.727637 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:35:23.730516 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 21 12:35:23.728440 ignition[800]: disks: disks passed Mar 21 12:35:23.732369 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 21 12:35:23.728487 ignition[800]: Ignition finished successfully Mar 21 12:35:23.734314 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 21 12:35:23.736318 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 21 12:35:23.738489 systemd[1]: Reached target sysinit.target - System Initialization. Mar 21 12:35:23.739656 systemd[1]: Reached target basic.target - Basic System. Mar 21 12:35:23.742733 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 21 12:35:23.795674 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 21 12:35:23.906547 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 21 12:35:23.909401 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 21 12:35:24.017860 kernel: EXT4-fs (vda9): mounted filesystem c540419e-275b-4bd7-8ebd-24b19ec75c0b r/w with ordered data mode. Quota mode: none. Mar 21 12:35:24.018660 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 21 12:35:24.020410 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 21 12:35:24.023006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 21 12:35:24.024898 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 21 12:35:24.026253 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Mar 21 12:35:24.026299 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 21 12:35:24.026325 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 21 12:35:24.042513 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 21 12:35:24.043905 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 21 12:35:24.047863 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (818) Mar 21 12:35:24.049906 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46 Mar 21 12:35:24.049929 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 21 12:35:24.049939 kernel: BTRFS info (device vda6): using free space tree Mar 21 12:35:24.053842 kernel: BTRFS info (device vda6): auto enabling async discard Mar 21 12:35:24.055341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 21 12:35:24.080101 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Mar 21 12:35:24.085140 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Mar 21 12:35:24.090050 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Mar 21 12:35:24.094824 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Mar 21 12:35:24.183412 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 21 12:35:24.186598 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 21 12:35:24.189272 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 21 12:35:24.207429 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 21 12:35:24.208902 kernel: BTRFS info (device vda6): last unmount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46 Mar 21 12:35:24.220594 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 21 12:35:24.295388 ignition[935]: INFO : Ignition 2.20.0 Mar 21 12:35:24.295388 ignition[935]: INFO : Stage: mount Mar 21 12:35:24.297185 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 21 12:35:24.297185 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:35:24.297185 ignition[935]: INFO : mount: mount passed Mar 21 12:35:24.297185 ignition[935]: INFO : Ignition finished successfully Mar 21 12:35:24.302879 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 21 12:35:24.305296 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 21 12:35:24.324795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 21 12:35:24.343844 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (944) Mar 21 12:35:24.345993 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46 Mar 21 12:35:24.346018 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 21 12:35:24.346030 kernel: BTRFS info (device vda6): using free space tree Mar 21 12:35:24.348879 kernel: BTRFS info (device vda6): auto enabling async discard Mar 21 12:35:24.350741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 21 12:35:24.388108 ignition[961]: INFO : Ignition 2.20.0
Mar 21 12:35:24.388108 ignition[961]: INFO : Stage: files
Mar 21 12:35:24.388108 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 12:35:24.388108 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:35:24.392643 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Mar 21 12:35:24.392643 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 21 12:35:24.392643 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 21 12:35:24.396878 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 21 12:35:24.398474 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 21 12:35:24.400417 unknown[961]: wrote ssh authorized keys file for user: core
Mar 21 12:35:24.401589 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 21 12:35:24.406945 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 21 12:35:24.409267 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 21 12:35:24.467536 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 21 12:35:24.582261 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 21 12:35:24.582261 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 21 12:35:24.586079 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 21 12:35:24.663985 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 21 12:35:25.068757 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 21 12:35:25.297510 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 21 12:35:25.297510 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 21 12:35:25.301895 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 21 12:35:25.579333 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 21 12:35:26.104195 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 21 12:35:26.104195 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 21 12:35:26.108014 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 21 12:35:26.108014 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 21 12:35:26.108014 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 21 12:35:26.108014 ignition[961]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 21 12:35:26.108014 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 21 12:35:26.108014 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 21 12:35:26.108014 ignition[961]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 21 12:35:26.108014 ignition[961]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 21 12:35:26.128076 ignition[961]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 21 12:35:26.132018 ignition[961]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 21 12:35:26.133855 ignition[961]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 21 12:35:26.133855 ignition[961]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 21 12:35:26.136655 ignition[961]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 21 12:35:26.138529 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 21 12:35:26.140690 ignition[961]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 21 12:35:26.142852 ignition[961]: INFO : files: files passed
Mar 21 12:35:26.142852 ignition[961]: INFO : Ignition finished successfully
Mar 21 12:35:26.147759 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 21 12:35:26.150369 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 21 12:35:26.151183 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 21 12:35:26.165899 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 21 12:35:26.166056 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 21 12:35:26.169449 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 21 12:35:26.170904 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 12:35:26.170904 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 12:35:26.175208 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 12:35:26.172645 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 21 12:35:26.175459 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 21 12:35:26.178708 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 21 12:35:26.226589 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 21 12:35:26.226718 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 21 12:35:26.229118 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 21 12:35:26.230182 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 21 12:35:26.233066 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 21 12:35:26.235553 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 21 12:35:26.267714 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 21 12:35:26.270227 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 21 12:35:26.308624 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:35:26.309934 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:35:26.312267 systemd[1]: Stopped target timers.target - Timer Units.
Mar 21 12:35:26.314271 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 21 12:35:26.314379 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 21 12:35:26.315459 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 21 12:35:26.315791 systemd[1]: Stopped target basic.target - Basic System.
Mar 21 12:35:26.316329 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 21 12:35:26.316642 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 21 12:35:26.317150 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 21 12:35:26.317470 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 21 12:35:26.317789 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 21 12:35:26.318357 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 21 12:35:26.318676 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 21 12:35:26.319161 systemd[1]: Stopped target swap.target - Swaps.
Mar 21 12:35:26.319474 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 21 12:35:26.319583 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 21 12:35:26.338970 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:35:26.339124 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:35:26.341137 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 21 12:35:26.343289 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:35:26.345382 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 21 12:35:26.345497 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 21 12:35:26.348618 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 21 12:35:26.348740 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 21 12:35:26.349741 systemd[1]: Stopped target paths.target - Path Units.
Mar 21 12:35:26.350146 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 21 12:35:26.356884 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:35:26.357055 systemd[1]: Stopped target slices.target - Slice Units.
Mar 21 12:35:26.360468 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 21 12:35:26.361388 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 21 12:35:26.361480 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 21 12:35:26.363953 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 21 12:35:26.364049 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 21 12:35:26.365731 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 21 12:35:26.365859 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 21 12:35:26.366642 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 21 12:35:26.366748 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 21 12:35:26.372564 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 21 12:35:26.374106 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 21 12:35:26.375992 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 21 12:35:26.376119 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 12:35:26.377164 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 21 12:35:26.377266 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 21 12:35:26.392062 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 21 12:35:26.392173 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 21 12:35:26.414173 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 21 12:35:26.428563 ignition[1017]: INFO : Ignition 2.20.0
Mar 21 12:35:26.428563 ignition[1017]: INFO : Stage: umount
Mar 21 12:35:26.430466 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 12:35:26.430466 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:35:26.430466 ignition[1017]: INFO : umount: umount passed
Mar 21 12:35:26.430466 ignition[1017]: INFO : Ignition finished successfully
Mar 21 12:35:26.432724 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 21 12:35:26.432870 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 21 12:35:26.434858 systemd[1]: Stopped target network.target - Network.
Mar 21 12:35:26.436432 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 21 12:35:26.436492 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 21 12:35:26.436596 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 21 12:35:26.436643 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 21 12:35:26.437210 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 21 12:35:26.437260 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 21 12:35:26.437536 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 21 12:35:26.437583 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 21 12:35:26.438141 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 21 12:35:26.438411 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 21 12:35:26.445885 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 21 12:35:26.446028 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 21 12:35:26.453237 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 21 12:35:26.453603 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 21 12:35:26.453762 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 21 12:35:26.457418 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 21 12:35:26.458367 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 21 12:35:26.458445 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:35:26.460693 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 21 12:35:26.461636 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 21 12:35:26.461700 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 21 12:35:26.464052 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 21 12:35:26.464105 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:35:26.466575 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 21 12:35:26.466628 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:35:26.468550 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 21 12:35:26.468601 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:35:26.470950 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:35:26.474199 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 21 12:35:26.474271 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 21 12:35:26.484886 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 21 12:35:26.485019 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 21 12:35:26.491643 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 21 12:35:26.491844 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:35:26.494166 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 21 12:35:26.494219 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:35:26.495833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 21 12:35:26.495876 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:35:26.498120 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 21 12:35:26.498171 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 12:35:26.500275 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 21 12:35:26.500324 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 21 12:35:26.502327 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 21 12:35:26.502378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:35:26.505333 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 21 12:35:26.506532 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 21 12:35:26.506590 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:35:26.508860 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 21 12:35:26.508910 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:35:26.510985 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 21 12:35:26.511043 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:35:26.513184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:35:26.513233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:35:26.517312 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 21 12:35:26.517381 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 21 12:35:26.524601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 21 12:35:26.524722 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 21 12:35:26.590617 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 21 12:35:26.590776 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 21 12:35:26.592006 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 21 12:35:26.592261 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 21 12:35:26.592319 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 21 12:35:26.593468 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 21 12:35:26.623915 systemd[1]: Switching root.
Mar 21 12:35:26.658552 systemd-journald[192]: Journal stopped
Mar 21 12:35:27.929995 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Mar 21 12:35:27.930091 kernel: SELinux: policy capability network_peer_controls=1
Mar 21 12:35:27.930107 kernel: SELinux: policy capability open_perms=1
Mar 21 12:35:27.930119 kernel: SELinux: policy capability extended_socket_class=1
Mar 21 12:35:27.930131 kernel: SELinux: policy capability always_check_network=0
Mar 21 12:35:27.930144 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 21 12:35:27.930171 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 21 12:35:27.930183 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 21 12:35:27.930201 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 21 12:35:27.930214 kernel: audit: type=1403 audit(1742560527.062:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 21 12:35:27.930227 systemd[1]: Successfully loaded SELinux policy in 46.899ms.
Mar 21 12:35:27.930255 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.967ms.
Mar 21 12:35:27.930269 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 12:35:27.930287 systemd[1]: Detected virtualization kvm.
Mar 21 12:35:27.930301 systemd[1]: Detected architecture x86-64.
Mar 21 12:35:27.930316 systemd[1]: Detected first boot.
Mar 21 12:35:27.930329 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 12:35:27.930342 zram_generator::config[1064]: No configuration found.
Mar 21 12:35:27.930356 kernel: Guest personality initialized and is inactive
Mar 21 12:35:27.930368 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 21 12:35:27.930381 kernel: Initialized host personality
Mar 21 12:35:27.930393 kernel: NET: Registered PF_VSOCK protocol family
Mar 21 12:35:27.930405 systemd[1]: Populated /etc with preset unit settings.
Mar 21 12:35:27.930422 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 21 12:35:27.930435 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 21 12:35:27.930448 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 21 12:35:27.930461 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 21 12:35:27.930475 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 21 12:35:27.930489 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 21 12:35:27.930502 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 21 12:35:27.930514 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 21 12:35:27.930527 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 21 12:35:27.930544 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 21 12:35:27.930559 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 21 12:35:27.930572 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 21 12:35:27.930585 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:35:27.930598 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:35:27.930611 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 21 12:35:27.930624 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 21 12:35:27.930637 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 21 12:35:27.930654 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 12:35:27.930667 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 21 12:35:27.930679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:35:27.930693 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 21 12:35:27.930706 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 21 12:35:27.930719 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 21 12:35:27.930732 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 21 12:35:27.930745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:35:27.930760 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 21 12:35:27.930773 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 12:35:27.930786 systemd[1]: Reached target swap.target - Swaps.
Mar 21 12:35:27.930799 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 21 12:35:27.930851 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 21 12:35:27.930866 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 21 12:35:27.930879 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:35:27.930893 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:35:27.930906 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:35:27.930922 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 21 12:35:27.930935 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 21 12:35:27.930947 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 21 12:35:27.930961 systemd[1]: Mounting media.mount - External Media Directory...
Mar 21 12:35:27.930974 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:35:27.930987 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 21 12:35:27.931005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 21 12:35:27.931020 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 21 12:35:27.931033 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 21 12:35:27.931049 systemd[1]: Reached target machines.target - Containers.
Mar 21 12:35:27.931062 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 21 12:35:27.931075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:35:27.931089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 12:35:27.931102 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 21 12:35:27.931115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:35:27.931128 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 12:35:27.931141 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:35:27.931156 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 21 12:35:27.931169 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:35:27.931183 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 21 12:35:27.931195 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 21 12:35:27.931208 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 21 12:35:27.931221 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 21 12:35:27.931234 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 21 12:35:27.931247 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:35:27.931262 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 12:35:27.931275 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 12:35:27.931288 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 21 12:35:27.931301 kernel: loop: module loaded
Mar 21 12:35:27.931313 kernel: fuse: init (API version 7.39)
Mar 21 12:35:27.931325 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 21 12:35:27.931339 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 21 12:35:27.931355 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 21 12:35:27.931369 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 21 12:35:27.931381 systemd[1]: Stopped verity-setup.service.
Mar 21 12:35:27.931395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:35:27.931407 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 21 12:35:27.931420 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 21 12:35:27.931434 systemd[1]: Mounted media.mount - External Media Directory.
Mar 21 12:35:27.931449 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 21 12:35:27.931462 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 21 12:35:27.931475 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 21 12:35:27.931487 kernel: ACPI: bus type drm_connector registered
Mar 21 12:35:27.931500 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 21 12:35:27.931516 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:35:27.931529 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 21 12:35:27.931542 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 21 12:35:27.931555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:35:27.931567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:35:27.931580 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 12:35:27.931593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 12:35:27.931606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:35:27.931639 systemd-journald[1140]: Collecting audit messages is disabled.
Mar 21 12:35:27.931666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:35:27.931679 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 21 12:35:27.931692 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 21 12:35:27.931705 systemd-journald[1140]: Journal started
Mar 21 12:35:27.931728 systemd-journald[1140]: Runtime Journal (/run/log/journal/8d03ac7777f34971abcdd31d22c45cea) is 6M, max 48.2M, 42.2M free.
Mar 21 12:35:27.652094 systemd[1]: Queued start job for default target multi-user.target.
Mar 21 12:35:27.669996 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 21 12:35:27.670554 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 21 12:35:27.934773 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 12:35:27.935430 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:35:27.935658 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:35:27.937411 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:35:27.938888 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 21 12:35:27.940491 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 21 12:35:27.942088 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 21 12:35:27.959883 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 21 12:35:27.962870 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 21 12:35:27.965167 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 21 12:35:27.966312 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 21 12:35:27.966344 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 21 12:35:27.968447 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 21 12:35:27.974937 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 21 12:35:27.977551 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 21 12:35:27.979013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:35:27.980960 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 21 12:35:27.983891 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 21 12:35:27.985223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:35:27.994204 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 21 12:35:27.997375 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:35:28.000951 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:35:28.004781 systemd-journald[1140]: Time spent on flushing to /var/log/journal/8d03ac7777f34971abcdd31d22c45cea is 17.913ms for 1061 entries.
Mar 21 12:35:28.004781 systemd-journald[1140]: System Journal (/var/log/journal/8d03ac7777f34971abcdd31d22c45cea) is 8M, max 195.6M, 187.6M free.
Mar 21 12:35:28.281287 systemd-journald[1140]: Received client request to flush runtime journal.
Mar 21 12:35:28.281335 kernel: loop0: detected capacity change from 0 to 210664
Mar 21 12:35:28.281354 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 21 12:35:28.005197 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 21 12:35:28.010097 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 21 12:35:28.014476 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 21 12:35:28.026177 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 12:35:28.027604 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 21 12:35:28.029118 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 21 12:35:28.035512 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 21 12:35:28.058466 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 21 12:35:28.059755 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Mar 21 12:35:28.059769 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Mar 21 12:35:28.060043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:35:28.067758 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:35:28.070648 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 21 12:35:28.085878 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 21 12:35:28.087885 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 21 12:35:28.091205 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 21 12:35:28.283702 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 21 12:35:28.300854 kernel: loop1: detected capacity change from 0 to 151640
Mar 21 12:35:28.335220 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 21 12:35:28.337547 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 21 12:35:28.342605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 12:35:28.377470 kernel: loop2: detected capacity change from 0 to 109808
Mar 21 12:35:28.385086 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Mar 21 12:35:28.385108 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Mar 21 12:35:28.391032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:35:28.422093 kernel: loop3: detected capacity change from 0 to 210664
Mar 21 12:35:28.432907 kernel: loop4: detected capacity change from 0 to 151640
Mar 21 12:35:28.447849 kernel: loop5: detected capacity change from 0 to 109808
Mar 21 12:35:28.462950 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 21 12:35:28.463776 (sd-merge)[1213]: Merged extensions into '/usr'.
Mar 21 12:35:28.517492 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 21 12:35:28.517519 systemd[1]: Reloading...
Mar 21 12:35:28.598840 zram_generator::config[1246]: No configuration found.
Mar 21 12:35:28.741082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:35:28.754424 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 21 12:35:28.808190 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 21 12:35:28.808791 systemd[1]: Reloading finished in 290 ms.
Mar 21 12:35:28.833358 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 21 12:35:28.834968 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 21 12:35:28.854396 systemd[1]: Starting ensure-sysext.service...
Mar 21 12:35:28.856405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 12:35:28.907750 systemd[1]: Reload requested from client PID 1278 ('systemctl') (unit ensure-sysext.service)...
Mar 21 12:35:28.907774 systemd[1]: Reloading...
Mar 21 12:35:28.926069 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 21 12:35:28.926365 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 21 12:35:28.927395 systemd-tmpfiles[1279]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 21 12:35:28.927675 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Mar 21 12:35:28.927762 systemd-tmpfiles[1279]: ACLs are not supported, ignoring.
Mar 21 12:35:28.932200 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 12:35:28.932213 systemd-tmpfiles[1279]: Skipping /boot
Mar 21 12:35:28.957358 systemd-tmpfiles[1279]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 12:35:28.957428 systemd-tmpfiles[1279]: Skipping /boot
Mar 21 12:35:28.978900 zram_generator::config[1308]: No configuration found.
Mar 21 12:35:29.105505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:35:29.175185 systemd[1]: Reloading finished in 266 ms.
Mar 21 12:35:29.190841 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 21 12:35:29.210149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:35:29.221231 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 21 12:35:29.224130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 21 12:35:29.240836 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 21 12:35:29.245194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 21 12:35:29.248568 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:35:29.251051 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 21 12:35:29.255801 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:35:29.256558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:35:29.259742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:35:29.262895 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:35:29.268593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:35:29.270015 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:35:29.270127 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:35:29.272165 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 21 12:35:29.273268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:35:29.274540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:35:29.274773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:35:29.287232 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:35:29.287631 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:35:29.290537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:35:29.290777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:35:29.294572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:35:29.296238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:35:29.299151 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Mar 21 12:35:29.300036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:35:29.301313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:35:29.301436 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:35:29.301538 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:35:29.301656 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:35:29.302938 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 21 12:35:29.305379 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 21 12:35:29.306616 augenrules[1380]: No rules
Mar 21 12:35:29.307529 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 21 12:35:29.307808 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 21 12:35:29.314732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:35:29.315269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:35:29.323243 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:35:29.325106 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 21 12:35:29.327258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:35:29.328915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:35:29.332955 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 12:35:29.345125 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:35:29.352056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:35:29.353544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:35:29.353703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:35:29.376299 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 21 12:35:29.378132 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:35:29.384011 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:35:29.385567 augenrules[1389]: /sbin/augenrules: No change
Mar 21 12:35:29.387001 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 21 12:35:29.390031 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 21 12:35:29.394436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:35:29.395101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:35:29.398034 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 12:35:29.402369 augenrules[1435]: No rules
Mar 21 12:35:29.398298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 12:35:29.400190 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 21 12:35:29.400534 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 21 12:35:29.402093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:35:29.403917 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:35:29.406354 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:35:29.407304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:35:29.410872 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 21 12:35:29.424901 systemd[1]: Finished ensure-sysext.service.
Mar 21 12:35:29.440461 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 21 12:35:29.488573 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 21 12:35:29.491173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:35:29.491271 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:35:29.499005 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 21 12:35:29.500527 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 21 12:35:29.501078 systemd-resolved[1350]: Positive Trust Anchors:
Mar 21 12:35:29.501299 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 21 12:35:29.501330 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 21 12:35:29.506334 systemd-resolved[1350]: Defaulting to hostname 'linux'.
Mar 21 12:35:29.508245 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 21 12:35:29.510134 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:35:29.549876 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1405)
Mar 21 12:35:29.560840 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 21 12:35:29.568843 kernel: ACPI: button: Power Button [PWRF]
Mar 21 12:35:29.590513 systemd-networkd[1450]: lo: Link UP
Mar 21 12:35:29.590528 systemd-networkd[1450]: lo: Gained carrier
Mar 21 12:35:29.593403 systemd-networkd[1450]: Enumeration completed
Mar 21 12:35:29.593511 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 21 12:35:29.594750 systemd[1]: Reached target network.target - Network.
Mar 21 12:35:29.596139 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:35:29.596152 systemd-networkd[1450]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 21 12:35:29.597533 systemd-networkd[1450]: eth0: Link UP
Mar 21 12:35:29.597545 systemd-networkd[1450]: eth0: Gained carrier
Mar 21 12:35:29.597558 systemd-networkd[1450]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:35:29.598315 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 21 12:35:29.601013 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 21 12:35:29.604838 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 21 12:35:29.609483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 21 12:35:29.610877 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 21 12:35:29.616409 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 21 12:35:29.616608 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 21 12:35:29.616829 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 21 12:35:29.622914 systemd-networkd[1450]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 21 12:35:29.624125 systemd-timesyncd[1451]: Network configuration changed, trying to establish connection.
Mar 21 12:35:29.624367 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 21 12:35:29.626749 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 21 12:35:29.628136 systemd[1]: Reached target time-set.target - System Time Set.
Mar 21 12:35:30.666951 systemd-resolved[1350]: Clock change detected. Flushing caches.
Mar 21 12:35:30.667461 systemd-timesyncd[1451]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 21 12:35:30.667573 systemd-timesyncd[1451]: Initial clock synchronization to Fri 2025-03-21 12:35:30.666896 UTC.
Mar 21 12:35:30.682207 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 21 12:35:30.694691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:35:30.696421 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 21 12:35:30.700959 kernel: mousedev: PS/2 mouse device common for all mice
Mar 21 12:35:30.707677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:35:30.708742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:35:30.714475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:35:30.775068 kernel: kvm_amd: TSC scaling supported
Mar 21 12:35:30.775122 kernel: kvm_amd: Nested Virtualization enabled
Mar 21 12:35:30.775136 kernel: kvm_amd: Nested Paging enabled
Mar 21 12:35:30.776505 kernel: kvm_amd: LBR virtualization supported
Mar 21 12:35:30.776534 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 21 12:35:30.777201 kernel: kvm_amd: Virtual GIF supported
Mar 21 12:35:30.797055 kernel: EDAC MC: Ver: 3.0.0
Mar 21 12:35:30.816909 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:35:30.833788 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 21 12:35:30.836881 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 21 12:35:30.859058 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 21 12:35:30.894828 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 21 12:35:30.896449 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:35:30.897580 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 21 12:35:30.898752 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 21 12:35:30.900012 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 21 12:35:30.901492 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 21 12:35:30.902673 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 21 12:35:30.903928 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 21 12:35:30.905321 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 21 12:35:30.905355 systemd[1]: Reached target paths.target - Path Units.
Mar 21 12:35:30.906286 systemd[1]: Reached target timers.target - Timer Units.
Mar 21 12:35:30.908044 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 21 12:35:30.910901 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 21 12:35:30.914820 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 21 12:35:30.916342 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 21 12:35:30.917584 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 21 12:35:30.921411 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 21 12:35:30.923014 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 21 12:35:30.925689 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 21 12:35:30.927400 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 21 12:35:30.928563 systemd[1]: Reached target sockets.target - Socket Units.
Mar 21 12:35:30.929538 systemd[1]: Reached target basic.target - Basic System.
Mar 21 12:35:30.930502 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 21 12:35:30.930531 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 21 12:35:30.931567 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 21 12:35:30.933769 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 21 12:35:30.938173 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 21 12:35:30.938100 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 21 12:35:30.940448 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 21 12:35:30.941510 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 21 12:35:30.943253 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 21 12:35:30.946124 jq[1488]: false
Mar 21 12:35:30.946367 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 21 12:35:30.949208 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 21 12:35:30.958244 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 21 12:35:30.964870 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 21 12:35:30.967014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 21 12:35:30.969372 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 21 12:35:30.970119 dbus-daemon[1487]: [system] SELinux support is enabled
Mar 21 12:35:30.971253 systemd[1]: Starting update-engine.service - Update Engine...
Mar 21 12:35:30.976214 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 21 12:35:30.978598 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 21 12:35:30.979653 extend-filesystems[1489]: Found loop3
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found loop4
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found loop5
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found sr0
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found vda
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found vda1
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found vda2
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found vda3
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found usr
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found vda4
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found vda6
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found vda7
Mar 21 12:35:30.982927 extend-filesystems[1489]: Found vda9
Mar 21 12:35:30.982927 extend-filesystems[1489]: Checking size of /dev/vda9
Mar 21 12:35:31.003140 extend-filesystems[1489]: Resized partition /dev/vda9
Mar 21 12:35:30.983661 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 21 12:35:31.004261 jq[1502]: true
Mar 21 12:35:30.984792 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 21 12:35:30.985110 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 21 12:35:30.986005 systemd[1]: motdgen.service: Deactivated successfully.
Mar 21 12:35:30.986290 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 21 12:35:30.997351 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 21 12:35:30.997631 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 21 12:35:31.006990 extend-filesystems[1513]: resize2fs 1.47.2 (1-Jan-2025)
Mar 21 12:35:31.010593 update_engine[1501]: I20250321 12:35:31.009212 1501 main.cc:92] Flatcar Update Engine starting
Mar 21 12:35:31.013280 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 21 12:35:31.013305 update_engine[1501]: I20250321 12:35:31.011920 1501 update_check_scheduler.cc:74] Next update check in 8m37s
Mar 21 12:35:31.013844 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 21 12:35:31.054298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 21 12:35:31.054333 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 21 12:35:31.055822 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 21 12:35:31.055838 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 21 12:35:31.059794 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 21 12:35:31.061361 jq[1517]: true
Mar 21 12:35:31.073266 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1400)
Mar 21 12:35:31.082294 systemd[1]: Started update-engine.service - Update Engine.
Mar 21 12:35:31.098486 tar[1509]: linux-amd64/helm
Mar 21 12:35:31.099965 extend-filesystems[1513]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 21 12:35:31.099965 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 21 12:35:31.099965 extend-filesystems[1513]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 21 12:35:31.108212 extend-filesystems[1489]: Resized filesystem in /dev/vda9
Mar 21 12:35:31.103305 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 21 12:35:31.106631 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 21 12:35:31.107100 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 21 12:35:31.109231 systemd-logind[1496]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 21 12:35:31.109257 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 21 12:35:31.110521 systemd-logind[1496]: New seat seat0.
Mar 21 12:35:31.124100 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 21 12:35:31.254541 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 21 12:35:31.255726 bash[1543]: Updated "/home/core/.ssh/authorized_keys"
Mar 21 12:35:31.257690 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 21 12:35:31.260400 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 21 12:35:31.493051 containerd[1512]: time="2025-03-21T12:35:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 21 12:35:31.494924 containerd[1512]: time="2025-03-21T12:35:31.494883325Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 21 12:35:31.509162 containerd[1512]: time="2025-03-21T12:35:31.509010789Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.786µs" Mar 21 12:35:31.509162 containerd[1512]: time="2025-03-21T12:35:31.509066834Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 21 12:35:31.509162 containerd[1512]: time="2025-03-21T12:35:31.509086631Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 21 12:35:31.509373 containerd[1512]: time="2025-03-21T12:35:31.509334887Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 21 12:35:31.509535 containerd[1512]: time="2025-03-21T12:35:31.509505858Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 21 12:35:31.509583 containerd[1512]: time="2025-03-21T12:35:31.509543999Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 21 12:35:31.509650 containerd[1512]: time="2025-03-21T12:35:31.509628247Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 21 12:35:31.509717 containerd[1512]: time="2025-03-21T12:35:31.509703138Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 21 
12:35:31.510111 containerd[1512]: time="2025-03-21T12:35:31.510089903Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 21 12:35:31.510179 containerd[1512]: time="2025-03-21T12:35:31.510164994Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 21 12:35:31.510234 containerd[1512]: time="2025-03-21T12:35:31.510219877Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 21 12:35:31.510308 containerd[1512]: time="2025-03-21T12:35:31.510293686Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 21 12:35:31.510484 containerd[1512]: time="2025-03-21T12:35:31.510466841Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 21 12:35:31.512308 containerd[1512]: time="2025-03-21T12:35:31.512279491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 21 12:35:31.512421 containerd[1512]: time="2025-03-21T12:35:31.512404706Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 21 12:35:31.512472 containerd[1512]: time="2025-03-21T12:35:31.512459379Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 21 12:35:31.512553 containerd[1512]: time="2025-03-21T12:35:31.512538828Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 21 12:35:31.512910 
containerd[1512]: time="2025-03-21T12:35:31.512891279Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 21 12:35:31.513057 containerd[1512]: time="2025-03-21T12:35:31.513039898Z" level=info msg="metadata content store policy set" policy=shared Mar 21 12:35:31.560185 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 21 12:35:31.581985 containerd[1512]: time="2025-03-21T12:35:31.581845101Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 21 12:35:31.581985 containerd[1512]: time="2025-03-21T12:35:31.581939198Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 21 12:35:31.581985 containerd[1512]: time="2025-03-21T12:35:31.581957322Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 21 12:35:31.581985 containerd[1512]: time="2025-03-21T12:35:31.581970527Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 21 12:35:31.581985 containerd[1512]: time="2025-03-21T12:35:31.581986917Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 21 12:35:31.581985 containerd[1512]: time="2025-03-21T12:35:31.581999030Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 21 12:35:31.582317 containerd[1512]: time="2025-03-21T12:35:31.582016753Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 21 12:35:31.582317 containerd[1512]: time="2025-03-21T12:35:31.582048753Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 21 12:35:31.582317 containerd[1512]: time="2025-03-21T12:35:31.582062168Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Mar 21 12:35:31.582317 containerd[1512]: time="2025-03-21T12:35:31.582074311Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 21 12:35:31.582317 containerd[1512]: time="2025-03-21T12:35:31.582084460Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 21 12:35:31.582317 containerd[1512]: time="2025-03-21T12:35:31.582102975Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 21 12:35:31.582317 containerd[1512]: time="2025-03-21T12:35:31.582300345Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 21 12:35:31.582448 containerd[1512]: time="2025-03-21T12:35:31.582335000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 21 12:35:31.582448 containerd[1512]: time="2025-03-21T12:35:31.582353325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 21 12:35:31.582448 containerd[1512]: time="2025-03-21T12:35:31.582371308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 21 12:35:31.582448 containerd[1512]: time="2025-03-21T12:35:31.582382840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 21 12:35:31.582448 containerd[1512]: time="2025-03-21T12:35:31.582393771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 21 12:35:31.582448 containerd[1512]: time="2025-03-21T12:35:31.582405212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 21 12:35:31.582448 containerd[1512]: time="2025-03-21T12:35:31.582418657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 21 12:35:31.582448 containerd[1512]: 
time="2025-03-21T12:35:31.582433345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 21 12:35:31.582448 containerd[1512]: time="2025-03-21T12:35:31.582446469Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 21 12:35:31.582612 containerd[1512]: time="2025-03-21T12:35:31.582465906Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 21 12:35:31.582612 containerd[1512]: time="2025-03-21T12:35:31.582551577Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 21 12:35:31.582612 containerd[1512]: time="2025-03-21T12:35:31.582569049Z" level=info msg="Start snapshots syncer" Mar 21 12:35:31.582612 containerd[1512]: time="2025-03-21T12:35:31.582600128Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 21 12:35:31.582957 containerd[1512]: time="2025-03-21T12:35:31.582910430Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 21 12:35:31.583178 containerd[1512]: time="2025-03-21T12:35:31.582963019Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 21 12:35:31.583178 containerd[1512]: time="2025-03-21T12:35:31.583091660Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 21 12:35:31.583268 containerd[1512]: time="2025-03-21T12:35:31.583209381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 21 12:35:31.583268 containerd[1512]: time="2025-03-21T12:35:31.583233877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 21 12:35:31.583268 containerd[1512]: time="2025-03-21T12:35:31.583245829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 21 12:35:31.583268 containerd[1512]: time="2025-03-21T12:35:31.583256649Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 21 12:35:31.583268 containerd[1512]: time="2025-03-21T12:35:31.583269453Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 21 12:35:31.583268 containerd[1512]: time="2025-03-21T12:35:31.583280414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583292126Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583315640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583327923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583337291Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583375552Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583391633Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583401261Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583411810Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583419926Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 21 12:35:31.583433 containerd[1512]: time="2025-03-21T12:35:31.583434593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 21 12:35:31.583687 containerd[1512]: time="2025-03-21T12:35:31.583451184Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 21 12:35:31.583687 containerd[1512]: time="2025-03-21T12:35:31.583470781Z" level=info msg="runtime interface created" Mar 21 12:35:31.583687 containerd[1512]: time="2025-03-21T12:35:31.583476913Z" level=info msg="created NRI interface" Mar 21 12:35:31.583687 containerd[1512]: time="2025-03-21T12:35:31.583489787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 21 12:35:31.583687 containerd[1512]: time="2025-03-21T12:35:31.583501258Z" level=info msg="Connect containerd service" Mar 21 12:35:31.583687 containerd[1512]: time="2025-03-21T12:35:31.583524963Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 21 12:35:31.584345 
containerd[1512]: time="2025-03-21T12:35:31.584312470Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 21 12:35:31.587939 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 21 12:35:31.591487 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 21 12:35:31.625653 systemd[1]: issuegen.service: Deactivated successfully. Mar 21 12:35:31.625981 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 21 12:35:31.630472 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 21 12:35:31.659057 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 21 12:35:31.663320 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 21 12:35:31.666461 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 21 12:35:31.667904 systemd[1]: Reached target getty.target - Login Prompts. Mar 21 12:35:31.719870 tar[1509]: linux-amd64/LICENSE Mar 21 12:35:31.719870 tar[1509]: linux-amd64/README.md Mar 21 12:35:31.746869 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 21 12:35:31.771898 containerd[1512]: time="2025-03-21T12:35:31.771756238Z" level=info msg="Start subscribing containerd event" Mar 21 12:35:31.771898 containerd[1512]: time="2025-03-21T12:35:31.771870783Z" level=info msg="Start recovering state" Mar 21 12:35:31.772157 containerd[1512]: time="2025-03-21T12:35:31.772098260Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 21 12:35:31.772194 containerd[1512]: time="2025-03-21T12:35:31.772111054Z" level=info msg="Start event monitor" Mar 21 12:35:31.772194 containerd[1512]: time="2025-03-21T12:35:31.772190112Z" level=info msg="Start cni network conf syncer for default" Mar 21 12:35:31.772243 containerd[1512]: time="2025-03-21T12:35:31.772201794Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 21 12:35:31.772243 containerd[1512]: time="2025-03-21T12:35:31.772212384Z" level=info msg="Start streaming server" Mar 21 12:35:31.772243 containerd[1512]: time="2025-03-21T12:35:31.772229987Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 21 12:35:31.772243 containerd[1512]: time="2025-03-21T12:35:31.772239224Z" level=info msg="runtime interface starting up..." Mar 21 12:35:31.772316 containerd[1512]: time="2025-03-21T12:35:31.772245977Z" level=info msg="starting plugins..." Mar 21 12:35:31.772316 containerd[1512]: time="2025-03-21T12:35:31.772271134Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 21 12:35:31.772484 containerd[1512]: time="2025-03-21T12:35:31.772466941Z" level=info msg="containerd successfully booted in 0.279987s" Mar 21 12:35:31.772581 systemd[1]: Started containerd.service - containerd container runtime. Mar 21 12:35:32.288394 systemd-networkd[1450]: eth0: Gained IPv6LL Mar 21 12:35:32.292673 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 21 12:35:32.294539 systemd[1]: Reached target network-online.target - Network is Online. Mar 21 12:35:32.297339 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 21 12:35:32.299850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:35:32.302214 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 21 12:35:32.343481 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 21 12:35:32.347121 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 21 12:35:32.347425 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 21 12:35:32.348993 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 21 12:35:33.060876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:33.062672 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 21 12:35:33.064635 systemd[1]: Startup finished in 1.174s (kernel) + 6.340s (initrd) + 5.014s (userspace) = 12.529s. Mar 21 12:35:33.074427 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 12:35:33.517232 kubelet[1612]: E0321 12:35:33.517071 1612 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 12:35:33.521869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 12:35:33.522150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 12:35:33.522568 systemd[1]: kubelet.service: Consumed 1.099s CPU time, 244.4M memory peak. Mar 21 12:35:34.657804 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 21 12:35:34.659319 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:54334.service - OpenSSH per-connection server daemon (10.0.0.1:54334). 
Mar 21 12:35:34.720610 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 54334 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA Mar 21 12:35:34.722785 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:35:34.729745 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 21 12:35:34.730998 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 21 12:35:34.736758 systemd-logind[1496]: New session 1 of user core. Mar 21 12:35:34.757244 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 21 12:35:34.760847 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 21 12:35:34.784546 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 21 12:35:34.786873 systemd-logind[1496]: New session c1 of user core. Mar 21 12:35:34.941762 systemd[1630]: Queued start job for default target default.target. Mar 21 12:35:34.953497 systemd[1630]: Created slice app.slice - User Application Slice. Mar 21 12:35:34.953527 systemd[1630]: Reached target paths.target - Paths. Mar 21 12:35:34.953574 systemd[1630]: Reached target timers.target - Timers. Mar 21 12:35:34.955304 systemd[1630]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 21 12:35:34.966986 systemd[1630]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 21 12:35:34.967161 systemd[1630]: Reached target sockets.target - Sockets. Mar 21 12:35:34.967208 systemd[1630]: Reached target basic.target - Basic System. Mar 21 12:35:34.967253 systemd[1630]: Reached target default.target - Main User Target. Mar 21 12:35:34.967286 systemd[1630]: Startup finished in 173ms. Mar 21 12:35:34.967974 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 21 12:35:34.978202 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 21 12:35:35.041128 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:54346.service - OpenSSH per-connection server daemon (10.0.0.1:54346). Mar 21 12:35:35.088737 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 54346 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA Mar 21 12:35:35.090360 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:35:35.094640 systemd-logind[1496]: New session 2 of user core. Mar 21 12:35:35.104161 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 21 12:35:35.159840 sshd[1643]: Connection closed by 10.0.0.1 port 54346 Mar 21 12:35:35.160244 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Mar 21 12:35:35.178642 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:54346.service: Deactivated successfully. Mar 21 12:35:35.180491 systemd[1]: session-2.scope: Deactivated successfully. Mar 21 12:35:35.182000 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Mar 21 12:35:35.183265 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:54348.service - OpenSSH per-connection server daemon (10.0.0.1:54348). Mar 21 12:35:35.184018 systemd-logind[1496]: Removed session 2. Mar 21 12:35:35.234710 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 54348 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA Mar 21 12:35:35.236141 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:35:35.240291 systemd-logind[1496]: New session 3 of user core. Mar 21 12:35:35.250156 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 21 12:35:35.299890 sshd[1651]: Connection closed by 10.0.0.1 port 54348 Mar 21 12:35:35.300448 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Mar 21 12:35:35.318681 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:54348.service: Deactivated successfully. 
Mar 21 12:35:35.320679 systemd[1]: session-3.scope: Deactivated successfully. Mar 21 12:35:35.322395 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. Mar 21 12:35:35.323675 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:54350.service - OpenSSH per-connection server daemon (10.0.0.1:54350). Mar 21 12:35:35.324508 systemd-logind[1496]: Removed session 3. Mar 21 12:35:35.369229 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 54350 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA Mar 21 12:35:35.370729 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:35:35.375554 systemd-logind[1496]: New session 4 of user core. Mar 21 12:35:35.386208 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 21 12:35:35.440271 sshd[1659]: Connection closed by 10.0.0.1 port 54350 Mar 21 12:35:35.440606 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Mar 21 12:35:35.456832 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:54350.service: Deactivated successfully. Mar 21 12:35:35.458873 systemd[1]: session-4.scope: Deactivated successfully. Mar 21 12:35:35.460529 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit. Mar 21 12:35:35.461858 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:54358.service - OpenSSH per-connection server daemon (10.0.0.1:54358). Mar 21 12:35:35.462875 systemd-logind[1496]: Removed session 4. Mar 21 12:35:35.512005 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 54358 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA Mar 21 12:35:35.513402 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:35:35.517926 systemd-logind[1496]: New session 5 of user core. Mar 21 12:35:35.527161 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 21 12:35:35.587143 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 21 12:35:35.587585 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:35:35.610643 sudo[1668]: pam_unix(sudo:session): session closed for user root Mar 21 12:35:35.612256 sshd[1667]: Connection closed by 10.0.0.1 port 54358 Mar 21 12:35:35.612678 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Mar 21 12:35:35.627350 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:54358.service: Deactivated successfully. Mar 21 12:35:35.629672 systemd[1]: session-5.scope: Deactivated successfully. Mar 21 12:35:35.631689 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit. Mar 21 12:35:35.633372 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:54372.service - OpenSSH per-connection server daemon (10.0.0.1:54372). Mar 21 12:35:35.634194 systemd-logind[1496]: Removed session 5. Mar 21 12:35:35.683996 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 54372 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA Mar 21 12:35:35.685560 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:35:35.690006 systemd-logind[1496]: New session 6 of user core. Mar 21 12:35:35.706170 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 21 12:35:35.761793 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 21 12:35:35.762164 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:35:35.766698 sudo[1679]: pam_unix(sudo:session): session closed for user root Mar 21 12:35:35.773807 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 21 12:35:35.774184 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:35:35.785243 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 21 12:35:35.833882 augenrules[1701]: No rules Mar 21 12:35:35.835808 systemd[1]: audit-rules.service: Deactivated successfully. Mar 21 12:35:35.836131 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 21 12:35:35.837413 sudo[1678]: pam_unix(sudo:session): session closed for user root Mar 21 12:35:35.839090 sshd[1677]: Connection closed by 10.0.0.1 port 54372 Mar 21 12:35:35.839375 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Mar 21 12:35:35.847921 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:54372.service: Deactivated successfully. Mar 21 12:35:35.849983 systemd[1]: session-6.scope: Deactivated successfully. Mar 21 12:35:35.851581 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Mar 21 12:35:35.852953 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:54376.service - OpenSSH per-connection server daemon (10.0.0.1:54376). Mar 21 12:35:35.853731 systemd-logind[1496]: Removed session 6. Mar 21 12:35:35.900360 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 54376 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA Mar 21 12:35:35.901803 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:35:35.906276 systemd-logind[1496]: New session 7 of user core. 
Mar 21 12:35:35.916163 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 21 12:35:35.970329 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 21 12:35:35.970676 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:35:36.282717 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 21 12:35:36.299417 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 21 12:35:36.556236 dockerd[1734]: time="2025-03-21T12:35:36.556077256Z" level=info msg="Starting up" Mar 21 12:35:36.557502 dockerd[1734]: time="2025-03-21T12:35:36.557463196Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 21 12:35:36.929279 dockerd[1734]: time="2025-03-21T12:35:36.929210270Z" level=info msg="Loading containers: start." Mar 21 12:35:37.123084 kernel: Initializing XFRM netlink socket Mar 21 12:35:37.205424 systemd-networkd[1450]: docker0: Link UP Mar 21 12:35:37.294322 dockerd[1734]: time="2025-03-21T12:35:37.294260470Z" level=info msg="Loading containers: done." Mar 21 12:35:37.310206 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2307411167-merged.mount: Deactivated successfully. 
Mar 21 12:35:37.311343 dockerd[1734]: time="2025-03-21T12:35:37.311295700Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 21 12:35:37.311428 dockerd[1734]: time="2025-03-21T12:35:37.311390527Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 21 12:35:37.311525 dockerd[1734]: time="2025-03-21T12:35:37.311499231Z" level=info msg="Daemon has completed initialization" Mar 21 12:35:37.349071 dockerd[1734]: time="2025-03-21T12:35:37.348979848Z" level=info msg="API listen on /run/docker.sock" Mar 21 12:35:37.349225 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 21 12:35:38.273546 containerd[1512]: time="2025-03-21T12:35:38.273485757Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 21 12:35:38.881602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241696727.mount: Deactivated successfully. 
Mar 21 12:35:40.242903 containerd[1512]: time="2025-03-21T12:35:40.242807204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:40.243498 containerd[1512]: time="2025-03-21T12:35:40.243377915Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 21 12:35:40.244617 containerd[1512]: time="2025-03-21T12:35:40.244577496Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:40.247153 containerd[1512]: time="2025-03-21T12:35:40.247095750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:40.247943 containerd[1512]: time="2025-03-21T12:35:40.247909677Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 1.97436531s" Mar 21 12:35:40.248011 containerd[1512]: time="2025-03-21T12:35:40.247947308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 21 12:35:40.275963 containerd[1512]: time="2025-03-21T12:35:40.275907323Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 21 12:35:42.011067 containerd[1512]: time="2025-03-21T12:35:42.010965986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:42.011880 containerd[1512]: time="2025-03-21T12:35:42.011776397Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 21 12:35:42.012962 containerd[1512]: time="2025-03-21T12:35:42.012925813Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:42.016013 containerd[1512]: time="2025-03-21T12:35:42.015949736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:42.017044 containerd[1512]: time="2025-03-21T12:35:42.016988305Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 1.741028764s" Mar 21 12:35:42.017098 containerd[1512]: time="2025-03-21T12:35:42.017047546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 21 12:35:42.039386 containerd[1512]: time="2025-03-21T12:35:42.039340589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 21 12:35:43.100274 containerd[1512]: time="2025-03-21T12:35:43.100195829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:43.101055 containerd[1512]: 
time="2025-03-21T12:35:43.100966666Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 21 12:35:43.102632 containerd[1512]: time="2025-03-21T12:35:43.102563101Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:43.105385 containerd[1512]: time="2025-03-21T12:35:43.105333678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:43.106221 containerd[1512]: time="2025-03-21T12:35:43.106181138Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.066796577s" Mar 21 12:35:43.106221 containerd[1512]: time="2025-03-21T12:35:43.106217537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 21 12:35:43.127358 containerd[1512]: time="2025-03-21T12:35:43.127306090Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 21 12:35:43.671891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 21 12:35:43.673806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:35:43.993560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 21 12:35:44.013394 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 12:35:44.225410 kubelet[2049]: E0321 12:35:44.225345 2049 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 12:35:44.232644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 12:35:44.232911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 12:35:44.233461 systemd[1]: kubelet.service: Consumed 237ms CPU time, 98.6M memory peak. Mar 21 12:35:44.689932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145439898.mount: Deactivated successfully. Mar 21 12:35:44.937724 containerd[1512]: time="2025-03-21T12:35:44.937662378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:44.938493 containerd[1512]: time="2025-03-21T12:35:44.938453041Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372" Mar 21 12:35:44.939646 containerd[1512]: time="2025-03-21T12:35:44.939619389Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:44.941917 containerd[1512]: time="2025-03-21T12:35:44.941809929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:44.942362 containerd[1512]: time="2025-03-21T12:35:44.942305689Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.814953813s" Mar 21 12:35:44.942362 containerd[1512]: time="2025-03-21T12:35:44.942357446Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 21 12:35:44.961492 containerd[1512]: time="2025-03-21T12:35:44.961446889Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 21 12:35:45.511538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3428233135.mount: Deactivated successfully. Mar 21 12:35:46.385935 containerd[1512]: time="2025-03-21T12:35:46.385814244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:46.386696 containerd[1512]: time="2025-03-21T12:35:46.386583738Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 21 12:35:46.388172 containerd[1512]: time="2025-03-21T12:35:46.388087078Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:46.391244 containerd[1512]: time="2025-03-21T12:35:46.391202483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:46.392122 containerd[1512]: time="2025-03-21T12:35:46.392069750Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.430581723s" Mar 21 12:35:46.392122 containerd[1512]: time="2025-03-21T12:35:46.392106328Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 21 12:35:46.419361 containerd[1512]: time="2025-03-21T12:35:46.419320093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 21 12:35:46.890315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555676.mount: Deactivated successfully. Mar 21 12:35:46.897866 containerd[1512]: time="2025-03-21T12:35:46.897800737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:46.898681 containerd[1512]: time="2025-03-21T12:35:46.898599336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Mar 21 12:35:46.899984 containerd[1512]: time="2025-03-21T12:35:46.899932276Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:46.902005 containerd[1512]: time="2025-03-21T12:35:46.901958558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:46.902767 containerd[1512]: time="2025-03-21T12:35:46.902706741Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 483.34527ms" Mar 21 12:35:46.902767 containerd[1512]: time="2025-03-21T12:35:46.902763398Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 21 12:35:46.923343 containerd[1512]: time="2025-03-21T12:35:46.923294686Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 21 12:35:47.499639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763168352.mount: Deactivated successfully. Mar 21 12:35:48.957902 containerd[1512]: time="2025-03-21T12:35:48.957831312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:48.958568 containerd[1512]: time="2025-03-21T12:35:48.958501218Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Mar 21 12:35:48.959649 containerd[1512]: time="2025-03-21T12:35:48.959606973Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:48.962384 containerd[1512]: time="2025-03-21T12:35:48.962328318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:35:48.963483 containerd[1512]: time="2025-03-21T12:35:48.963430256Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.040090245s" Mar 21 
12:35:48.963483 containerd[1512]: time="2025-03-21T12:35:48.963478586Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 21 12:35:52.251577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:52.251763 systemd[1]: kubelet.service: Consumed 237ms CPU time, 98.6M memory peak. Mar 21 12:35:52.254244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:35:52.289769 systemd[1]: Reload requested from client PID 2285 ('systemctl') (unit session-7.scope)... Mar 21 12:35:52.289798 systemd[1]: Reloading... Mar 21 12:35:52.388057 zram_generator::config[2330]: No configuration found. Mar 21 12:35:52.541574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 21 12:35:52.645958 systemd[1]: Reloading finished in 355 ms. Mar 21 12:35:52.727192 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:35:52.729870 systemd[1]: kubelet.service: Deactivated successfully. Mar 21 12:35:52.730190 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:52.730233 systemd[1]: kubelet.service: Consumed 161ms CPU time, 83.7M memory peak. Mar 21 12:35:52.731970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:35:52.893563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:35:52.902498 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 21 12:35:52.944751 kubelet[2379]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 12:35:52.944751 kubelet[2379]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 21 12:35:52.944751 kubelet[2379]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 12:35:52.945266 kubelet[2379]: I0321 12:35:52.944773 2379 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 21 12:35:53.179438 kubelet[2379]: I0321 12:35:53.179400 2379 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 21 12:35:53.179438 kubelet[2379]: I0321 12:35:53.179431 2379 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 21 12:35:53.179638 kubelet[2379]: I0321 12:35:53.179620 2379 server.go:927] "Client rotation is on, will bootstrap in background" Mar 21 12:35:53.192215 kubelet[2379]: I0321 12:35:53.192181 2379 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 21 12:35:53.192576 kubelet[2379]: E0321 12:35:53.192554 2379 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.85:6443: connect: connection refused Mar 21 12:35:53.204572 kubelet[2379]: I0321 12:35:53.204534 2379 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 21 12:35:53.206261 kubelet[2379]: I0321 12:35:53.206214 2379 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 21 12:35:53.206439 kubelet[2379]: I0321 12:35:53.206245 2379 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 21 12:35:53.206850 kubelet[2379]: I0321 12:35:53.206827 2379 topology_manager.go:138] "Creating topology manager with none policy" Mar 21 
12:35:53.206850 kubelet[2379]: I0321 12:35:53.206844 2379 container_manager_linux.go:301] "Creating device plugin manager" Mar 21 12:35:53.207003 kubelet[2379]: I0321 12:35:53.206983 2379 state_mem.go:36] "Initialized new in-memory state store" Mar 21 12:35:53.207608 kubelet[2379]: I0321 12:35:53.207578 2379 kubelet.go:400] "Attempting to sync node with API server" Mar 21 12:35:53.207608 kubelet[2379]: I0321 12:35:53.207594 2379 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 21 12:35:53.207709 kubelet[2379]: I0321 12:35:53.207614 2379 kubelet.go:312] "Adding apiserver pod source" Mar 21 12:35:53.207709 kubelet[2379]: I0321 12:35:53.207629 2379 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 21 12:35:53.210774 kubelet[2379]: W0321 12:35:53.210724 2379 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 21 12:35:53.210859 kubelet[2379]: E0321 12:35:53.210784 2379 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 21 12:35:53.210859 kubelet[2379]: W0321 12:35:53.210724 2379 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 21 12:35:53.210859 kubelet[2379]: E0321 12:35:53.210811 2379 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection 
refused Mar 21 12:35:53.211862 kubelet[2379]: I0321 12:35:53.211831 2379 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 21 12:35:53.213054 kubelet[2379]: I0321 12:35:53.213011 2379 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 21 12:35:53.213114 kubelet[2379]: W0321 12:35:53.213091 2379 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 21 12:35:53.213776 kubelet[2379]: I0321 12:35:53.213745 2379 server.go:1264] "Started kubelet" Mar 21 12:35:53.215147 kubelet[2379]: I0321 12:35:53.214525 2379 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 21 12:35:53.215147 kubelet[2379]: I0321 12:35:53.215089 2379 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 21 12:35:53.215147 kubelet[2379]: I0321 12:35:53.215128 2379 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 21 12:35:53.215440 kubelet[2379]: I0321 12:35:53.215416 2379 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 21 12:35:53.216318 kubelet[2379]: I0321 12:35:53.216292 2379 server.go:455] "Adding debug handlers to kubelet server" Mar 21 12:35:53.218243 kubelet[2379]: E0321 12:35:53.217700 2379 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:35:53.218243 kubelet[2379]: I0321 12:35:53.217747 2379 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 21 12:35:53.218243 kubelet[2379]: I0321 12:35:53.217823 2379 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 21 12:35:53.218243 kubelet[2379]: I0321 12:35:53.217912 2379 reconciler.go:26] "Reconciler: start to sync state" Mar 21 12:35:53.218378 kubelet[2379]: W0321 12:35:53.218293 2379 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 21 12:35:53.218378 kubelet[2379]: E0321 12:35:53.218335 2379 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 21 12:35:53.218932 kubelet[2379]: E0321 12:35:53.218902 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms" Mar 21 12:35:53.219400 kubelet[2379]: E0321 12:35:53.218999 2379 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 21 12:35:53.219810 kubelet[2379]: I0321 12:35:53.219784 2379 factory.go:221] Registration of the systemd container factory successfully Mar 21 12:35:53.219924 kubelet[2379]: I0321 12:35:53.219894 2379 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 21 12:35:53.220294 kubelet[2379]: E0321 12:35:53.220095 2379 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ed19735e980b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-21 12:35:53.213722808 +0000 UTC m=+0.306492122,LastTimestamp:2025-03-21 12:35:53.213722808 +0000 UTC m=+0.306492122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 21 12:35:53.220984 kubelet[2379]: I0321 12:35:53.220966 2379 factory.go:221] Registration of the containerd container factory successfully Mar 21 12:35:53.237495 kubelet[2379]: I0321 12:35:53.237449 2379 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 21 12:35:53.237495 kubelet[2379]: I0321 12:35:53.237477 2379 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 21 12:35:53.237495 kubelet[2379]: I0321 12:35:53.237506 2379 state_mem.go:36] "Initialized new in-memory state store" Mar 21 12:35:53.243560 kubelet[2379]: I0321 12:35:53.243510 2379 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 21 12:35:53.245140 kubelet[2379]: I0321 12:35:53.245112 2379 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 21 12:35:53.245207 kubelet[2379]: I0321 12:35:53.245152 2379 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 21 12:35:53.245207 kubelet[2379]: I0321 12:35:53.245179 2379 kubelet.go:2337] "Starting kubelet main sync loop" Mar 21 12:35:53.245273 kubelet[2379]: E0321 12:35:53.245237 2379 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 21 12:35:53.246408 kubelet[2379]: W0321 12:35:53.246372 2379 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 21 12:35:53.246491 kubelet[2379]: E0321 12:35:53.246413 2379 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused Mar 21 12:35:53.319603 kubelet[2379]: I0321 12:35:53.319538 2379 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 21 12:35:53.319916 kubelet[2379]: E0321 12:35:53.319890 2379 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 21 12:35:53.346126 kubelet[2379]: E0321 12:35:53.346076 2379 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 21 12:35:53.419821 kubelet[2379]: E0321 12:35:53.419762 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms" Mar 21 12:35:53.486752 kubelet[2379]: I0321 12:35:53.486589 2379 policy_none.go:49] "None policy: Start" Mar 21 12:35:53.487443 kubelet[2379]: I0321 12:35:53.487414 2379 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 21 12:35:53.487523 kubelet[2379]: I0321 12:35:53.487448 2379 state_mem.go:35] "Initializing new in-memory state store" Mar 21 12:35:53.496687 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 21 12:35:53.511243 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 21 12:35:53.514307 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 21 12:35:53.521181 kubelet[2379]: I0321 12:35:53.521157 2379 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 21 12:35:53.521527 kubelet[2379]: E0321 12:35:53.521493 2379 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 21 12:35:53.524071 kubelet[2379]: I0321 12:35:53.524044 2379 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 21 12:35:53.524341 kubelet[2379]: I0321 12:35:53.524302 2379 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 21 12:35:53.524469 kubelet[2379]: I0321 12:35:53.524448 2379 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 21 12:35:53.525346 kubelet[2379]: E0321 12:35:53.525324 2379 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 21 12:35:53.546823 
kubelet[2379]: I0321 12:35:53.546723 2379 topology_manager.go:215] "Topology Admit Handler" podUID="7680fdada41873264f9d1cf14d8f8646" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 21 12:35:53.547924 kubelet[2379]: I0321 12:35:53.547900 2379 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 21 12:35:53.548671 kubelet[2379]: I0321 12:35:53.548641 2379 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 21 12:35:53.555106 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. Mar 21 12:35:53.596219 systemd[1]: Created slice kubepods-burstable-pod7680fdada41873264f9d1cf14d8f8646.slice - libcontainer container kubepods-burstable-pod7680fdada41873264f9d1cf14d8f8646.slice. Mar 21 12:35:53.599802 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. 
Mar 21 12:35:53.620931 kubelet[2379]: I0321 12:35:53.620849 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7680fdada41873264f9d1cf14d8f8646-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7680fdada41873264f9d1cf14d8f8646\") " pod="kube-system/kube-apiserver-localhost" Mar 21 12:35:53.620931 kubelet[2379]: I0321 12:35:53.620909 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7680fdada41873264f9d1cf14d8f8646-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7680fdada41873264f9d1cf14d8f8646\") " pod="kube-system/kube-apiserver-localhost" Mar 21 12:35:53.620931 kubelet[2379]: I0321 12:35:53.620939 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:35:53.620931 kubelet[2379]: I0321 12:35:53.620960 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:35:53.621284 kubelet[2379]: I0321 12:35:53.620982 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" 
Mar 21 12:35:53.621284 kubelet[2379]: I0321 12:35:53.621006 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 21 12:35:53.621284 kubelet[2379]: I0321 12:35:53.621049 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7680fdada41873264f9d1cf14d8f8646-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7680fdada41873264f9d1cf14d8f8646\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:35:53.621284 kubelet[2379]: I0321 12:35:53.621095 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:35:53.621284 kubelet[2379]: I0321 12:35:53.621123 2379 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:35:53.820579 kubelet[2379]: E0321 12:35:53.820393 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms"
Mar 21 12:35:53.892925 kubelet[2379]: E0321 12:35:53.892858 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:53.893728 containerd[1512]: time="2025-03-21T12:35:53.893679295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:53.898799 kubelet[2379]: E0321 12:35:53.898777 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:53.899121 containerd[1512]: time="2025-03-21T12:35:53.899096889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7680fdada41873264f9d1cf14d8f8646,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:53.902407 kubelet[2379]: E0321 12:35:53.902378 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:53.902687 containerd[1512]: time="2025-03-21T12:35:53.902634485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}"
Mar 21 12:35:53.923123 kubelet[2379]: I0321 12:35:53.923074 2379 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:35:53.923395 kubelet[2379]: E0321 12:35:53.923365 2379 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost"
Mar 21 12:35:54.155776 kubelet[2379]: W0321 12:35:54.155597 2379 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Mar 21 12:35:54.155776 kubelet[2379]: E0321 12:35:54.155670 2379 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Mar 21 12:35:54.165248 kubelet[2379]: W0321 12:35:54.165165 2379 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Mar 21 12:35:54.165248 kubelet[2379]: E0321 12:35:54.165248 2379 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Mar 21 12:35:54.263226 kubelet[2379]: W0321 12:35:54.263134 2379 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Mar 21 12:35:54.263226 kubelet[2379]: E0321 12:35:54.263200 2379 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Mar 21 12:35:54.621514 kubelet[2379]: E0321 12:35:54.621435 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s"
Mar 21 12:35:54.627151 kubelet[2379]: W0321 12:35:54.627017 2379 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Mar 21 12:35:54.627151 kubelet[2379]: E0321 12:35:54.627139 2379 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Mar 21 12:35:54.725666 kubelet[2379]: I0321 12:35:54.725591 2379 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:35:54.726220 kubelet[2379]: E0321 12:35:54.726150 2379 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost"
Mar 21 12:35:55.328960 kubelet[2379]: E0321 12:35:55.328902 2379 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.85:6443: connect: connection refused
Mar 21 12:35:55.504467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707227079.mount: Deactivated successfully.
Mar 21 12:35:55.509802 containerd[1512]: time="2025-03-21T12:35:55.509735572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:55.512626 containerd[1512]: time="2025-03-21T12:35:55.512566783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 21 12:35:55.513744 containerd[1512]: time="2025-03-21T12:35:55.513696222Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:55.515636 containerd[1512]: time="2025-03-21T12:35:55.515584114Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:55.516491 containerd[1512]: time="2025-03-21T12:35:55.516428388Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 21 12:35:55.517478 containerd[1512]: time="2025-03-21T12:35:55.517421321Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:55.518627 containerd[1512]: time="2025-03-21T12:35:55.518532185Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 21 12:35:55.519790 containerd[1512]: time="2025-03-21T12:35:55.519747715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:35:55.520782 containerd[1512]: time="2025-03-21T12:35:55.520737482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.623971426s"
Mar 21 12:35:55.523456 containerd[1512]: time="2025-03-21T12:35:55.523397202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.621860916s"
Mar 21 12:35:55.524200 containerd[1512]: time="2025-03-21T12:35:55.524164371Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.616050717s"
Mar 21 12:35:55.552786 containerd[1512]: time="2025-03-21T12:35:55.552733639Z" level=info msg="connecting to shim 31455186f0c091cfac75e04506a6d6128d2b67565a6eeb01c2d879c807196507" address="unix:///run/containerd/s/edd0493cd5f29666cc279247ba4da7a09069f1afeb0c9ee75f7dcf106014acb1" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:35:55.557332 containerd[1512]: time="2025-03-21T12:35:55.557162729Z" level=info msg="connecting to shim 90de469eff7f07a9c58355d3c5d9d3afafaca54a1165cb204f3698f25fd72662" address="unix:///run/containerd/s/42247601b74d810e428ea86ba8463ad3eccd0bdd22504e72bd015047e50d4c58" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:35:55.564890 containerd[1512]: time="2025-03-21T12:35:55.564826466Z" level=info msg="connecting to shim e2c88d9114f403cb722d0e53b172ec6bc8f6d67295b7456c59d9085d56b28664" address="unix:///run/containerd/s/3bb7636cf554e2afb70c00db4d4cdd3e95f222bc4097dff4d4852f3d5cc08abd" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:35:55.586308 systemd[1]: Started cri-containerd-31455186f0c091cfac75e04506a6d6128d2b67565a6eeb01c2d879c807196507.scope - libcontainer container 31455186f0c091cfac75e04506a6d6128d2b67565a6eeb01c2d879c807196507.
Mar 21 12:35:55.591770 systemd[1]: Started cri-containerd-90de469eff7f07a9c58355d3c5d9d3afafaca54a1165cb204f3698f25fd72662.scope - libcontainer container 90de469eff7f07a9c58355d3c5d9d3afafaca54a1165cb204f3698f25fd72662.
Mar 21 12:35:55.593844 systemd[1]: Started cri-containerd-e2c88d9114f403cb722d0e53b172ec6bc8f6d67295b7456c59d9085d56b28664.scope - libcontainer container e2c88d9114f403cb722d0e53b172ec6bc8f6d67295b7456c59d9085d56b28664.
Mar 21 12:35:55.637121 containerd[1512]: time="2025-03-21T12:35:55.637076322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"31455186f0c091cfac75e04506a6d6128d2b67565a6eeb01c2d879c807196507\""
Mar 21 12:35:55.640137 kubelet[2379]: E0321 12:35:55.640094 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:55.644259 containerd[1512]: time="2025-03-21T12:35:55.644215997Z" level=info msg="CreateContainer within sandbox \"31455186f0c091cfac75e04506a6d6128d2b67565a6eeb01c2d879c807196507\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 21 12:35:55.645951 containerd[1512]: time="2025-03-21T12:35:55.645918601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7680fdada41873264f9d1cf14d8f8646,Namespace:kube-system,Attempt:0,} returns sandbox id \"90de469eff7f07a9c58355d3c5d9d3afafaca54a1165cb204f3698f25fd72662\""
Mar 21 12:35:55.646529 kubelet[2379]: E0321 12:35:55.646497 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:55.648908 containerd[1512]: time="2025-03-21T12:35:55.648877512Z" level=info msg="CreateContainer within sandbox \"90de469eff7f07a9c58355d3c5d9d3afafaca54a1165cb204f3698f25fd72662\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 21 12:35:55.655347 containerd[1512]: time="2025-03-21T12:35:55.655307225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2c88d9114f403cb722d0e53b172ec6bc8f6d67295b7456c59d9085d56b28664\""
Mar 21 12:35:55.655970 kubelet[2379]: E0321 12:35:55.655934 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:55.657498 containerd[1512]: time="2025-03-21T12:35:55.657470233Z" level=info msg="CreateContainer within sandbox \"e2c88d9114f403cb722d0e53b172ec6bc8f6d67295b7456c59d9085d56b28664\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 21 12:35:55.659105 containerd[1512]: time="2025-03-21T12:35:55.659065425Z" level=info msg="Container baf67072604a83985ff374b78ad568b88df85c5dc7c017cf7868711c294cf60c: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:55.662094 containerd[1512]: time="2025-03-21T12:35:55.662060745Z" level=info msg="Container 530726f5db9c65cfc0051270b5be74fb79a84386c1f8b2649d68506645ada7ed: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:55.668741 containerd[1512]: time="2025-03-21T12:35:55.668701092Z" level=info msg="CreateContainer within sandbox \"31455186f0c091cfac75e04506a6d6128d2b67565a6eeb01c2d879c807196507\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"baf67072604a83985ff374b78ad568b88df85c5dc7c017cf7868711c294cf60c\""
Mar 21 12:35:55.669443 containerd[1512]: time="2025-03-21T12:35:55.669398331Z" level=info msg="StartContainer for \"baf67072604a83985ff374b78ad568b88df85c5dc7c017cf7868711c294cf60c\""
Mar 21 12:35:55.670484 containerd[1512]: time="2025-03-21T12:35:55.670459802Z" level=info msg="connecting to shim baf67072604a83985ff374b78ad568b88df85c5dc7c017cf7868711c294cf60c" address="unix:///run/containerd/s/edd0493cd5f29666cc279247ba4da7a09069f1afeb0c9ee75f7dcf106014acb1" protocol=ttrpc version=3
Mar 21 12:35:55.671099 containerd[1512]: time="2025-03-21T12:35:55.671058635Z" level=info msg="CreateContainer within sandbox \"90de469eff7f07a9c58355d3c5d9d3afafaca54a1165cb204f3698f25fd72662\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"530726f5db9c65cfc0051270b5be74fb79a84386c1f8b2649d68506645ada7ed\""
Mar 21 12:35:55.671929 containerd[1512]: time="2025-03-21T12:35:55.671444038Z" level=info msg="StartContainer for \"530726f5db9c65cfc0051270b5be74fb79a84386c1f8b2649d68506645ada7ed\""
Mar 21 12:35:55.672402 containerd[1512]: time="2025-03-21T12:35:55.672365948Z" level=info msg="connecting to shim 530726f5db9c65cfc0051270b5be74fb79a84386c1f8b2649d68506645ada7ed" address="unix:///run/containerd/s/42247601b74d810e428ea86ba8463ad3eccd0bdd22504e72bd015047e50d4c58" protocol=ttrpc version=3
Mar 21 12:35:55.677302 containerd[1512]: time="2025-03-21T12:35:55.677250933Z" level=info msg="Container 7a73aff041a8a08cd0e3d6aabcb46b93078703886ec044f7ccc1add45300c70c: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:35:55.686468 containerd[1512]: time="2025-03-21T12:35:55.686419023Z" level=info msg="CreateContainer within sandbox \"e2c88d9114f403cb722d0e53b172ec6bc8f6d67295b7456c59d9085d56b28664\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7a73aff041a8a08cd0e3d6aabcb46b93078703886ec044f7ccc1add45300c70c\""
Mar 21 12:35:55.687198 containerd[1512]: time="2025-03-21T12:35:55.687157678Z" level=info msg="StartContainer for \"7a73aff041a8a08cd0e3d6aabcb46b93078703886ec044f7ccc1add45300c70c\""
Mar 21 12:35:55.688311 containerd[1512]: time="2025-03-21T12:35:55.688276347Z" level=info msg="connecting to shim 7a73aff041a8a08cd0e3d6aabcb46b93078703886ec044f7ccc1add45300c70c" address="unix:///run/containerd/s/3bb7636cf554e2afb70c00db4d4cdd3e95f222bc4097dff4d4852f3d5cc08abd" protocol=ttrpc version=3
Mar 21 12:35:55.696193 systemd[1]: Started cri-containerd-530726f5db9c65cfc0051270b5be74fb79a84386c1f8b2649d68506645ada7ed.scope - libcontainer container 530726f5db9c65cfc0051270b5be74fb79a84386c1f8b2649d68506645ada7ed.
Mar 21 12:35:55.697816 systemd[1]: Started cri-containerd-baf67072604a83985ff374b78ad568b88df85c5dc7c017cf7868711c294cf60c.scope - libcontainer container baf67072604a83985ff374b78ad568b88df85c5dc7c017cf7868711c294cf60c.
Mar 21 12:35:55.712169 systemd[1]: Started cri-containerd-7a73aff041a8a08cd0e3d6aabcb46b93078703886ec044f7ccc1add45300c70c.scope - libcontainer container 7a73aff041a8a08cd0e3d6aabcb46b93078703886ec044f7ccc1add45300c70c.
Mar 21 12:35:55.754663 containerd[1512]: time="2025-03-21T12:35:55.754619411Z" level=info msg="StartContainer for \"530726f5db9c65cfc0051270b5be74fb79a84386c1f8b2649d68506645ada7ed\" returns successfully"
Mar 21 12:35:55.766142 containerd[1512]: time="2025-03-21T12:35:55.766014770Z" level=info msg="StartContainer for \"baf67072604a83985ff374b78ad568b88df85c5dc7c017cf7868711c294cf60c\" returns successfully"
Mar 21 12:35:55.780227 containerd[1512]: time="2025-03-21T12:35:55.780123759Z" level=info msg="StartContainer for \"7a73aff041a8a08cd0e3d6aabcb46b93078703886ec044f7ccc1add45300c70c\" returns successfully"
Mar 21 12:35:56.255911 kubelet[2379]: E0321 12:35:56.255860 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:56.260210 kubelet[2379]: E0321 12:35:56.260166 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:56.260643 kubelet[2379]: E0321 12:35:56.260617 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:56.329641 kubelet[2379]: I0321 12:35:56.329049 2379 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:35:57.262832 kubelet[2379]: E0321 12:35:57.262772 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:57.310749 kubelet[2379]: E0321 12:35:57.309284 2379 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 21 12:35:57.493726 kubelet[2379]: I0321 12:35:57.493673 2379 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 21 12:35:57.669622 kubelet[2379]: E0321 12:35:57.669457 2379 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 21 12:35:57.669773 kubelet[2379]: E0321 12:35:57.669754 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:35:58.214700 kubelet[2379]: I0321 12:35:58.214640 2379 apiserver.go:52] "Watching apiserver"
Mar 21 12:35:58.218849 kubelet[2379]: I0321 12:35:58.218798 2379 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 21 12:35:59.837316 systemd[1]: Reload requested from client PID 2654 ('systemctl') (unit session-7.scope)...
Mar 21 12:35:59.837342 systemd[1]: Reloading...
Mar 21 12:35:59.931142 zram_generator::config[2699]: No configuration found.
Mar 21 12:35:59.945798 kubelet[2379]: E0321 12:35:59.945718 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:00.058290 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:36:00.183649 systemd[1]: Reloading finished in 345 ms.
Mar 21 12:36:00.212471 kubelet[2379]: I0321 12:36:00.212334 2379 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 21 12:36:00.212503 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:36:00.232704 systemd[1]: kubelet.service: Deactivated successfully.
Mar 21 12:36:00.233157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:36:00.233235 systemd[1]: kubelet.service: Consumed 839ms CPU time, 117.1M memory peak.
Mar 21 12:36:00.236133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:36:00.459612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:36:00.469414 (kubelet)[2743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 21 12:36:00.515719 kubelet[2743]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 12:36:00.515719 kubelet[2743]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 21 12:36:00.515719 kubelet[2743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 12:36:00.516302 kubelet[2743]: I0321 12:36:00.515759 2743 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 21 12:36:00.521031 kubelet[2743]: I0321 12:36:00.520996 2743 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 21 12:36:00.521031 kubelet[2743]: I0321 12:36:00.521016 2743 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 21 12:36:00.521205 kubelet[2743]: I0321 12:36:00.521179 2743 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 21 12:36:00.522469 kubelet[2743]: I0321 12:36:00.522443 2743 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 21 12:36:00.523581 kubelet[2743]: I0321 12:36:00.523538 2743 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 21 12:36:00.535174 kubelet[2743]: I0321 12:36:00.535135 2743 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 21 12:36:00.535419 kubelet[2743]: I0321 12:36:00.535378 2743 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 21 12:36:00.535610 kubelet[2743]: I0321 12:36:00.535412 2743 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 21 12:36:00.535740 kubelet[2743]: I0321 12:36:00.535611 2743 topology_manager.go:138] "Creating topology manager with none policy"
Mar 21 12:36:00.535740 kubelet[2743]: I0321 12:36:00.535623 2743 container_manager_linux.go:301] "Creating device plugin manager"
Mar 21 12:36:00.535740 kubelet[2743]: I0321 12:36:00.535677 2743 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:36:00.535821 kubelet[2743]: I0321 12:36:00.535772 2743 kubelet.go:400] "Attempting to sync node with API server"
Mar 21 12:36:00.535821 kubelet[2743]: I0321 12:36:00.535782 2743 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 21 12:36:00.535821 kubelet[2743]: I0321 12:36:00.535801 2743 kubelet.go:312] "Adding apiserver pod source"
Mar 21 12:36:00.535821 kubelet[2743]: I0321 12:36:00.535811 2743 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 21 12:36:00.536634 kubelet[2743]: I0321 12:36:00.536415 2743 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 21 12:36:00.536845 kubelet[2743]: I0321 12:36:00.536812 2743 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 21 12:36:00.542854 kubelet[2743]: I0321 12:36:00.537612 2743 server.go:1264] "Started kubelet"
Mar 21 12:36:00.542854 kubelet[2743]: I0321 12:36:00.537888 2743 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 21 12:36:00.542854 kubelet[2743]: I0321 12:36:00.538105 2743 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 21 12:36:00.542854 kubelet[2743]: I0321 12:36:00.538779 2743 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 21 12:36:00.542854 kubelet[2743]: I0321 12:36:00.538974 2743 server.go:455] "Adding debug handlers to kubelet server"
Mar 21 12:36:00.549272 kubelet[2743]: I0321 12:36:00.548324 2743 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 21 12:36:00.549661 kubelet[2743]: E0321 12:36:00.549628 2743 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 21 12:36:00.551684 kubelet[2743]: I0321 12:36:00.551670 2743 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 21 12:36:00.551850 kubelet[2743]: I0321 12:36:00.551837 2743 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 21 12:36:00.552009 kubelet[2743]: I0321 12:36:00.551987 2743 factory.go:221] Registration of the systemd container factory successfully
Mar 21 12:36:00.552134 kubelet[2743]: I0321 12:36:00.552121 2743 reconciler.go:26] "Reconciler: start to sync state"
Mar 21 12:36:00.552224 kubelet[2743]: I0321 12:36:00.552115 2743 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 21 12:36:00.554364 kubelet[2743]: I0321 12:36:00.554343 2743 factory.go:221] Registration of the containerd container factory successfully
Mar 21 12:36:00.561604 kubelet[2743]: I0321 12:36:00.561475 2743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 21 12:36:00.563227 kubelet[2743]: I0321 12:36:00.563182 2743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 21 12:36:00.563227 kubelet[2743]: I0321 12:36:00.563218 2743 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 21 12:36:00.563227 kubelet[2743]: I0321 12:36:00.563237 2743 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 21 12:36:00.563481 kubelet[2743]: E0321 12:36:00.563278 2743 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 21 12:36:00.594764 kubelet[2743]: I0321 12:36:00.594735 2743 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 21 12:36:00.594764 kubelet[2743]: I0321 12:36:00.594752 2743 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 21 12:36:00.594764 kubelet[2743]: I0321 12:36:00.594771 2743 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:36:00.594941 kubelet[2743]: I0321 12:36:00.594915 2743 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 21 12:36:00.594941 kubelet[2743]: I0321 12:36:00.594925 2743 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 21 12:36:00.595002 kubelet[2743]: I0321 12:36:00.594943 2743 policy_none.go:49] "None policy: Start"
Mar 21 12:36:00.595427 kubelet[2743]: I0321 12:36:00.595389 2743 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 21 12:36:00.595427 kubelet[2743]: I0321 12:36:00.595411 2743 state_mem.go:35] "Initializing new in-memory state store"
Mar 21 12:36:00.595605 kubelet[2743]: I0321 12:36:00.595530 2743 state_mem.go:75] "Updated machine memory state"
Mar 21 12:36:00.600143 kubelet[2743]: I0321 12:36:00.599991 2743 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 21 12:36:00.600451 kubelet[2743]: I0321 12:36:00.600223 2743 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 21 12:36:00.600451 kubelet[2743]: I0321 12:36:00.600397 2743 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 21 12:36:00.664057 kubelet[2743]: I0321 12:36:00.663978 2743 topology_manager.go:215] "Topology Admit Handler" podUID="7680fdada41873264f9d1cf14d8f8646" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 21 12:36:00.664237 kubelet[2743]: I0321 12:36:00.664110 2743 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 21 12:36:00.664237 kubelet[2743]: I0321 12:36:00.664175 2743 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 21 12:36:00.672099 kubelet[2743]: E0321 12:36:00.672045 2743 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:36:00.706783 kubelet[2743]: I0321 12:36:00.706726 2743 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:36:00.713210 kubelet[2743]: I0321 12:36:00.713096 2743 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Mar 21 12:36:00.713303 kubelet[2743]: I0321 12:36:00.713222 2743 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 21 12:36:00.752910 kubelet[2743]: I0321 12:36:00.752839 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7680fdada41873264f9d1cf14d8f8646-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7680fdada41873264f9d1cf14d8f8646\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:36:00.853502 kubelet[2743]: I0321 12:36:00.853405 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:36:00.853502 kubelet[2743]: I0321 12:36:00.853460 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7680fdada41873264f9d1cf14d8f8646-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7680fdada41873264f9d1cf14d8f8646\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:36:00.853502 kubelet[2743]: I0321 12:36:00.853497 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7680fdada41873264f9d1cf14d8f8646-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7680fdada41873264f9d1cf14d8f8646\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:36:00.853502 kubelet[2743]: I0321 12:36:00.853517 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:36:00.853845 kubelet[2743]: I0321 12:36:00.853586 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:36:00.853845 kubelet[2743]: I0321 12:36:00.853643 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName:
\"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:36:00.853845 kubelet[2743]: I0321 12:36:00.853668 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:36:00.853845 kubelet[2743]: I0321 12:36:00.853696 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 21 12:36:00.973890 kubelet[2743]: E0321 12:36:00.973657 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:00.973890 kubelet[2743]: E0321 12:36:00.973722 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:00.973890 kubelet[2743]: E0321 12:36:00.973796 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:00.992592 sudo[2778]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 21 12:36:00.993137 sudo[2778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 21 
12:36:01.503975 sudo[2778]: pam_unix(sudo:session): session closed for user root Mar 21 12:36:01.536912 kubelet[2743]: I0321 12:36:01.536846 2743 apiserver.go:52] "Watching apiserver" Mar 21 12:36:01.552830 kubelet[2743]: I0321 12:36:01.552775 2743 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 21 12:36:01.575258 kubelet[2743]: I0321 12:36:01.575177 2743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.575131187 podStartE2EDuration="1.575131187s" podCreationTimestamp="2025-03-21 12:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:36:01.573111558 +0000 UTC m=+1.099123245" watchObservedRunningTime="2025-03-21 12:36:01.575131187 +0000 UTC m=+1.101142874" Mar 21 12:36:01.579335 kubelet[2743]: E0321 12:36:01.579291 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:01.579912 kubelet[2743]: E0321 12:36:01.579618 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:01.580359 kubelet[2743]: E0321 12:36:01.580323 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:01.585503 kubelet[2743]: I0321 12:36:01.585413 2743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.5853825390000003 podStartE2EDuration="2.585382539s" podCreationTimestamp="2025-03-21 12:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:36:01.58297902 +0000 UTC m=+1.108990707" watchObservedRunningTime="2025-03-21 12:36:01.585382539 +0000 UTC m=+1.111394226" Mar 21 12:36:01.736640 kubelet[2743]: I0321 12:36:01.736536 2743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.736513301 podStartE2EDuration="1.736513301s" podCreationTimestamp="2025-03-21 12:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:36:01.598973978 +0000 UTC m=+1.124985665" watchObservedRunningTime="2025-03-21 12:36:01.736513301 +0000 UTC m=+1.262524988" Mar 21 12:36:02.580895 kubelet[2743]: E0321 12:36:02.580860 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:02.917552 sudo[1713]: pam_unix(sudo:session): session closed for user root Mar 21 12:36:03.132728 sshd[1712]: Connection closed by 10.0.0.1 port 54376 Mar 21 12:36:03.133421 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:03.138685 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:54376.service: Deactivated successfully. Mar 21 12:36:03.141405 systemd[1]: session-7.scope: Deactivated successfully. Mar 21 12:36:03.141654 systemd[1]: session-7.scope: Consumed 5.588s CPU time, 273.8M memory peak. Mar 21 12:36:03.143202 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Mar 21 12:36:03.144246 systemd-logind[1496]: Removed session 7. 
Mar 21 12:36:03.147954 kubelet[2743]: E0321 12:36:03.147919 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:07.049130 kubelet[2743]: E0321 12:36:07.049007 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:07.587347 kubelet[2743]: E0321 12:36:07.587310 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:10.620924 kubelet[2743]: E0321 12:36:10.620861 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:11.592508 kubelet[2743]: E0321 12:36:11.592458 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:13.152381 kubelet[2743]: E0321 12:36:13.152343 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:14.231720 kubelet[2743]: I0321 12:36:14.231632 2743 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 21 12:36:14.232259 kubelet[2743]: I0321 12:36:14.232224 2743 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 21 12:36:14.232289 containerd[1512]: time="2025-03-21T12:36:14.231992739Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 21 12:36:15.076450 kubelet[2743]: I0321 12:36:15.076371 2743 topology_manager.go:215] "Topology Admit Handler" podUID="880189ce-1f32-4083-bc8a-d6b79a21cc63" podNamespace="kube-system" podName="kube-proxy-pct2m" Mar 21 12:36:15.088296 systemd[1]: Created slice kubepods-besteffort-pod880189ce_1f32_4083_bc8a_d6b79a21cc63.slice - libcontainer container kubepods-besteffort-pod880189ce_1f32_4083_bc8a_d6b79a21cc63.slice. Mar 21 12:36:15.092912 kubelet[2743]: I0321 12:36:15.088304 2743 topology_manager.go:215] "Topology Admit Handler" podUID="07a7801f-9180-44e0-987e-7943aebb157b" podNamespace="kube-system" podName="cilium-m86vw" Mar 21 12:36:15.106606 systemd[1]: Created slice kubepods-burstable-pod07a7801f_9180_44e0_987e_7943aebb157b.slice - libcontainer container kubepods-burstable-pod07a7801f_9180_44e0_987e_7943aebb157b.slice. Mar 21 12:36:15.229842 kubelet[2743]: I0321 12:36:15.229753 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-hostproc\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.229842 kubelet[2743]: I0321 12:36:15.229811 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cni-path\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.229842 kubelet[2743]: I0321 12:36:15.229826 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/07a7801f-9180-44e0-987e-7943aebb157b-hubble-tls\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.229842 kubelet[2743]: I0321 12:36:15.229845 2743 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/880189ce-1f32-4083-bc8a-d6b79a21cc63-lib-modules\") pod \"kube-proxy-pct2m\" (UID: \"880189ce-1f32-4083-bc8a-d6b79a21cc63\") " pod="kube-system/kube-proxy-pct2m" Mar 21 12:36:15.229842 kubelet[2743]: I0321 12:36:15.229860 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cilium-run\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230159 kubelet[2743]: I0321 12:36:15.229876 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cilium-cgroup\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230159 kubelet[2743]: I0321 12:36:15.229891 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-xtables-lock\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230159 kubelet[2743]: I0321 12:36:15.229907 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj2wh\" (UniqueName: \"kubernetes.io/projected/07a7801f-9180-44e0-987e-7943aebb157b-kube-api-access-cj2wh\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230159 kubelet[2743]: I0321 12:36:15.229924 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-host-proc-sys-net\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230159 kubelet[2743]: I0321 12:36:15.230041 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-host-proc-sys-kernel\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230348 kubelet[2743]: I0321 12:36:15.230120 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/880189ce-1f32-4083-bc8a-d6b79a21cc63-xtables-lock\") pod \"kube-proxy-pct2m\" (UID: \"880189ce-1f32-4083-bc8a-d6b79a21cc63\") " pod="kube-system/kube-proxy-pct2m" Mar 21 12:36:15.230348 kubelet[2743]: I0321 12:36:15.230144 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-etc-cni-netd\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230348 kubelet[2743]: I0321 12:36:15.230166 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07a7801f-9180-44e0-987e-7943aebb157b-cilium-config-path\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230348 kubelet[2743]: I0321 12:36:15.230195 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07a7801f-9180-44e0-987e-7943aebb157b-clustermesh-secrets\") 
pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230348 kubelet[2743]: I0321 12:36:15.230260 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-bpf-maps\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230348 kubelet[2743]: I0321 12:36:15.230318 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-lib-modules\") pod \"cilium-m86vw\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") " pod="kube-system/cilium-m86vw" Mar 21 12:36:15.230501 kubelet[2743]: I0321 12:36:15.230362 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/880189ce-1f32-4083-bc8a-d6b79a21cc63-kube-proxy\") pod \"kube-proxy-pct2m\" (UID: \"880189ce-1f32-4083-bc8a-d6b79a21cc63\") " pod="kube-system/kube-proxy-pct2m" Mar 21 12:36:15.230501 kubelet[2743]: I0321 12:36:15.230379 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwbhw\" (UniqueName: \"kubernetes.io/projected/880189ce-1f32-4083-bc8a-d6b79a21cc63-kube-api-access-rwbhw\") pod \"kube-proxy-pct2m\" (UID: \"880189ce-1f32-4083-bc8a-d6b79a21cc63\") " pod="kube-system/kube-proxy-pct2m" Mar 21 12:36:15.303777 kubelet[2743]: I0321 12:36:15.303545 2743 topology_manager.go:215] "Topology Admit Handler" podUID="8982cd85-5089-4cb7-8b6e-cb6d6339203c" podNamespace="kube-system" podName="cilium-operator-599987898-d48t6" Mar 21 12:36:15.311570 systemd[1]: Created slice kubepods-besteffort-pod8982cd85_5089_4cb7_8b6e_cb6d6339203c.slice - libcontainer container 
kubepods-besteffort-pod8982cd85_5089_4cb7_8b6e_cb6d6339203c.slice. Mar 21 12:36:15.399882 kubelet[2743]: E0321 12:36:15.399730 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:15.400589 containerd[1512]: time="2025-03-21T12:36:15.400486084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pct2m,Uid:880189ce-1f32-4083-bc8a-d6b79a21cc63,Namespace:kube-system,Attempt:0,}" Mar 21 12:36:15.410583 kubelet[2743]: E0321 12:36:15.410541 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:15.410990 containerd[1512]: time="2025-03-21T12:36:15.410959326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m86vw,Uid:07a7801f-9180-44e0-987e-7943aebb157b,Namespace:kube-system,Attempt:0,}" Mar 21 12:36:15.431706 kubelet[2743]: I0321 12:36:15.431614 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8982cd85-5089-4cb7-8b6e-cb6d6339203c-cilium-config-path\") pod \"cilium-operator-599987898-d48t6\" (UID: \"8982cd85-5089-4cb7-8b6e-cb6d6339203c\") " pod="kube-system/cilium-operator-599987898-d48t6" Mar 21 12:36:15.431706 kubelet[2743]: I0321 12:36:15.431655 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrsv\" (UniqueName: \"kubernetes.io/projected/8982cd85-5089-4cb7-8b6e-cb6d6339203c-kube-api-access-ckrsv\") pod \"cilium-operator-599987898-d48t6\" (UID: \"8982cd85-5089-4cb7-8b6e-cb6d6339203c\") " pod="kube-system/cilium-operator-599987898-d48t6" Mar 21 12:36:15.445699 containerd[1512]: time="2025-03-21T12:36:15.445627127Z" level=info msg="connecting to shim 
5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe" address="unix:///run/containerd/s/664ed2711fa4b79be8ec13dcb9e2ad1651e2709a81f297e8aa6ea4e5e0831696" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:36:15.446078 containerd[1512]: time="2025-03-21T12:36:15.445666612Z" level=info msg="connecting to shim fc8d61a65a873139359fcb2f2153da3cd8374243623a4ab0cb8445a768de0dbe" address="unix:///run/containerd/s/d12876a1d412ada376e34ff51d3f7de51aa5c9194b99bfa146ada057da7854c4" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:36:15.495172 systemd[1]: Started cri-containerd-5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe.scope - libcontainer container 5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe. Mar 21 12:36:15.497481 systemd[1]: Started cri-containerd-fc8d61a65a873139359fcb2f2153da3cd8374243623a4ab0cb8445a768de0dbe.scope - libcontainer container fc8d61a65a873139359fcb2f2153da3cd8374243623a4ab0cb8445a768de0dbe. Mar 21 12:36:15.528528 containerd[1512]: time="2025-03-21T12:36:15.528466332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m86vw,Uid:07a7801f-9180-44e0-987e-7943aebb157b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\"" Mar 21 12:36:15.529411 kubelet[2743]: E0321 12:36:15.529351 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:15.530755 containerd[1512]: time="2025-03-21T12:36:15.530670138Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 21 12:36:15.532257 containerd[1512]: time="2025-03-21T12:36:15.532107020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pct2m,Uid:880189ce-1f32-4083-bc8a-d6b79a21cc63,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"fc8d61a65a873139359fcb2f2153da3cd8374243623a4ab0cb8445a768de0dbe\"" Mar 21 12:36:15.532739 kubelet[2743]: E0321 12:36:15.532685 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:15.534813 containerd[1512]: time="2025-03-21T12:36:15.534762611Z" level=info msg="CreateContainer within sandbox \"fc8d61a65a873139359fcb2f2153da3cd8374243623a4ab0cb8445a768de0dbe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 21 12:36:15.546569 containerd[1512]: time="2025-03-21T12:36:15.546494719Z" level=info msg="Container 7d28f74e4796647c062aa65be250e84f2efbbdd252a07a0b78f2732b6f3a63ad: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:36:15.556896 containerd[1512]: time="2025-03-21T12:36:15.556843326Z" level=info msg="CreateContainer within sandbox \"fc8d61a65a873139359fcb2f2153da3cd8374243623a4ab0cb8445a768de0dbe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d28f74e4796647c062aa65be250e84f2efbbdd252a07a0b78f2732b6f3a63ad\"" Mar 21 12:36:15.557381 containerd[1512]: time="2025-03-21T12:36:15.557351758Z" level=info msg="StartContainer for \"7d28f74e4796647c062aa65be250e84f2efbbdd252a07a0b78f2732b6f3a63ad\"" Mar 21 12:36:15.558884 containerd[1512]: time="2025-03-21T12:36:15.558850628Z" level=info msg="connecting to shim 7d28f74e4796647c062aa65be250e84f2efbbdd252a07a0b78f2732b6f3a63ad" address="unix:///run/containerd/s/d12876a1d412ada376e34ff51d3f7de51aa5c9194b99bfa146ada057da7854c4" protocol=ttrpc version=3 Mar 21 12:36:15.583176 systemd[1]: Started cri-containerd-7d28f74e4796647c062aa65be250e84f2efbbdd252a07a0b78f2732b6f3a63ad.scope - libcontainer container 7d28f74e4796647c062aa65be250e84f2efbbdd252a07a0b78f2732b6f3a63ad. 
Mar 21 12:36:15.613974 kubelet[2743]: E0321 12:36:15.613713 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:15.614277 containerd[1512]: time="2025-03-21T12:36:15.614211294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-d48t6,Uid:8982cd85-5089-4cb7-8b6e-cb6d6339203c,Namespace:kube-system,Attempt:0,}" Mar 21 12:36:15.635108 containerd[1512]: time="2025-03-21T12:36:15.635042620Z" level=info msg="StartContainer for \"7d28f74e4796647c062aa65be250e84f2efbbdd252a07a0b78f2732b6f3a63ad\" returns successfully" Mar 21 12:36:15.638916 containerd[1512]: time="2025-03-21T12:36:15.638859432Z" level=info msg="connecting to shim 0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c" address="unix:///run/containerd/s/40ded9054c1d1e2a83046aeb5ac186056ea5e116a42a33773528145f0ad53e33" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:36:15.668272 systemd[1]: Started cri-containerd-0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c.scope - libcontainer container 0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c. Mar 21 12:36:15.716134 containerd[1512]: time="2025-03-21T12:36:15.716074753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-d48t6,Uid:8982cd85-5089-4cb7-8b6e-cb6d6339203c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c\"" Mar 21 12:36:15.717309 kubelet[2743]: E0321 12:36:15.717265 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:16.535189 update_engine[1501]: I20250321 12:36:16.535097 1501 update_attempter.cc:509] Updating boot flags... 
Mar 21 12:36:16.563102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3069) Mar 21 12:36:16.609101 kubelet[2743]: E0321 12:36:16.608778 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:16.635505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3068) Mar 21 12:36:17.617450 kubelet[2743]: E0321 12:36:17.617404 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:36:20.580154 kubelet[2743]: I0321 12:36:20.580071 2743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pct2m" podStartSLOduration=5.580054404 podStartE2EDuration="5.580054404s" podCreationTimestamp="2025-03-21 12:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:36:16.634882495 +0000 UTC m=+16.160894182" watchObservedRunningTime="2025-03-21 12:36:20.580054404 +0000 UTC m=+20.106066091" Mar 21 12:36:25.128367 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:43514.service - OpenSSH per-connection server daemon (10.0.0.1:43514). Mar 21 12:36:25.182717 sshd[3132]: Accepted publickey for core from 10.0.0.1 port 43514 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA Mar 21 12:36:25.184380 sshd-session[3132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:25.190413 systemd-logind[1496]: New session 8 of user core. Mar 21 12:36:25.197211 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 21 12:36:25.352916 sshd[3134]: Connection closed by 10.0.0.1 port 43514 Mar 21 12:36:25.353267 sshd-session[3132]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:25.356682 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:43514.service: Deactivated successfully. Mar 21 12:36:25.358878 systemd[1]: session-8.scope: Deactivated successfully. Mar 21 12:36:25.360510 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Mar 21 12:36:25.361595 systemd-logind[1496]: Removed session 8. Mar 21 12:36:25.586131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873048608.mount: Deactivated successfully. Mar 21 12:36:27.354941 containerd[1512]: time="2025-03-21T12:36:27.354850437Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:36:27.355906 containerd[1512]: time="2025-03-21T12:36:27.355798764Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 21 12:36:27.356885 containerd[1512]: time="2025-03-21T12:36:27.356844535Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:36:27.358461 containerd[1512]: time="2025-03-21T12:36:27.358423321Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.827698649s" Mar 21 12:36:27.358522 containerd[1512]: time="2025-03-21T12:36:27.358461442Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 21 12:36:27.359540 containerd[1512]: time="2025-03-21T12:36:27.359500701Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 21 12:36:27.362172 containerd[1512]: time="2025-03-21T12:36:27.362146838Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 21 12:36:27.372680 containerd[1512]: time="2025-03-21T12:36:27.372166020Z" level=info msg="Container 083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:36:27.381205 containerd[1512]: time="2025-03-21T12:36:27.381148168Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\"" Mar 21 12:36:27.381623 containerd[1512]: time="2025-03-21T12:36:27.381588267Z" level=info msg="StartContainer for \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\"" Mar 21 12:36:27.382479 containerd[1512]: time="2025-03-21T12:36:27.382454159Z" level=info msg="connecting to shim 083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508" address="unix:///run/containerd/s/664ed2711fa4b79be8ec13dcb9e2ad1651e2709a81f297e8aa6ea4e5e0831696" protocol=ttrpc version=3 Mar 21 12:36:27.407209 systemd[1]: Started cri-containerd-083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508.scope - libcontainer container 083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508. 
Mar 21 12:36:27.460743 systemd[1]: cri-containerd-083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508.scope: Deactivated successfully.
Mar 21 12:36:27.564371 containerd[1512]: time="2025-03-21T12:36:27.462456141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\" id:\"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\" pid:3183 exited_at:{seconds:1742560587 nanos:461799112}"
Mar 21 12:36:27.665762 containerd[1512]: time="2025-03-21T12:36:27.663737637Z" level=info msg="received exit event container_id:\"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\" id:\"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\" pid:3183 exited_at:{seconds:1742560587 nanos:461799112}"
Mar 21 12:36:27.665762 containerd[1512]: time="2025-03-21T12:36:27.665340718Z" level=info msg="StartContainer for \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\" returns successfully"
Mar 21 12:36:27.689831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508-rootfs.mount: Deactivated successfully.
Mar 21 12:36:28.671152 kubelet[2743]: E0321 12:36:28.671111 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:28.674243 containerd[1512]: time="2025-03-21T12:36:28.674190401Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 21 12:36:28.684452 containerd[1512]: time="2025-03-21T12:36:28.684310909Z" level=info msg="Container e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:28.695827 containerd[1512]: time="2025-03-21T12:36:28.695763697Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\""
Mar 21 12:36:28.697047 containerd[1512]: time="2025-03-21T12:36:28.696987282Z" level=info msg="StartContainer for \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\""
Mar 21 12:36:28.697943 containerd[1512]: time="2025-03-21T12:36:28.697917806Z" level=info msg="connecting to shim e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2" address="unix:///run/containerd/s/664ed2711fa4b79be8ec13dcb9e2ad1651e2709a81f297e8aa6ea4e5e0831696" protocol=ttrpc version=3
Mar 21 12:36:28.717156 systemd[1]: Started cri-containerd-e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2.scope - libcontainer container e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2.
Mar 21 12:36:28.747653 containerd[1512]: time="2025-03-21T12:36:28.747608877Z" level=info msg="StartContainer for \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\" returns successfully"
Mar 21 12:36:28.760589 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 21 12:36:28.760850 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:36:28.761072 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:36:28.762800 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:36:28.764628 containerd[1512]: time="2025-03-21T12:36:28.764593167Z" level=info msg="received exit event container_id:\"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\" id:\"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\" pid:3228 exited_at:{seconds:1742560588 nanos:764362202}"
Mar 21 12:36:28.764783 containerd[1512]: time="2025-03-21T12:36:28.764705439Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\" id:\"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\" pid:3228 exited_at:{seconds:1742560588 nanos:764362202}"
Mar 21 12:36:28.765438 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 21 12:36:28.765866 systemd[1]: cri-containerd-e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2.scope: Deactivated successfully.
Mar 21 12:36:28.785950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2-rootfs.mount: Deactivated successfully.
Mar 21 12:36:28.795768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:36:29.675318 kubelet[2743]: E0321 12:36:29.675278 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:29.677111 containerd[1512]: time="2025-03-21T12:36:29.676974039Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 21 12:36:29.693437 containerd[1512]: time="2025-03-21T12:36:29.693393166Z" level=info msg="Container 30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:29.703525 containerd[1512]: time="2025-03-21T12:36:29.703466049Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\""
Mar 21 12:36:29.704055 containerd[1512]: time="2025-03-21T12:36:29.704003000Z" level=info msg="StartContainer for \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\""
Mar 21 12:36:29.705615 containerd[1512]: time="2025-03-21T12:36:29.705582025Z" level=info msg="connecting to shim 30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae" address="unix:///run/containerd/s/664ed2711fa4b79be8ec13dcb9e2ad1651e2709a81f297e8aa6ea4e5e0831696" protocol=ttrpc version=3
Mar 21 12:36:29.732353 systemd[1]: Started cri-containerd-30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae.scope - libcontainer container 30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae.
Mar 21 12:36:29.778052 systemd[1]: cri-containerd-30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae.scope: Deactivated successfully.
Mar 21 12:36:29.778924 containerd[1512]: time="2025-03-21T12:36:29.778440809Z" level=info msg="StartContainer for \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\" returns successfully"
Mar 21 12:36:29.780064 containerd[1512]: time="2025-03-21T12:36:29.780014914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\" id:\"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\" pid:3275 exited_at:{seconds:1742560589 nanos:779768550}"
Mar 21 12:36:29.780164 containerd[1512]: time="2025-03-21T12:36:29.780137164Z" level=info msg="received exit event container_id:\"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\" id:\"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\" pid:3275 exited_at:{seconds:1742560589 nanos:779768550}"
Mar 21 12:36:29.803304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae-rootfs.mount: Deactivated successfully.
Mar 21 12:36:30.368212 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:43520.service - OpenSSH per-connection server daemon (10.0.0.1:43520).
Mar 21 12:36:30.428575 sshd[3303]: Accepted publickey for core from 10.0.0.1 port 43520 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:30.430241 sshd-session[3303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:30.434948 systemd-logind[1496]: New session 9 of user core.
Mar 21 12:36:30.445165 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 21 12:36:30.569190 sshd[3305]: Connection closed by 10.0.0.1 port 43520
Mar 21 12:36:30.569577 sshd-session[3303]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:30.574075 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:43520.service: Deactivated successfully.
Mar 21 12:36:30.576396 systemd[1]: session-9.scope: Deactivated successfully.
Mar 21 12:36:30.577225 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit.
Mar 21 12:36:30.578320 systemd-logind[1496]: Removed session 9.
Mar 21 12:36:30.681287 kubelet[2743]: E0321 12:36:30.681250 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:30.683571 containerd[1512]: time="2025-03-21T12:36:30.683413870Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 21 12:36:30.696980 containerd[1512]: time="2025-03-21T12:36:30.696296239Z" level=info msg="Container a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:30.704671 containerd[1512]: time="2025-03-21T12:36:30.704605656Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\""
Mar 21 12:36:30.705237 containerd[1512]: time="2025-03-21T12:36:30.705188414Z" level=info msg="StartContainer for \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\""
Mar 21 12:36:30.706444 containerd[1512]: time="2025-03-21T12:36:30.706403632Z" level=info msg="connecting to shim a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2" address="unix:///run/containerd/s/664ed2711fa4b79be8ec13dcb9e2ad1651e2709a81f297e8aa6ea4e5e0831696" protocol=ttrpc version=3
Mar 21 12:36:30.732265 systemd[1]: Started cri-containerd-a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2.scope - libcontainer container a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2.
Mar 21 12:36:30.762705 systemd[1]: cri-containerd-a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2.scope: Deactivated successfully.
Mar 21 12:36:30.763314 containerd[1512]: time="2025-03-21T12:36:30.763270609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\" id:\"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\" pid:3330 exited_at:{seconds:1742560590 nanos:762898989}"
Mar 21 12:36:30.766097 containerd[1512]: time="2025-03-21T12:36:30.766001281Z" level=info msg="received exit event container_id:\"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\" id:\"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\" pid:3330 exited_at:{seconds:1742560590 nanos:762898989}"
Mar 21 12:36:30.775527 containerd[1512]: time="2025-03-21T12:36:30.775468338Z" level=info msg="StartContainer for \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\" returns successfully"
Mar 21 12:36:30.789558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2-rootfs.mount: Deactivated successfully.
Mar 21 12:36:31.686550 kubelet[2743]: E0321 12:36:31.686519 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:31.688418 containerd[1512]: time="2025-03-21T12:36:31.688363540Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 21 12:36:31.700901 containerd[1512]: time="2025-03-21T12:36:31.700841573Z" level=info msg="Container a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:31.708479 containerd[1512]: time="2025-03-21T12:36:31.708429238Z" level=info msg="CreateContainer within sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\""
Mar 21 12:36:31.708873 containerd[1512]: time="2025-03-21T12:36:31.708841724Z" level=info msg="StartContainer for \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\""
Mar 21 12:36:31.709734 containerd[1512]: time="2025-03-21T12:36:31.709710620Z" level=info msg="connecting to shim a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da" address="unix:///run/containerd/s/664ed2711fa4b79be8ec13dcb9e2ad1651e2709a81f297e8aa6ea4e5e0831696" protocol=ttrpc version=3
Mar 21 12:36:31.728163 systemd[1]: Started cri-containerd-a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da.scope - libcontainer container a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da.
Mar 21 12:36:31.765880 containerd[1512]: time="2025-03-21T12:36:31.765827940Z" level=info msg="StartContainer for \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" returns successfully"
Mar 21 12:36:31.857358 containerd[1512]: time="2025-03-21T12:36:31.857286085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" id:\"c43e2516d239c70081eedac08ad28f6a5583eed5f356994ab6866858ed348b07\" pid:3399 exited_at:{seconds:1742560591 nanos:856873318}"
Mar 21 12:36:31.959258 kubelet[2743]: I0321 12:36:31.958901 2743 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 21 12:36:32.097193 kubelet[2743]: I0321 12:36:32.097120 2743 topology_manager.go:215] "Topology Admit Handler" podUID="ca2c1752-7c49-4e12-b44e-2d198394c0a8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vw7pf"
Mar 21 12:36:32.099707 kubelet[2743]: I0321 12:36:32.098732 2743 topology_manager.go:215] "Topology Admit Handler" podUID="c60d1832-4fe9-41ac-8a7b-da8e4e1b773d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b2m7w"
Mar 21 12:36:32.109759 systemd[1]: Created slice kubepods-burstable-podca2c1752_7c49_4e12_b44e_2d198394c0a8.slice - libcontainer container kubepods-burstable-podca2c1752_7c49_4e12_b44e_2d198394c0a8.slice.
Mar 21 12:36:32.118278 systemd[1]: Created slice kubepods-burstable-podc60d1832_4fe9_41ac_8a7b_da8e4e1b773d.slice - libcontainer container kubepods-burstable-podc60d1832_4fe9_41ac_8a7b_da8e4e1b773d.slice.
Mar 21 12:36:32.247939 kubelet[2743]: I0321 12:36:32.247679 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psgqf\" (UniqueName: \"kubernetes.io/projected/ca2c1752-7c49-4e12-b44e-2d198394c0a8-kube-api-access-psgqf\") pod \"coredns-7db6d8ff4d-vw7pf\" (UID: \"ca2c1752-7c49-4e12-b44e-2d198394c0a8\") " pod="kube-system/coredns-7db6d8ff4d-vw7pf"
Mar 21 12:36:32.247939 kubelet[2743]: I0321 12:36:32.247734 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c60d1832-4fe9-41ac-8a7b-da8e4e1b773d-config-volume\") pod \"coredns-7db6d8ff4d-b2m7w\" (UID: \"c60d1832-4fe9-41ac-8a7b-da8e4e1b773d\") " pod="kube-system/coredns-7db6d8ff4d-b2m7w"
Mar 21 12:36:32.247939 kubelet[2743]: I0321 12:36:32.247754 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2c1752-7c49-4e12-b44e-2d198394c0a8-config-volume\") pod \"coredns-7db6d8ff4d-vw7pf\" (UID: \"ca2c1752-7c49-4e12-b44e-2d198394c0a8\") " pod="kube-system/coredns-7db6d8ff4d-vw7pf"
Mar 21 12:36:32.247939 kubelet[2743]: I0321 12:36:32.247769 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6524q\" (UniqueName: \"kubernetes.io/projected/c60d1832-4fe9-41ac-8a7b-da8e4e1b773d-kube-api-access-6524q\") pod \"coredns-7db6d8ff4d-b2m7w\" (UID: \"c60d1832-4fe9-41ac-8a7b-da8e4e1b773d\") " pod="kube-system/coredns-7db6d8ff4d-b2m7w"
Mar 21 12:36:32.415529 kubelet[2743]: E0321 12:36:32.415473 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:32.416300 containerd[1512]: time="2025-03-21T12:36:32.416227872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vw7pf,Uid:ca2c1752-7c49-4e12-b44e-2d198394c0a8,Namespace:kube-system,Attempt:0,}"
Mar 21 12:36:32.421730 kubelet[2743]: E0321 12:36:32.421677 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:32.422251 containerd[1512]: time="2025-03-21T12:36:32.422205283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b2m7w,Uid:c60d1832-4fe9-41ac-8a7b-da8e4e1b773d,Namespace:kube-system,Attempt:0,}"
Mar 21 12:36:32.693293 kubelet[2743]: E0321 12:36:32.693258 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:32.708349 kubelet[2743]: I0321 12:36:32.708269 2743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m86vw" podStartSLOduration=5.87907776 podStartE2EDuration="17.708245523s" podCreationTimestamp="2025-03-21 12:36:15 +0000 UTC" firstStartedPulling="2025-03-21 12:36:15.530145394 +0000 UTC m=+15.056157081" lastFinishedPulling="2025-03-21 12:36:27.359313157 +0000 UTC m=+26.885324844" observedRunningTime="2025-03-21 12:36:32.708002766 +0000 UTC m=+32.234014453" watchObservedRunningTime="2025-03-21 12:36:32.708245523 +0000 UTC m=+32.234257210"
Mar 21 12:36:33.696099 kubelet[2743]: E0321 12:36:33.696015 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:34.335950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437158775.mount: Deactivated successfully.
Mar 21 12:36:34.697876 kubelet[2743]: E0321 12:36:34.697820 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:35.586036 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:50422.service - OpenSSH per-connection server daemon (10.0.0.1:50422).
Mar 21 12:36:35.672956 containerd[1512]: time="2025-03-21T12:36:35.672870242Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:36:35.675606 containerd[1512]: time="2025-03-21T12:36:35.675523194Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 21 12:36:35.676879 containerd[1512]: time="2025-03-21T12:36:35.676839280Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:36:35.678509 containerd[1512]: time="2025-03-21T12:36:35.678318973Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.318779449s"
Mar 21 12:36:35.678509 containerd[1512]: time="2025-03-21T12:36:35.678370731Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 21 12:36:35.680383 containerd[1512]: time="2025-03-21T12:36:35.680353310Z" level=info msg="CreateContainer within sandbox \"0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 21 12:36:35.682331 sshd[3507]: Accepted publickey for core from 10.0.0.1 port 50422 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:35.684803 sshd-session[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:35.689525 containerd[1512]: time="2025-03-21T12:36:35.689475189Z" level=info msg="Container 306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:35.690811 systemd-logind[1496]: New session 10 of user core.
Mar 21 12:36:35.697767 containerd[1512]: time="2025-03-21T12:36:35.697739526Z" level=info msg="CreateContainer within sandbox \"0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\""
Mar 21 12:36:35.698664 containerd[1512]: time="2025-03-21T12:36:35.698602119Z" level=info msg="StartContainer for \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\""
Mar 21 12:36:35.700322 containerd[1512]: time="2025-03-21T12:36:35.700097411Z" level=info msg="connecting to shim 306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8" address="unix:///run/containerd/s/40ded9054c1d1e2a83046aeb5ac186056ea5e116a42a33773528145f0ad53e33" protocol=ttrpc version=3
Mar 21 12:36:35.701442 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 21 12:36:35.727418 systemd[1]: Started cri-containerd-306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8.scope - libcontainer container 306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8.
Mar 21 12:36:35.811719 containerd[1512]: time="2025-03-21T12:36:35.811656542Z" level=info msg="StartContainer for \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" returns successfully"
Mar 21 12:36:35.862315 sshd[3519]: Connection closed by 10.0.0.1 port 50422
Mar 21 12:36:35.862594 sshd-session[3507]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:35.867541 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:50422.service: Deactivated successfully.
Mar 21 12:36:35.870814 systemd[1]: session-10.scope: Deactivated successfully.
Mar 21 12:36:35.873109 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit.
Mar 21 12:36:35.877079 systemd-logind[1496]: Removed session 10.
Mar 21 12:36:36.708923 kubelet[2743]: E0321 12:36:36.708875 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:36.717634 kubelet[2743]: I0321 12:36:36.717400 2743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-d48t6" podStartSLOduration=1.7573924939999999 podStartE2EDuration="21.71738085s" podCreationTimestamp="2025-03-21 12:36:15 +0000 UTC" firstStartedPulling="2025-03-21 12:36:15.719168113 +0000 UTC m=+15.245179800" lastFinishedPulling="2025-03-21 12:36:35.679156469 +0000 UTC m=+35.205168156" observedRunningTime="2025-03-21 12:36:36.716751707 +0000 UTC m=+36.242763414" watchObservedRunningTime="2025-03-21 12:36:36.71738085 +0000 UTC m=+36.243392537"
Mar 21 12:36:37.710317 kubelet[2743]: E0321 12:36:37.710275 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:39.812301 systemd-networkd[1450]: cilium_host: Link UP
Mar 21 12:36:39.812476 systemd-networkd[1450]: cilium_net: Link UP
Mar 21 12:36:39.812671 systemd-networkd[1450]: cilium_net: Gained carrier
Mar 21 12:36:39.812866 systemd-networkd[1450]: cilium_host: Gained carrier
Mar 21 12:36:39.881138 systemd-networkd[1450]: cilium_host: Gained IPv6LL
Mar 21 12:36:39.923639 systemd-networkd[1450]: cilium_vxlan: Link UP
Mar 21 12:36:39.923648 systemd-networkd[1450]: cilium_vxlan: Gained carrier
Mar 21 12:36:39.977194 systemd-networkd[1450]: cilium_net: Gained IPv6LL
Mar 21 12:36:40.137060 kernel: NET: Registered PF_ALG protocol family
Mar 21 12:36:40.813096 systemd-networkd[1450]: lxc_health: Link UP
Mar 21 12:36:40.822358 systemd-networkd[1450]: lxc_health: Gained carrier
Mar 21 12:36:40.875831 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:50426.service - OpenSSH per-connection server daemon (10.0.0.1:50426).
Mar 21 12:36:40.927722 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 50426 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:40.929704 sshd-session[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:40.934962 systemd-logind[1496]: New session 11 of user core.
Mar 21 12:36:40.942164 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 21 12:36:40.968044 systemd-networkd[1450]: lxc7d6b8842d86f: Link UP
Mar 21 12:36:40.969053 kernel: eth0: renamed from tmp4a133
Mar 21 12:36:40.986643 kernel: eth0: renamed from tmpfb8c0
Mar 21 12:36:40.993187 systemd-networkd[1450]: lxc03cdcfd3de84: Link UP
Mar 21 12:36:40.993620 systemd-networkd[1450]: lxc7d6b8842d86f: Gained carrier
Mar 21 12:36:40.993993 systemd-networkd[1450]: lxc03cdcfd3de84: Gained carrier
Mar 21 12:36:41.090402 sshd[3916]: Connection closed by 10.0.0.1 port 50426
Mar 21 12:36:41.090730 sshd-session[3914]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:41.102311 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:50426.service: Deactivated successfully.
Mar 21 12:36:41.104675 systemd[1]: session-11.scope: Deactivated successfully.
Mar 21 12:36:41.105646 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit.
Mar 21 12:36:41.108389 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:58260.service - OpenSSH per-connection server daemon (10.0.0.1:58260).
Mar 21 12:36:41.110497 systemd-logind[1496]: Removed session 11.
Mar 21 12:36:41.152189 systemd-networkd[1450]: cilium_vxlan: Gained IPv6LL
Mar 21 12:36:41.159212 sshd[3944]: Accepted publickey for core from 10.0.0.1 port 58260 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:41.160807 sshd-session[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:41.166240 systemd-logind[1496]: New session 12 of user core.
Mar 21 12:36:41.170164 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 21 12:36:41.338091 sshd[3947]: Connection closed by 10.0.0.1 port 58260
Mar 21 12:36:41.340414 sshd-session[3944]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:41.351792 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:58260.service: Deactivated successfully.
Mar 21 12:36:41.355142 systemd[1]: session-12.scope: Deactivated successfully.
Mar 21 12:36:41.356221 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit.
Mar 21 12:36:41.362342 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:58274.service - OpenSSH per-connection server daemon (10.0.0.1:58274).
Mar 21 12:36:41.365817 systemd-logind[1496]: Removed session 12.
Mar 21 12:36:41.410740 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 58274 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:41.412387 sshd-session[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:41.417257 systemd-logind[1496]: New session 13 of user core.
Mar 21 12:36:41.417606 kubelet[2743]: E0321 12:36:41.417579 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:41.423184 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 21 12:36:41.558971 sshd[3964]: Connection closed by 10.0.0.1 port 58274
Mar 21 12:36:41.561373 sshd-session[3959]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:41.565998 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit.
Mar 21 12:36:41.567245 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:58274.service: Deactivated successfully.
Mar 21 12:36:41.570175 systemd[1]: session-13.scope: Deactivated successfully.
Mar 21 12:36:41.571687 systemd-logind[1496]: Removed session 13.
Mar 21 12:36:41.715761 kubelet[2743]: E0321 12:36:41.715716 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:42.433230 systemd-networkd[1450]: lxc_health: Gained IPv6LL
Mar 21 12:36:42.717732 kubelet[2743]: E0321 12:36:42.717441 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:43.008209 systemd-networkd[1450]: lxc7d6b8842d86f: Gained IPv6LL
Mar 21 12:36:43.008558 systemd-networkd[1450]: lxc03cdcfd3de84: Gained IPv6LL
Mar 21 12:36:44.423870 containerd[1512]: time="2025-03-21T12:36:44.423820176Z" level=info msg="connecting to shim 4a1339f545438d8432371915991edb3574de4e91d10971b690c4cbb52deaa58f" address="unix:///run/containerd/s/91916b37213cd903d0be7840043751813fdb6b9a1b0c105fabbb8cbf85224a76" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:36:44.428875 containerd[1512]: time="2025-03-21T12:36:44.428832724Z" level=info msg="connecting to shim fb8c0a813304c8ef4033ab67232cbf8389f42f49a5fe356eda683c21acccd63d" address="unix:///run/containerd/s/e44945892f04fa8ea460fc85b14ce785d8950aed27e30c4a2a72001e38364c0f" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:36:44.451353 systemd[1]: Started cri-containerd-4a1339f545438d8432371915991edb3574de4e91d10971b690c4cbb52deaa58f.scope - libcontainer container 4a1339f545438d8432371915991edb3574de4e91d10971b690c4cbb52deaa58f.
Mar 21 12:36:44.459269 systemd[1]: Started cri-containerd-fb8c0a813304c8ef4033ab67232cbf8389f42f49a5fe356eda683c21acccd63d.scope - libcontainer container fb8c0a813304c8ef4033ab67232cbf8389f42f49a5fe356eda683c21acccd63d.
Mar 21 12:36:44.467092 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 21 12:36:44.473413 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 21 12:36:44.504383 containerd[1512]: time="2025-03-21T12:36:44.504287057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vw7pf,Uid:ca2c1752-7c49-4e12-b44e-2d198394c0a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a1339f545438d8432371915991edb3574de4e91d10971b690c4cbb52deaa58f\""
Mar 21 12:36:44.505095 kubelet[2743]: E0321 12:36:44.505067 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:44.509018 containerd[1512]: time="2025-03-21T12:36:44.508944207Z" level=info msg="CreateContainer within sandbox \"4a1339f545438d8432371915991edb3574de4e91d10971b690c4cbb52deaa58f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 21 12:36:44.511686 containerd[1512]: time="2025-03-21T12:36:44.511636947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b2m7w,Uid:c60d1832-4fe9-41ac-8a7b-da8e4e1b773d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb8c0a813304c8ef4033ab67232cbf8389f42f49a5fe356eda683c21acccd63d\""
Mar 21 12:36:44.512470 kubelet[2743]: E0321 12:36:44.512445 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:44.514384 containerd[1512]: time="2025-03-21T12:36:44.514355465Z" level=info msg="CreateContainer within sandbox \"fb8c0a813304c8ef4033ab67232cbf8389f42f49a5fe356eda683c21acccd63d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 21 12:36:44.523236 containerd[1512]: time="2025-03-21T12:36:44.522537268Z" level=info msg="Container 77fc8773f182ad48bbcfc42c431379513f52427aa3b02d7abddb581bd61d2353: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:44.532639 containerd[1512]: time="2025-03-21T12:36:44.532609403Z" level=info msg="Container 5c9d51ca9b6f71db04c117494edd2153062241bea30f3baf74dec01adee3fe85: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:36:44.533371 containerd[1512]: time="2025-03-21T12:36:44.533314397Z" level=info msg="CreateContainer within sandbox \"4a1339f545438d8432371915991edb3574de4e91d10971b690c4cbb52deaa58f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77fc8773f182ad48bbcfc42c431379513f52427aa3b02d7abddb581bd61d2353\""
Mar 21 12:36:44.534466 containerd[1512]: time="2025-03-21T12:36:44.534418702Z" level=info msg="StartContainer for \"77fc8773f182ad48bbcfc42c431379513f52427aa3b02d7abddb581bd61d2353\""
Mar 21 12:36:44.535364 containerd[1512]: time="2025-03-21T12:36:44.535329583Z" level=info msg="connecting to shim 77fc8773f182ad48bbcfc42c431379513f52427aa3b02d7abddb581bd61d2353" address="unix:///run/containerd/s/91916b37213cd903d0be7840043751813fdb6b9a1b0c105fabbb8cbf85224a76" protocol=ttrpc version=3
Mar 21 12:36:44.540585 containerd[1512]: time="2025-03-21T12:36:44.540558778Z" level=info msg="CreateContainer within sandbox \"fb8c0a813304c8ef4033ab67232cbf8389f42f49a5fe356eda683c21acccd63d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c9d51ca9b6f71db04c117494edd2153062241bea30f3baf74dec01adee3fe85\""
Mar 21 12:36:44.541245 containerd[1512]: time="2025-03-21T12:36:44.541177331Z" level=info msg="StartContainer for \"5c9d51ca9b6f71db04c117494edd2153062241bea30f3baf74dec01adee3fe85\""
Mar 21 12:36:44.541907 containerd[1512]: time="2025-03-21T12:36:44.541844945Z" level=info msg="connecting to shim 5c9d51ca9b6f71db04c117494edd2153062241bea30f3baf74dec01adee3fe85" address="unix:///run/containerd/s/e44945892f04fa8ea460fc85b14ce785d8950aed27e30c4a2a72001e38364c0f" protocol=ttrpc version=3
Mar 21 12:36:44.567202 systemd[1]: Started cri-containerd-77fc8773f182ad48bbcfc42c431379513f52427aa3b02d7abddb581bd61d2353.scope - libcontainer container 77fc8773f182ad48bbcfc42c431379513f52427aa3b02d7abddb581bd61d2353.
Mar 21 12:36:44.571081 systemd[1]: Started cri-containerd-5c9d51ca9b6f71db04c117494edd2153062241bea30f3baf74dec01adee3fe85.scope - libcontainer container 5c9d51ca9b6f71db04c117494edd2153062241bea30f3baf74dec01adee3fe85.
Mar 21 12:36:44.610285 containerd[1512]: time="2025-03-21T12:36:44.610244392Z" level=info msg="StartContainer for \"77fc8773f182ad48bbcfc42c431379513f52427aa3b02d7abddb581bd61d2353\" returns successfully"
Mar 21 12:36:44.610587 containerd[1512]: time="2025-03-21T12:36:44.610438367Z" level=info msg="StartContainer for \"5c9d51ca9b6f71db04c117494edd2153062241bea30f3baf74dec01adee3fe85\" returns successfully"
Mar 21 12:36:44.724205 kubelet[2743]: E0321 12:36:44.724064    2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:44.727247 kubelet[2743]: E0321 12:36:44.727190    2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:44.753601 kubelet[2743]: I0321 12:36:44.753520    2743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vw7pf" podStartSLOduration=29.753500248 podStartE2EDuration="29.753500248s" podCreationTimestamp="2025-03-21 12:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:36:44.739371542 +0000 UTC m=+44.265383239" watchObservedRunningTime="2025-03-21 12:36:44.753500248 +0000 UTC m=+44.279511935"
Mar 21 12:36:44.754505 kubelet[2743]: I0321 12:36:44.753612    2743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b2m7w" podStartSLOduration=29.753608992 podStartE2EDuration="29.753608992s" podCreationTimestamp="2025-03-21 12:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:36:44.753265587 +0000 UTC m=+44.279277274" watchObservedRunningTime="2025-03-21 12:36:44.753608992 +0000 UTC m=+44.279620680"
Mar 21 12:36:45.728533 kubelet[2743]: E0321 12:36:45.728297    2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:45.730155 kubelet[2743]: E0321 12:36:45.730110    2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:46.574202 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:58284.service - OpenSSH per-connection server daemon (10.0.0.1:58284).
Mar 21 12:36:46.627385 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 58284 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:46.629098 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:46.633421 systemd-logind[1496]: New session 14 of user core.
Mar 21 12:36:46.646163 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 21 12:36:46.730035 kubelet[2743]: E0321 12:36:46.729979    2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:46.731111 kubelet[2743]: E0321 12:36:46.730148    2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:36:46.763719 sshd[4169]: Connection closed by 10.0.0.1 port 58284
Mar 21 12:36:46.764051 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:46.767742 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:58284.service: Deactivated successfully.
Mar 21 12:36:46.769998 systemd[1]: session-14.scope: Deactivated successfully.
Mar 21 12:36:46.770716 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit.
Mar 21 12:36:46.771673 systemd-logind[1496]: Removed session 14.
Mar 21 12:36:51.781641 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:38966.service - OpenSSH per-connection server daemon (10.0.0.1:38966).
Mar 21 12:36:51.831126 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 38966 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:51.832907 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:51.838185 systemd-logind[1496]: New session 15 of user core.
Mar 21 12:36:51.845162 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 21 12:36:51.954311 sshd[4186]: Connection closed by 10.0.0.1 port 38966
Mar 21 12:36:51.954708 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:51.968304 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:38966.service: Deactivated successfully.
Mar 21 12:36:51.970707 systemd[1]: session-15.scope: Deactivated successfully.
Mar 21 12:36:51.972415 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit.
Mar 21 12:36:51.974041 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:38970.service - OpenSSH per-connection server daemon (10.0.0.1:38970).
Mar 21 12:36:51.974982 systemd-logind[1496]: Removed session 15.
Mar 21 12:36:52.027204 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 38970 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:52.028740 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:52.033463 systemd-logind[1496]: New session 16 of user core.
Mar 21 12:36:52.043166 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 21 12:36:52.276840 sshd[4202]: Connection closed by 10.0.0.1 port 38970
Mar 21 12:36:52.277297 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:52.287619 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:38970.service: Deactivated successfully.
Mar 21 12:36:52.290543 systemd[1]: session-16.scope: Deactivated successfully.
Mar 21 12:36:52.292140 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit.
Mar 21 12:36:52.293581 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:38984.service - OpenSSH per-connection server daemon (10.0.0.1:38984).
Mar 21 12:36:52.294473 systemd-logind[1496]: Removed session 16.
Mar 21 12:36:52.346088 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 38984 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:52.347585 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:52.352559 systemd-logind[1496]: New session 17 of user core.
Mar 21 12:36:52.360165 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 21 12:36:53.673221 sshd[4215]: Connection closed by 10.0.0.1 port 38984
Mar 21 12:36:53.673950 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:53.683992 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:38984.service: Deactivated successfully.
Mar 21 12:36:53.686126 systemd[1]: session-17.scope: Deactivated successfully.
Mar 21 12:36:53.688701 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit.
Mar 21 12:36:53.693431 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:38996.service - OpenSSH per-connection server daemon (10.0.0.1:38996).
Mar 21 12:36:53.696371 systemd-logind[1496]: Removed session 17.
Mar 21 12:36:53.739311 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 38996 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:53.741126 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:53.746462 systemd-logind[1496]: New session 18 of user core.
Mar 21 12:36:53.758226 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 21 12:36:54.005797 sshd[4237]: Connection closed by 10.0.0.1 port 38996
Mar 21 12:36:54.006578 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:54.016103 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:38996.service: Deactivated successfully.
Mar 21 12:36:54.018571 systemd[1]: session-18.scope: Deactivated successfully.
Mar 21 12:36:54.019463 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit.
Mar 21 12:36:54.022620 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:38998.service - OpenSSH per-connection server daemon (10.0.0.1:38998).
Mar 21 12:36:54.024513 systemd-logind[1496]: Removed session 18.
Mar 21 12:36:54.074630 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 38998 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:54.076270 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:54.081313 systemd-logind[1496]: New session 19 of user core.
Mar 21 12:36:54.091221 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 21 12:36:54.211663 sshd[4251]: Connection closed by 10.0.0.1 port 38998
Mar 21 12:36:54.212067 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:54.216522 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:38998.service: Deactivated successfully.
Mar 21 12:36:54.219112 systemd[1]: session-19.scope: Deactivated successfully.
Mar 21 12:36:54.220050 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit.
Mar 21 12:36:54.221435 systemd-logind[1496]: Removed session 19.
Mar 21 12:36:59.225324 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:39012.service - OpenSSH per-connection server daemon (10.0.0.1:39012).
Mar 21 12:36:59.272731 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 39012 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:36:59.274433 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:36:59.278955 systemd-logind[1496]: New session 20 of user core.
Mar 21 12:36:59.286190 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 21 12:36:59.402661 sshd[4267]: Connection closed by 10.0.0.1 port 39012
Mar 21 12:36:59.403225 sshd-session[4265]: pam_unix(sshd:session): session closed for user core
Mar 21 12:36:59.407886 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:39012.service: Deactivated successfully.
Mar 21 12:36:59.410251 systemd[1]: session-20.scope: Deactivated successfully.
Mar 21 12:36:59.410993 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit.
Mar 21 12:36:59.411908 systemd-logind[1496]: Removed session 20.
Mar 21 12:37:04.416709 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:43916.service - OpenSSH per-connection server daemon (10.0.0.1:43916).
Mar 21 12:37:04.470949 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 43916 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:37:04.472740 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:37:04.477135 systemd-logind[1496]: New session 21 of user core.
Mar 21 12:37:04.489150 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 21 12:37:04.599049 sshd[4288]: Connection closed by 10.0.0.1 port 43916
Mar 21 12:37:04.599429 sshd-session[4286]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:04.602803 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:43916.service: Deactivated successfully.
Mar 21 12:37:04.605053 systemd[1]: session-21.scope: Deactivated successfully.
Mar 21 12:37:04.606903 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit.
Mar 21 12:37:04.607804 systemd-logind[1496]: Removed session 21.
Mar 21 12:37:09.612092 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:43926.service - OpenSSH per-connection server daemon (10.0.0.1:43926).
Mar 21 12:37:09.662047 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 43926 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:37:09.663589 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:37:09.667592 systemd-logind[1496]: New session 22 of user core.
Mar 21 12:37:09.674150 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 21 12:37:09.776900 sshd[4304]: Connection closed by 10.0.0.1 port 43926
Mar 21 12:37:09.777277 sshd-session[4302]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:09.781342 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:43926.service: Deactivated successfully.
Mar 21 12:37:09.783935 systemd[1]: session-22.scope: Deactivated successfully.
Mar 21 12:37:09.784650 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit.
Mar 21 12:37:09.785672 systemd-logind[1496]: Removed session 22.
Mar 21 12:37:14.790448 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:45404.service - OpenSSH per-connection server daemon (10.0.0.1:45404).
Mar 21 12:37:14.841196 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 45404 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:37:14.843002 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:37:14.847657 systemd-logind[1496]: New session 23 of user core.
Mar 21 12:37:14.858147 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 21 12:37:14.971983 sshd[4320]: Connection closed by 10.0.0.1 port 45404
Mar 21 12:37:14.972416 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:14.985808 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:45404.service: Deactivated successfully.
Mar 21 12:37:14.988036 systemd[1]: session-23.scope: Deactivated successfully.
Mar 21 12:37:14.990156 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit.
Mar 21 12:37:14.991844 systemd[1]: Started sshd@23-10.0.0.85:22-10.0.0.1:45414.service - OpenSSH per-connection server daemon (10.0.0.1:45414).
Mar 21 12:37:14.993418 systemd-logind[1496]: Removed session 23.
Mar 21 12:37:15.043528 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 45414 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:37:15.045087 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:37:15.050079 systemd-logind[1496]: New session 24 of user core.
Mar 21 12:37:15.058187 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 21 12:37:16.401219 containerd[1512]: time="2025-03-21T12:37:16.401163077Z" level=info msg="StopContainer for \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" with timeout 30 (s)"
Mar 21 12:37:16.402913 containerd[1512]: time="2025-03-21T12:37:16.402835464Z" level=info msg="Stop container \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" with signal terminated"
Mar 21 12:37:16.417731 systemd[1]: cri-containerd-306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8.scope: Deactivated successfully.
Mar 21 12:37:16.420982 containerd[1512]: time="2025-03-21T12:37:16.420781593Z" level=info msg="received exit event container_id:\"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" id:\"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" pid:3527 exited_at:{seconds:1742560636 nanos:420334677}"
Mar 21 12:37:16.421495 containerd[1512]: time="2025-03-21T12:37:16.421189235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" id:\"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" pid:3527 exited_at:{seconds:1742560636 nanos:420334677}"
Mar 21 12:37:16.433467 containerd[1512]: time="2025-03-21T12:37:16.433425776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" id:\"ce070878c512e02aeb773c9fa3a5e1d51f2c70e3bfb8e487156a5ba7bc41e81a\" pid:4366 exited_at:{seconds:1742560636 nanos:432863429}"
Mar 21 12:37:16.435608 containerd[1512]: time="2025-03-21T12:37:16.435438687Z" level=info msg="StopContainer for \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" with timeout 2 (s)"
Mar 21 12:37:16.435973 containerd[1512]: time="2025-03-21T12:37:16.435938585Z" level=info msg="Stop container \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" with signal terminated"
Mar 21 12:37:16.441769 containerd[1512]: time="2025-03-21T12:37:16.441702527Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 21 12:37:16.444812 systemd-networkd[1450]: lxc_health: Link DOWN
Mar 21 12:37:16.444820 systemd-networkd[1450]: lxc_health: Lost carrier
Mar 21 12:37:16.447846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8-rootfs.mount: Deactivated successfully.
Mar 21 12:37:16.465514 containerd[1512]: time="2025-03-21T12:37:16.465470879Z" level=info msg="StopContainer for \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" returns successfully"
Mar 21 12:37:16.466202 containerd[1512]: time="2025-03-21T12:37:16.466169679Z" level=info msg="StopPodSandbox for \"0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c\""
Mar 21 12:37:16.469709 systemd[1]: cri-containerd-a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da.scope: Deactivated successfully.
Mar 21 12:37:16.470185 systemd[1]: cri-containerd-a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da.scope: Consumed 6.895s CPU time, 125.2M memory peak, 208K read from disk, 13.3M written to disk.
Mar 21 12:37:16.472130 containerd[1512]: time="2025-03-21T12:37:16.471932950Z" level=info msg="received exit event container_id:\"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" id:\"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" pid:3365 exited_at:{seconds:1742560636 nanos:471443410}"
Mar 21 12:37:16.472130 containerd[1512]: time="2025-03-21T12:37:16.472038482Z" level=info msg="Container to stop \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 21 12:37:16.472454 containerd[1512]: time="2025-03-21T12:37:16.472414312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" id:\"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" pid:3365 exited_at:{seconds:1742560636 nanos:471443410}"
Mar 21 12:37:16.481471 systemd[1]: cri-containerd-0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c.scope: Deactivated successfully.
Mar 21 12:37:16.481988 containerd[1512]: time="2025-03-21T12:37:16.481860434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c\" id:\"0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c\" pid:2981 exit_status:137 exited_at:{seconds:1742560636 nanos:481326270}"
Mar 21 12:37:16.500181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da-rootfs.mount: Deactivated successfully.
Mar 21 12:37:16.515390 containerd[1512]: time="2025-03-21T12:37:16.513565434Z" level=info msg="StopContainer for \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" returns successfully"
Mar 21 12:37:16.514977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c-rootfs.mount: Deactivated successfully.
Mar 21 12:37:16.515973 containerd[1512]: time="2025-03-21T12:37:16.515922803Z" level=info msg="StopPodSandbox for \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\""
Mar 21 12:37:16.517110 containerd[1512]: time="2025-03-21T12:37:16.516010522Z" level=info msg="Container to stop \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 21 12:37:16.517110 containerd[1512]: time="2025-03-21T12:37:16.516040840Z" level=info msg="Container to stop \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 21 12:37:16.517110 containerd[1512]: time="2025-03-21T12:37:16.516051771Z" level=info msg="Container to stop \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 21 12:37:16.517110 containerd[1512]: time="2025-03-21T12:37:16.516063073Z" level=info msg="Container to stop \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 21 12:37:16.517110 containerd[1512]: time="2025-03-21T12:37:16.516074144Z" level=info msg="Container to stop \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 21 12:37:16.518947 containerd[1512]: time="2025-03-21T12:37:16.518894601Z" level=info msg="shim disconnected" id=0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c namespace=k8s.io
Mar 21 12:37:16.518947 containerd[1512]: time="2025-03-21T12:37:16.518938616Z" level=warning msg="cleaning up after shim disconnected" id=0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c namespace=k8s.io
Mar 21 12:37:16.525006 systemd[1]: cri-containerd-5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe.scope: Deactivated successfully.
Mar 21 12:37:16.527495 containerd[1512]: time="2025-03-21T12:37:16.518956400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 21 12:37:16.547489 containerd[1512]: time="2025-03-21T12:37:16.547446647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" id:\"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" pid:2890 exit_status:137 exited_at:{seconds:1742560636 nanos:526227522}"
Mar 21 12:37:16.549536 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c-shm.mount: Deactivated successfully.
Mar 21 12:37:16.549669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe-rootfs.mount: Deactivated successfully.
Mar 21 12:37:16.559410 containerd[1512]: time="2025-03-21T12:37:16.557441871Z" level=info msg="shim disconnected" id=5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe namespace=k8s.io
Mar 21 12:37:16.559410 containerd[1512]: time="2025-03-21T12:37:16.557603722Z" level=warning msg="cleaning up after shim disconnected" id=5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe namespace=k8s.io
Mar 21 12:37:16.559410 containerd[1512]: time="2025-03-21T12:37:16.557612097Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 21 12:37:16.559410 containerd[1512]: time="2025-03-21T12:37:16.557692733Z" level=info msg="TearDown network for sandbox \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" successfully"
Mar 21 12:37:16.559410 containerd[1512]: time="2025-03-21T12:37:16.557711037Z" level=info msg="StopPodSandbox for \"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" returns successfully"
Mar 21 12:37:16.567489 containerd[1512]: time="2025-03-21T12:37:16.567425483Z" level=info msg="TearDown network for sandbox \"0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c\" successfully"
Mar 21 12:37:16.567489 containerd[1512]: time="2025-03-21T12:37:16.567478895Z" level=info msg="StopPodSandbox for \"0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c\" returns successfully"
Mar 21 12:37:16.569889 containerd[1512]: time="2025-03-21T12:37:16.569840183Z" level=info msg="received exit event sandbox_id:\"0f6987e97756ba9cb60dc54a608c4a3f03a3b50207fc8383b6e879a01058174c\" exit_status:137 exited_at:{seconds:1742560636 nanos:481326270}"
Mar 21 12:37:16.570320 containerd[1512]: time="2025-03-21T12:37:16.570288533Z" level=info msg="received exit event sandbox_id:\"5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe\" exit_status:137 exited_at:{seconds:1742560636 nanos:526227522}"
Mar 21 12:37:16.570691 containerd[1512]: time="2025-03-21T12:37:16.570659534Z" level=warning msg="cleanup warnings time=\"2025-03-21T12:37:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 21 12:37:16.677219 kubelet[2743]: I0321 12:37:16.677158    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-etc-cni-netd\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677219 kubelet[2743]: I0321 12:37:16.677214    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/07a7801f-9180-44e0-987e-7943aebb157b-hubble-tls\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677756 kubelet[2743]: I0321 12:37:16.677231    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-host-proc-sys-kernel\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677756 kubelet[2743]: I0321 12:37:16.677251    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07a7801f-9180-44e0-987e-7943aebb157b-cilium-config-path\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677756 kubelet[2743]: I0321 12:37:16.677268    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-lib-modules\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677756 kubelet[2743]: I0321 12:37:16.677283    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cilium-run\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677756 kubelet[2743]: I0321 12:37:16.677298    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-xtables-lock\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677756 kubelet[2743]: I0321 12:37:16.677313    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cilium-cgroup\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677917 kubelet[2743]: I0321 12:37:16.677327    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-bpf-maps\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677917 kubelet[2743]: I0321 12:37:16.677314    2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 12:37:16.677917 kubelet[2743]: I0321 12:37:16.677367    2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-hostproc" (OuterVolumeSpecName: "hostproc") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 12:37:16.677917 kubelet[2743]: I0321 12:37:16.677342    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-hostproc\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677917 kubelet[2743]: I0321 12:37:16.677432    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-host-proc-sys-net\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.677917 kubelet[2743]: I0321 12:37:16.677457    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8982cd85-5089-4cb7-8b6e-cb6d6339203c-cilium-config-path\") pod \"8982cd85-5089-4cb7-8b6e-cb6d6339203c\" (UID: \"8982cd85-5089-4cb7-8b6e-cb6d6339203c\") "
Mar 21 12:37:16.678092 kubelet[2743]: I0321 12:37:16.677476    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj2wh\" (UniqueName: \"kubernetes.io/projected/07a7801f-9180-44e0-987e-7943aebb157b-kube-api-access-cj2wh\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.678092 kubelet[2743]: I0321 12:37:16.677495    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07a7801f-9180-44e0-987e-7943aebb157b-clustermesh-secrets\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.678092 kubelet[2743]: I0321 12:37:16.677511    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cni-path\") pod \"07a7801f-9180-44e0-987e-7943aebb157b\" (UID: \"07a7801f-9180-44e0-987e-7943aebb157b\") "
Mar 21 12:37:16.678092 kubelet[2743]: I0321 12:37:16.677527    2743 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckrsv\" (UniqueName: \"kubernetes.io/projected/8982cd85-5089-4cb7-8b6e-cb6d6339203c-kube-api-access-ckrsv\") pod \"8982cd85-5089-4cb7-8b6e-cb6d6339203c\" (UID: \"8982cd85-5089-4cb7-8b6e-cb6d6339203c\") "
Mar 21 12:37:16.678092 kubelet[2743]: I0321 12:37:16.677572    2743 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 21 12:37:16.678092 kubelet[2743]: I0321 12:37:16.677581    2743 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 21 12:37:16.680921 kubelet[2743]: I0321 12:37:16.677393    2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 12:37:16.681392 kubelet[2743]: I0321 12:37:16.680810    2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07a7801f-9180-44e0-987e-7943aebb157b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 21 12:37:16.681392 kubelet[2743]: I0321 12:37:16.680835    2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 12:37:16.681392 kubelet[2743]: I0321 12:37:16.680846    2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 21 12:37:16.681392 kubelet[2743]: I0321 12:37:16.680858    2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "xtables-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:37:16.681392 kubelet[2743]: I0321 12:37:16.680868 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:37:16.681547 kubelet[2743]: I0321 12:37:16.680877 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:37:16.681547 kubelet[2743]: I0321 12:37:16.681222 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8982cd85-5089-4cb7-8b6e-cb6d6339203c-kube-api-access-ckrsv" (OuterVolumeSpecName: "kube-api-access-ckrsv") pod "8982cd85-5089-4cb7-8b6e-cb6d6339203c" (UID: "8982cd85-5089-4cb7-8b6e-cb6d6339203c"). InnerVolumeSpecName "kube-api-access-ckrsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 12:37:16.681547 kubelet[2743]: I0321 12:37:16.681263 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cni-path" (OuterVolumeSpecName: "cni-path") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:37:16.681547 kubelet[2743]: I0321 12:37:16.681286 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 12:37:16.681547 kubelet[2743]: I0321 12:37:16.681504 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a7801f-9180-44e0-987e-7943aebb157b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 12:37:16.681951 kubelet[2743]: I0321 12:37:16.681917 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8982cd85-5089-4cb7-8b6e-cb6d6339203c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8982cd85-5089-4cb7-8b6e-cb6d6339203c" (UID: "8982cd85-5089-4cb7-8b6e-cb6d6339203c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 12:37:16.684140 kubelet[2743]: I0321 12:37:16.684108 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a7801f-9180-44e0-987e-7943aebb157b-kube-api-access-cj2wh" (OuterVolumeSpecName: "kube-api-access-cj2wh") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "kube-api-access-cj2wh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 12:37:16.684705 kubelet[2743]: I0321 12:37:16.684677 2743 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07a7801f-9180-44e0-987e-7943aebb157b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "07a7801f-9180-44e0-987e-7943aebb157b" (UID: "07a7801f-9180-44e0-987e-7943aebb157b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 12:37:16.778039 kubelet[2743]: I0321 12:37:16.777960 2743 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778039 kubelet[2743]: I0321 12:37:16.778000 2743 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07a7801f-9180-44e0-987e-7943aebb157b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778039 kubelet[2743]: I0321 12:37:16.778014 2743 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778039 kubelet[2743]: I0321 12:37:16.778043 2743 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778238 kubelet[2743]: I0321 12:37:16.778055 2743 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778238 kubelet[2743]: I0321 12:37:16.778066 2743 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778238 kubelet[2743]: I0321 12:37:16.778077 2743 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778238 kubelet[2743]: I0321 12:37:16.778088 2743 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778238 kubelet[2743]: I0321 12:37:16.778099 2743 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8982cd85-5089-4cb7-8b6e-cb6d6339203c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778238 kubelet[2743]: I0321 12:37:16.778110 2743 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cj2wh\" (UniqueName: \"kubernetes.io/projected/07a7801f-9180-44e0-987e-7943aebb157b-kube-api-access-cj2wh\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778238 kubelet[2743]: I0321 12:37:16.778121 2743 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07a7801f-9180-44e0-987e-7943aebb157b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778238 kubelet[2743]: I0321 12:37:16.778131 2743 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07a7801f-9180-44e0-987e-7943aebb157b-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778447 kubelet[2743]: I0321 12:37:16.778144 2743 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ckrsv\" (UniqueName: 
\"kubernetes.io/projected/8982cd85-5089-4cb7-8b6e-cb6d6339203c-kube-api-access-ckrsv\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.778447 kubelet[2743]: I0321 12:37:16.778155 2743 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/07a7801f-9180-44e0-987e-7943aebb157b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 21 12:37:16.787190 kubelet[2743]: I0321 12:37:16.787168 2743 scope.go:117] "RemoveContainer" containerID="a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da" Mar 21 12:37:16.788900 containerd[1512]: time="2025-03-21T12:37:16.788849454Z" level=info msg="RemoveContainer for \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\"" Mar 21 12:37:16.799193 systemd[1]: Removed slice kubepods-besteffort-pod8982cd85_5089_4cb7_8b6e_cb6d6339203c.slice - libcontainer container kubepods-besteffort-pod8982cd85_5089_4cb7_8b6e_cb6d6339203c.slice. Mar 21 12:37:16.800857 systemd[1]: Removed slice kubepods-burstable-pod07a7801f_9180_44e0_987e_7943aebb157b.slice - libcontainer container kubepods-burstable-pod07a7801f_9180_44e0_987e_7943aebb157b.slice. Mar 21 12:37:16.800968 systemd[1]: kubepods-burstable-pod07a7801f_9180_44e0_987e_7943aebb157b.slice: Consumed 7.022s CPU time, 125.5M memory peak, 268K read from disk, 13.3M written to disk. 
Mar 21 12:37:16.818600 containerd[1512]: time="2025-03-21T12:37:16.818542557Z" level=info msg="RemoveContainer for \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" returns successfully"
Mar 21 12:37:16.818896 kubelet[2743]: I0321 12:37:16.818857 2743 scope.go:117] "RemoveContainer" containerID="a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2"
Mar 21 12:37:16.820738 containerd[1512]: time="2025-03-21T12:37:16.820691498Z" level=info msg="RemoveContainer for \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\""
Mar 21 12:37:16.825001 containerd[1512]: time="2025-03-21T12:37:16.824973908Z" level=info msg="RemoveContainer for \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\" returns successfully"
Mar 21 12:37:16.825196 kubelet[2743]: I0321 12:37:16.825172 2743 scope.go:117] "RemoveContainer" containerID="30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae"
Mar 21 12:37:16.827718 containerd[1512]: time="2025-03-21T12:37:16.827679226Z" level=info msg="RemoveContainer for \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\""
Mar 21 12:37:16.832184 containerd[1512]: time="2025-03-21T12:37:16.832149306Z" level=info msg="RemoveContainer for \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\" returns successfully"
Mar 21 12:37:16.832410 kubelet[2743]: I0321 12:37:16.832328 2743 scope.go:117] "RemoveContainer" containerID="e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2"
Mar 21 12:37:16.833711 containerd[1512]: time="2025-03-21T12:37:16.833685271Z" level=info msg="RemoveContainer for \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\""
Mar 21 12:37:16.838181 containerd[1512]: time="2025-03-21T12:37:16.838136806Z" level=info msg="RemoveContainer for \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\" returns successfully"
Mar 21 12:37:16.838341 kubelet[2743]: I0321 12:37:16.838315 2743 scope.go:117] "RemoveContainer" containerID="083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508"
Mar 21 12:37:16.839749 containerd[1512]: time="2025-03-21T12:37:16.839718028Z" level=info msg="RemoveContainer for \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\""
Mar 21 12:37:16.843311 containerd[1512]: time="2025-03-21T12:37:16.843278314Z" level=info msg="RemoveContainer for \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\" returns successfully"
Mar 21 12:37:16.843454 kubelet[2743]: I0321 12:37:16.843426 2743 scope.go:117] "RemoveContainer" containerID="a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da"
Mar 21 12:37:16.843675 containerd[1512]: time="2025-03-21T12:37:16.843602677Z" level=error msg="ContainerStatus for \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\": not found"
Mar 21 12:37:16.843774 kubelet[2743]: E0321 12:37:16.843750 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\": not found" containerID="a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da"
Mar 21 12:37:16.843928 kubelet[2743]: I0321 12:37:16.843781 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da"} err="failed to get container status \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\": rpc error: code = NotFound desc = an error occurred when try to find container \"a042e49eb3b08bf1b45856e2da878d51c48307317bf3843d098301e591ca17da\": not found"
Mar 21 12:37:16.843928 kubelet[2743]: I0321 12:37:16.843918 2743 scope.go:117] "RemoveContainer" containerID="a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2"
Mar 21 12:37:16.844160 containerd[1512]: time="2025-03-21T12:37:16.844116542Z" level=error msg="ContainerStatus for \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\": not found"
Mar 21 12:37:16.844340 kubelet[2743]: E0321 12:37:16.844308 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\": not found" containerID="a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2"
Mar 21 12:37:16.844395 kubelet[2743]: I0321 12:37:16.844340 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2"} err="failed to get container status \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2f1566762aa1e9e66c2edf84cefa6dc03189df54f864b90cf960032a32bdbb2\": not found"
Mar 21 12:37:16.844395 kubelet[2743]: I0321 12:37:16.844362 2743 scope.go:117] "RemoveContainer" containerID="30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae"
Mar 21 12:37:16.844570 containerd[1512]: time="2025-03-21T12:37:16.844528241Z" level=error msg="ContainerStatus for \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\": not found"
Mar 21 12:37:16.844691 kubelet[2743]: E0321 12:37:16.844666 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\": not found" containerID="30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae"
Mar 21 12:37:16.844728 kubelet[2743]: I0321 12:37:16.844684 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae"} err="failed to get container status \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"30d368eb15df8401586a4f93942ee4c7069c261748db5584764129bbde05a0ae\": not found"
Mar 21 12:37:16.844728 kubelet[2743]: I0321 12:37:16.844709 2743 scope.go:117] "RemoveContainer" containerID="e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2"
Mar 21 12:37:16.844868 containerd[1512]: time="2025-03-21T12:37:16.844839488Z" level=error msg="ContainerStatus for \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\": not found"
Mar 21 12:37:16.844972 kubelet[2743]: E0321 12:37:16.844939 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\": not found" containerID="e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2"
Mar 21 12:37:16.844972 kubelet[2743]: I0321 12:37:16.844955 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2"} err="failed to get container status \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e86850b4135dbc09708cd368f595713682cc0eae8c0e34bd5a22c1255dbd0dd2\": not found"
Mar 21 12:37:16.844972 kubelet[2743]: I0321 12:37:16.844966 2743 scope.go:117] "RemoveContainer" containerID="083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508"
Mar 21 12:37:16.845136 containerd[1512]: time="2025-03-21T12:37:16.845103815Z" level=error msg="ContainerStatus for \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\": not found"
Mar 21 12:37:16.845264 kubelet[2743]: E0321 12:37:16.845225 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\": not found" containerID="083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508"
Mar 21 12:37:16.845264 kubelet[2743]: I0321 12:37:16.845254 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508"} err="failed to get container status \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\": rpc error: code = NotFound desc = an error occurred when try to find container \"083690049342856172e395801b3ac600e91c44eb59adf508a97fb24b147e4508\": not found"
Mar 21 12:37:16.845350 kubelet[2743]: I0321 12:37:16.845270 2743 scope.go:117] "RemoveContainer" containerID="306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8"
Mar 21 12:37:16.846642 containerd[1512]: time="2025-03-21T12:37:16.846606366Z" level=info msg="RemoveContainer for \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\""
Mar 21 12:37:16.850264 containerd[1512]: time="2025-03-21T12:37:16.850238500Z" level=info msg="RemoveContainer for \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" returns successfully"
Mar 21 12:37:16.850379 kubelet[2743]: I0321 12:37:16.850360 2743 scope.go:117] "RemoveContainer" containerID="306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8"
Mar 21 12:37:16.850587 containerd[1512]: time="2025-03-21T12:37:16.850536420Z" level=error msg="ContainerStatus for \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\": not found"
Mar 21 12:37:16.850694 kubelet[2743]: E0321 12:37:16.850672 2743 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\": not found" containerID="306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8"
Mar 21 12:37:16.850731 kubelet[2743]: I0321 12:37:16.850698 2743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8"} err="failed to get container status \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"306bcb4b31c1f47fb976dd8e4c7757fe0745eddb316b6d508312cb74675441e8\": not found"
Mar 21 12:37:17.447735 systemd[1]: var-lib-kubelet-pods-8982cd85\x2d5089\x2d4cb7\x2d8b6e\x2dcb6d6339203c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dckrsv.mount: Deactivated successfully.
Mar 21 12:37:17.447877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e896abc636eb0edfa8deb8ebc670341bbd945a4668e370e9a3ca3c0ef2068fe-shm.mount: Deactivated successfully.
Mar 21 12:37:17.447983 systemd[1]: var-lib-kubelet-pods-07a7801f\x2d9180\x2d44e0\x2d987e\x2d7943aebb157b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcj2wh.mount: Deactivated successfully.
Mar 21 12:37:17.448087 systemd[1]: var-lib-kubelet-pods-07a7801f\x2d9180\x2d44e0\x2d987e\x2d7943aebb157b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 21 12:37:17.448186 systemd[1]: var-lib-kubelet-pods-07a7801f\x2d9180\x2d44e0\x2d987e\x2d7943aebb157b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 21 12:37:18.364845 sshd[4336]: Connection closed by 10.0.0.1 port 45414
Mar 21 12:37:18.365315 sshd-session[4333]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:18.378072 systemd[1]: sshd@23-10.0.0.85:22-10.0.0.1:45414.service: Deactivated successfully.
Mar 21 12:37:18.380272 systemd[1]: session-24.scope: Deactivated successfully.
Mar 21 12:37:18.382197 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit.
Mar 21 12:37:18.383641 systemd[1]: Started sshd@24-10.0.0.85:22-10.0.0.1:45424.service - OpenSSH per-connection server daemon (10.0.0.1:45424).
Mar 21 12:37:18.384621 systemd-logind[1496]: Removed session 24.
Mar 21 12:37:18.435155 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 45424 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:37:18.436664 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:37:18.441593 systemd-logind[1496]: New session 25 of user core.
Mar 21 12:37:18.452143 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 21 12:37:18.566078 kubelet[2743]: E0321 12:37:18.565989 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 21 12:37:18.567152 kubelet[2743]: I0321 12:37:18.567108 2743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a7801f-9180-44e0-987e-7943aebb157b" path="/var/lib/kubelet/pods/07a7801f-9180-44e0-987e-7943aebb157b/volumes"
Mar 21 12:37:18.568334 kubelet[2743]: I0321 12:37:18.568163 2743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8982cd85-5089-4cb7-8b6e-cb6d6339203c" path="/var/lib/kubelet/pods/8982cd85-5089-4cb7-8b6e-cb6d6339203c/volumes"
Mar 21 12:37:18.866689 sshd[4491]: Connection closed by 10.0.0.1 port 45424
Mar 21 12:37:18.868564 sshd-session[4488]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:18.881438 kubelet[2743]: I0321 12:37:18.880504 2743 topology_manager.go:215] "Topology Admit Handler" podUID="0b3ab610-1bbd-4fda-ad37-890392097b48" podNamespace="kube-system" podName="cilium-xvqrk"
Mar 21 12:37:18.881438 kubelet[2743]: E0321 12:37:18.880570 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07a7801f-9180-44e0-987e-7943aebb157b" containerName="mount-bpf-fs"
Mar 21 12:37:18.881438 kubelet[2743]: E0321 12:37:18.880581 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07a7801f-9180-44e0-987e-7943aebb157b" containerName="clean-cilium-state"
Mar 21 12:37:18.881438 kubelet[2743]: E0321 12:37:18.880589 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07a7801f-9180-44e0-987e-7943aebb157b" containerName="mount-cgroup"
Mar 21 12:37:18.881438 kubelet[2743]: E0321 12:37:18.880594 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07a7801f-9180-44e0-987e-7943aebb157b" containerName="apply-sysctl-overwrites"
Mar 21 12:37:18.881438 kubelet[2743]: E0321 12:37:18.880601 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07a7801f-9180-44e0-987e-7943aebb157b" containerName="cilium-agent"
Mar 21 12:37:18.881438 kubelet[2743]: E0321 12:37:18.880608 2743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8982cd85-5089-4cb7-8b6e-cb6d6339203c" containerName="cilium-operator"
Mar 21 12:37:18.881438 kubelet[2743]: I0321 12:37:18.880627 2743 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a7801f-9180-44e0-987e-7943aebb157b" containerName="cilium-agent"
Mar 21 12:37:18.881438 kubelet[2743]: I0321 12:37:18.880634 2743 memory_manager.go:354] "RemoveStaleState removing state" podUID="8982cd85-5089-4cb7-8b6e-cb6d6339203c" containerName="cilium-operator"
Mar 21 12:37:18.885657 systemd[1]: sshd@24-10.0.0.85:22-10.0.0.1:45424.service: Deactivated successfully.
Mar 21 12:37:18.889085 systemd[1]: session-25.scope: Deactivated successfully.
Mar 21 12:37:18.895461 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit.
Mar 21 12:37:18.903453 systemd[1]: Started sshd@25-10.0.0.85:22-10.0.0.1:45430.service - OpenSSH per-connection server daemon (10.0.0.1:45430).
Mar 21 12:37:18.904818 systemd-logind[1496]: Removed session 25.
Mar 21 12:37:18.917269 systemd[1]: Created slice kubepods-burstable-pod0b3ab610_1bbd_4fda_ad37_890392097b48.slice - libcontainer container kubepods-burstable-pod0b3ab610_1bbd_4fda_ad37_890392097b48.slice.
Mar 21 12:37:18.951465 sshd[4502]: Accepted publickey for core from 10.0.0.1 port 45430 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA
Mar 21 12:37:18.953128 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:37:18.957706 systemd-logind[1496]: New session 26 of user core.
Mar 21 12:37:18.967244 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 21 12:37:18.991165 kubelet[2743]: I0321 12:37:18.991102 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-cilium-run\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991165 kubelet[2743]: I0321 12:37:18.991143 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-cilium-cgroup\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991165 kubelet[2743]: I0321 12:37:18.991171 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-bpf-maps\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991370 kubelet[2743]: I0321 12:37:18.991186 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-cni-path\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991370 kubelet[2743]: I0321 12:37:18.991202 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-lib-modules\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991370 kubelet[2743]: I0321 12:37:18.991254 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-host-proc-sys-net\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991370 kubelet[2743]: I0321 12:37:18.991313 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b3ab610-1bbd-4fda-ad37-890392097b48-hubble-tls\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991370 kubelet[2743]: I0321 12:37:18.991362 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-hostproc\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991501 kubelet[2743]: I0321 12:37:18.991401 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-etc-cni-netd\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991501 kubelet[2743]: I0321 12:37:18.991421 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b3ab610-1bbd-4fda-ad37-890392097b48-cilium-config-path\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991501 kubelet[2743]: I0321 12:37:18.991438 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-host-proc-sys-kernel\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991501 kubelet[2743]: I0321 12:37:18.991457 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b3ab610-1bbd-4fda-ad37-890392097b48-xtables-lock\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991501 kubelet[2743]: I0321 12:37:18.991484 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmjl\" (UniqueName: \"kubernetes.io/projected/0b3ab610-1bbd-4fda-ad37-890392097b48-kube-api-access-mwmjl\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991618 kubelet[2743]: I0321 12:37:18.991504 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b3ab610-1bbd-4fda-ad37-890392097b48-clustermesh-secrets\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:18.991618 kubelet[2743]: I0321 12:37:18.991519 2743 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0b3ab610-1bbd-4fda-ad37-890392097b48-cilium-ipsec-secrets\") pod \"cilium-xvqrk\" (UID: \"0b3ab610-1bbd-4fda-ad37-890392097b48\") " pod="kube-system/cilium-xvqrk"
Mar 21 12:37:19.019437 sshd[4505]: Connection closed by 10.0.0.1 port 45430
Mar 21 12:37:19.019958 sshd-session[4502]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:19.030281 systemd[1]: sshd@25-10.0.0.85:22-10.0.0.1:45430.service: Deactivated successfully.
Mar 21 12:37:19.032568 systemd[1]: session-26.scope: Deactivated successfully.
Mar 21 12:37:19.034299 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit. Mar 21 12:37:19.035719 systemd[1]: Started sshd@26-10.0.0.85:22-10.0.0.1:45438.service - OpenSSH per-connection server daemon (10.0.0.1:45438). Mar 21 12:37:19.036675 systemd-logind[1496]: Removed session 26. Mar 21 12:37:19.086535 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 45438 ssh2: RSA SHA256:tWdbp2URq4GouANCov/TDTEWakxUPy2XCHa12NMpRZA Mar 21 12:37:19.088143 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:37:19.094072 systemd-logind[1496]: New session 27 of user core. Mar 21 12:37:19.104322 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 21 12:37:19.221622 kubelet[2743]: E0321 12:37:19.221561 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:19.222228 containerd[1512]: time="2025-03-21T12:37:19.222160509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvqrk,Uid:0b3ab610-1bbd-4fda-ad37-890392097b48,Namespace:kube-system,Attempt:0,}" Mar 21 12:37:19.246010 containerd[1512]: time="2025-03-21T12:37:19.245950697Z" level=info msg="connecting to shim 0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e" address="unix:///run/containerd/s/cb522bdf725494f82d62e68b661160d6cb0f5594261867d9686a8d93954871a3" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:37:19.269185 systemd[1]: Started cri-containerd-0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e.scope - libcontainer container 0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e. 
Mar 21 12:37:19.315610 containerd[1512]: time="2025-03-21T12:37:19.315563416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xvqrk,Uid:0b3ab610-1bbd-4fda-ad37-890392097b48,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\"" Mar 21 12:37:19.316393 kubelet[2743]: E0321 12:37:19.316362 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:19.318324 containerd[1512]: time="2025-03-21T12:37:19.318290877Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 21 12:37:19.334707 containerd[1512]: time="2025-03-21T12:37:19.334659168Z" level=info msg="Container 115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:19.350408 containerd[1512]: time="2025-03-21T12:37:19.350292182Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96\"" Mar 21 12:37:19.351196 containerd[1512]: time="2025-03-21T12:37:19.351163720Z" level=info msg="StartContainer for \"115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96\"" Mar 21 12:37:19.352319 containerd[1512]: time="2025-03-21T12:37:19.352275188Z" level=info msg="connecting to shim 115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96" address="unix:///run/containerd/s/cb522bdf725494f82d62e68b661160d6cb0f5594261867d9686a8d93954871a3" protocol=ttrpc version=3 Mar 21 12:37:19.377154 systemd[1]: Started cri-containerd-115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96.scope - libcontainer 
container 115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96. Mar 21 12:37:19.412513 containerd[1512]: time="2025-03-21T12:37:19.412325774Z" level=info msg="StartContainer for \"115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96\" returns successfully" Mar 21 12:37:19.423091 systemd[1]: cri-containerd-115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96.scope: Deactivated successfully. Mar 21 12:37:19.424447 containerd[1512]: time="2025-03-21T12:37:19.424385728Z" level=info msg="received exit event container_id:\"115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96\" id:\"115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96\" pid:4583 exited_at:{seconds:1742560639 nanos:423974681}" Mar 21 12:37:19.424518 containerd[1512]: time="2025-03-21T12:37:19.424447707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96\" id:\"115c6cac75c84567fb05277d02195da60d5f63502ce3b6c79d4f93601606bd96\" pid:4583 exited_at:{seconds:1742560639 nanos:423974681}" Mar 21 12:37:19.564213 kubelet[2743]: E0321 12:37:19.564059 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:19.798400 kubelet[2743]: E0321 12:37:19.798360 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:19.800341 containerd[1512]: time="2025-03-21T12:37:19.800295987Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 21 12:37:19.852400 containerd[1512]: time="2025-03-21T12:37:19.852273179Z" level=info msg="Container 
d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:19.858798 containerd[1512]: time="2025-03-21T12:37:19.858752902Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750\"" Mar 21 12:37:19.859414 containerd[1512]: time="2025-03-21T12:37:19.859279250Z" level=info msg="StartContainer for \"d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750\"" Mar 21 12:37:19.860264 containerd[1512]: time="2025-03-21T12:37:19.860232465Z" level=info msg="connecting to shim d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750" address="unix:///run/containerd/s/cb522bdf725494f82d62e68b661160d6cb0f5594261867d9686a8d93954871a3" protocol=ttrpc version=3 Mar 21 12:37:19.883162 systemd[1]: Started cri-containerd-d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750.scope - libcontainer container d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750. Mar 21 12:37:19.915164 containerd[1512]: time="2025-03-21T12:37:19.915118926Z" level=info msg="StartContainer for \"d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750\" returns successfully" Mar 21 12:37:19.919179 systemd[1]: cri-containerd-d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750.scope: Deactivated successfully. 
Mar 21 12:37:19.919542 containerd[1512]: time="2025-03-21T12:37:19.919473972Z" level=info msg="received exit event container_id:\"d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750\" id:\"d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750\" pid:4627 exited_at:{seconds:1742560639 nanos:919273528}" Mar 21 12:37:19.919605 containerd[1512]: time="2025-03-21T12:37:19.919555468Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750\" id:\"d6d7c87ae067684a9d4e35ffe513d7be573e4894e34597fdab44ddf895711750\" pid:4627 exited_at:{seconds:1742560639 nanos:919273528}" Mar 21 12:37:20.622149 kubelet[2743]: E0321 12:37:20.622090 2743 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 21 12:37:20.802998 kubelet[2743]: E0321 12:37:20.802927 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:20.805096 containerd[1512]: time="2025-03-21T12:37:20.805052224Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 21 12:37:20.819359 containerd[1512]: time="2025-03-21T12:37:20.819310982Z" level=info msg="Container cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:20.828252 containerd[1512]: time="2025-03-21T12:37:20.828208215Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632\"" Mar 21 12:37:20.828870 
containerd[1512]: time="2025-03-21T12:37:20.828699975Z" level=info msg="StartContainer for \"cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632\"" Mar 21 12:37:20.830158 containerd[1512]: time="2025-03-21T12:37:20.830111595Z" level=info msg="connecting to shim cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632" address="unix:///run/containerd/s/cb522bdf725494f82d62e68b661160d6cb0f5594261867d9686a8d93954871a3" protocol=ttrpc version=3 Mar 21 12:37:20.851173 systemd[1]: Started cri-containerd-cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632.scope - libcontainer container cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632. Mar 21 12:37:20.892820 systemd[1]: cri-containerd-cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632.scope: Deactivated successfully. Mar 21 12:37:20.893994 containerd[1512]: time="2025-03-21T12:37:20.893758130Z" level=info msg="received exit event container_id:\"cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632\" id:\"cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632\" pid:4671 exited_at:{seconds:1742560640 nanos:893554031}" Mar 21 12:37:20.893994 containerd[1512]: time="2025-03-21T12:37:20.893931743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632\" id:\"cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632\" pid:4671 exited_at:{seconds:1742560640 nanos:893554031}" Mar 21 12:37:20.904822 containerd[1512]: time="2025-03-21T12:37:20.904761873Z" level=info msg="StartContainer for \"cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632\" returns successfully" Mar 21 12:37:20.918829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbcf4c720011715e4b666d2f6753f71769cbb12c6d084c2b68282b3a19eef632-rootfs.mount: Deactivated successfully. 
Mar 21 12:37:21.807486 kubelet[2743]: E0321 12:37:21.807429 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:21.809713 containerd[1512]: time="2025-03-21T12:37:21.809633195Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 21 12:37:21.818051 containerd[1512]: time="2025-03-21T12:37:21.817309663Z" level=info msg="Container 0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:21.825506 containerd[1512]: time="2025-03-21T12:37:21.825459347Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b\"" Mar 21 12:37:21.825925 containerd[1512]: time="2025-03-21T12:37:21.825904308Z" level=info msg="StartContainer for \"0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b\"" Mar 21 12:37:21.826998 containerd[1512]: time="2025-03-21T12:37:21.826768410Z" level=info msg="connecting to shim 0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b" address="unix:///run/containerd/s/cb522bdf725494f82d62e68b661160d6cb0f5594261867d9686a8d93954871a3" protocol=ttrpc version=3 Mar 21 12:37:21.852184 systemd[1]: Started cri-containerd-0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b.scope - libcontainer container 0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b. Mar 21 12:37:21.878569 systemd[1]: cri-containerd-0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b.scope: Deactivated successfully. 
Mar 21 12:37:21.878953 containerd[1512]: time="2025-03-21T12:37:21.878915584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b\" id:\"0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b\" pid:4711 exited_at:{seconds:1742560641 nanos:878685084}" Mar 21 12:37:21.880444 containerd[1512]: time="2025-03-21T12:37:21.880402237Z" level=info msg="received exit event container_id:\"0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b\" id:\"0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b\" pid:4711 exited_at:{seconds:1742560641 nanos:878685084}" Mar 21 12:37:21.888456 containerd[1512]: time="2025-03-21T12:37:21.888417424Z" level=info msg="StartContainer for \"0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b\" returns successfully" Mar 21 12:37:21.901400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0144ef32274fff56983f14987c95022bbc19c5cdae1382bcd51acbc4baedcb3b-rootfs.mount: Deactivated successfully. 
Mar 21 12:37:22.567339 kubelet[2743]: I0321 12:37:22.567278 2743 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-21T12:37:22Z","lastTransitionTime":"2025-03-21T12:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 21 12:37:22.812832 kubelet[2743]: E0321 12:37:22.812791 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:22.814938 containerd[1512]: time="2025-03-21T12:37:22.814877566Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 21 12:37:22.824079 containerd[1512]: time="2025-03-21T12:37:22.823901593Z" level=info msg="Container f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:22.828337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3621795368.mount: Deactivated successfully. 
Mar 21 12:37:22.834118 containerd[1512]: time="2025-03-21T12:37:22.834068073Z" level=info msg="CreateContainer within sandbox \"0aa089facf23eb88e37b86b6af6d8c40f6e81f0cdabfbbc3a46d1e5e3631485e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc\"" Mar 21 12:37:22.834780 containerd[1512]: time="2025-03-21T12:37:22.834559282Z" level=info msg="StartContainer for \"f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc\"" Mar 21 12:37:22.835448 containerd[1512]: time="2025-03-21T12:37:22.835417962Z" level=info msg="connecting to shim f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc" address="unix:///run/containerd/s/cb522bdf725494f82d62e68b661160d6cb0f5594261867d9686a8d93954871a3" protocol=ttrpc version=3 Mar 21 12:37:22.858154 systemd[1]: Started cri-containerd-f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc.scope - libcontainer container f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc. 
Mar 21 12:37:22.892185 containerd[1512]: time="2025-03-21T12:37:22.892141133Z" level=info msg="StartContainer for \"f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc\" returns successfully" Mar 21 12:37:22.958724 containerd[1512]: time="2025-03-21T12:37:22.958631882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc\" id:\"8e35ede81cafe38bde1dbdb108dbdaa60bb4e4546bea566d33ed836cdb5c5557\" pid:4780 exited_at:{seconds:1742560642 nanos:958291772}" Mar 21 12:37:23.313052 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 21 12:37:23.819385 kubelet[2743]: E0321 12:37:23.819331 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:25.223186 kubelet[2743]: E0321 12:37:25.223132 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:25.471397 containerd[1512]: time="2025-03-21T12:37:25.471337886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc\" id:\"7f1c7ae5e322a2f8dcc52c5b0cf354b5ef86cd5c9a8147696208de62fbb57146\" pid:5089 exit_status:1 exited_at:{seconds:1742560645 nanos:470994541}" Mar 21 12:37:26.391954 systemd-networkd[1450]: lxc_health: Link UP Mar 21 12:37:26.394481 systemd-networkd[1450]: lxc_health: Gained carrier Mar 21 12:37:26.565136 kubelet[2743]: E0321 12:37:26.564662 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:27.224496 kubelet[2743]: E0321 12:37:27.223971 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:27.239177 kubelet[2743]: I0321 12:37:27.239105 2743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xvqrk" podStartSLOduration=9.23908521 podStartE2EDuration="9.23908521s" podCreationTimestamp="2025-03-21 12:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:37:23.834630932 +0000 UTC m=+83.360642619" watchObservedRunningTime="2025-03-21 12:37:27.23908521 +0000 UTC m=+86.765096897" Mar 21 12:37:27.591649 containerd[1512]: time="2025-03-21T12:37:27.591460585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc\" id:\"bce284f9f0edf66d85a9c0af70aaef05189b3a4237389056b14eeaa58b7ba9ae\" pid:5352 exited_at:{seconds:1742560647 nanos:591095469}" Mar 21 12:37:27.825707 kubelet[2743]: E0321 12:37:27.825661 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:28.385250 systemd-networkd[1450]: lxc_health: Gained IPv6LL Mar 21 12:37:28.827682 kubelet[2743]: E0321 12:37:28.827640 2743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 21 12:37:29.721953 containerd[1512]: time="2025-03-21T12:37:29.721906129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc\" id:\"78b91bf269038e25fdef2a97f7a9dbf347c60f2c93e1c518f98b2e64323233c4\" pid:5386 exited_at:{seconds:1742560649 nanos:721284505}" Mar 21 12:37:31.810618 containerd[1512]: time="2025-03-21T12:37:31.810555551Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc\" id:\"a5e00992cfe9acfa9a48e075aed1aa059dadaab2df19b1618597f2155c94d189\" pid:5408 exited_at:{seconds:1742560651 nanos:810153305}" Mar 21 12:37:33.914329 containerd[1512]: time="2025-03-21T12:37:33.914274801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5d695ca931fffa5cb77fdd0cd8d2c576afd8753e3e846ed01d887ecd00b2ddc\" id:\"2fa43bb8f59be9eab109bd007ed3e1ffd4ea76e03ead805e261f1d499c8b70e7\" pid:5432 exited_at:{seconds:1742560653 nanos:913940665}" Mar 21 12:37:33.920630 sshd[4518]: Connection closed by 10.0.0.1 port 45438 Mar 21 12:37:33.921178 sshd-session[4511]: pam_unix(sshd:session): session closed for user core Mar 21 12:37:33.925997 systemd[1]: sshd@26-10.0.0.85:22-10.0.0.1:45438.service: Deactivated successfully. Mar 21 12:37:33.928412 systemd[1]: session-27.scope: Deactivated successfully. Mar 21 12:37:33.929162 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit. Mar 21 12:37:33.930124 systemd-logind[1496]: Removed session 27.