Oct 9 00:57:47.902161 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024 Oct 9 00:57:47.902182 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 00:57:47.902193 kernel: BIOS-provided physical RAM map: Oct 9 00:57:47.902200 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 9 00:57:47.902206 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 9 00:57:47.902212 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 9 00:57:47.902219 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Oct 9 00:57:47.902225 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 9 00:57:47.902231 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 9 00:57:47.902237 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 9 00:57:47.902246 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Oct 9 00:57:47.902252 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 9 00:57:47.902258 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 9 00:57:47.902264 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 9 00:57:47.902272 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 9 00:57:47.902278 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 9 00:57:47.902287 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 9 00:57:47.902294 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 9 00:57:47.902300 kernel: BIOS-e820: [mem 0x00000000ffe00000-0x00000000ffffffff] reserved Oct 9 00:57:47.902307 kernel: NX (Execute Disable) protection: active Oct 9 00:57:47.902313 kernel: APIC: Static calls initialized Oct 9 00:57:47.902320 kernel: e820: update [mem 0x9b66b018-0x9b674c57] usable ==> usable Oct 9 00:57:47.902327 kernel: e820: update [mem 0x9b66b018-0x9b674c57] usable ==> usable Oct 9 00:57:47.902333 kernel: e820: update [mem 0x9b62e018-0x9b66ae57] usable ==> usable Oct 9 00:57:47.902340 kernel: e820: update [mem 0x9b62e018-0x9b66ae57] usable ==> usable Oct 9 00:57:47.902346 kernel: extended physical RAM map: Oct 9 00:57:47.902353 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Oct 9 00:57:47.902362 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Oct 9 00:57:47.902368 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Oct 9 00:57:47.902375 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Oct 9 00:57:47.902382 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Oct 9 00:57:47.902388 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Oct 9 00:57:47.902395 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Oct 9 00:57:47.902401 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b62e017] usable Oct 9 00:57:47.902408 kernel: 
reserve setup_data: [mem 0x000000009b62e018-0x000000009b66ae57] usable Oct 9 00:57:47.902415 kernel: reserve setup_data: [mem 0x000000009b66ae58-0x000000009b66b017] usable Oct 9 00:57:47.902421 kernel: reserve setup_data: [mem 0x000000009b66b018-0x000000009b674c57] usable Oct 9 00:57:47.902428 kernel: reserve setup_data: [mem 0x000000009b674c58-0x000000009c8eefff] usable Oct 9 00:57:47.902436 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Oct 9 00:57:47.902443 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Oct 9 00:57:47.902453 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Oct 9 00:57:47.902460 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Oct 9 00:57:47.902467 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Oct 9 00:57:47.902474 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Oct 9 00:57:47.902483 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 9 00:57:47.902490 kernel: reserve setup_data: [mem 0x00000000ffe00000-0x00000000ffffffff] reserved Oct 9 00:57:47.902497 kernel: efi: EFI v2.7 by EDK II Oct 9 00:57:47.902504 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b6b3018 RNG=0x9cb73018 Oct 9 00:57:47.902511 kernel: random: crng init done Oct 9 00:57:47.902518 kernel: efi: Remove mem127: MMIO range=[0xffe00000-0xffffffff] (2MB) from e820 map Oct 9 00:57:47.902525 kernel: e820: remove [mem 0xffe00000-0xffffffff] reserved Oct 9 00:57:47.902531 kernel: secureboot: Secure boot disabled Oct 9 00:57:47.902538 kernel: SMBIOS 2.8 present. Oct 9 00:57:47.902545 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Oct 9 00:57:47.902554 kernel: Hypervisor detected: KVM Oct 9 00:57:47.902561 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 9 00:57:47.902568 kernel: kvm-clock: using sched offset of 4530272636 cycles Oct 9 00:57:47.902575 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 9 00:57:47.902583 kernel: tsc: Detected 2794.750 MHz processor Oct 9 00:57:47.902590 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 9 00:57:47.902598 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 9 00:57:47.902605 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Oct 9 00:57:47.902612 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 9 00:57:47.902619 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 9 00:57:47.902628 kernel: Using GB pages for direct mapping Oct 9 00:57:47.902635 kernel: ACPI: Early table checksum verification disabled Oct 9 00:57:47.902643 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Oct 9 00:57:47.902650 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 9 00:57:47.902657 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:47.902664 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:47.902671 kernel: ACPI: FACS 0x000000009CBDD000 000040 Oct 9 00:57:47.902678 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:47.902685 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:47.902694 kernel: ACPI: MCFG 0x000000009CB76000 
00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:47.902701 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:57:47.902708 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 9 00:57:47.902715 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Oct 9 00:57:47.902722 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Oct 9 00:57:47.902730 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Oct 9 00:57:47.902739 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Oct 9 00:57:47.902750 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Oct 9 00:57:47.902803 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Oct 9 00:57:47.902814 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Oct 9 00:57:47.902824 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Oct 9 00:57:47.902833 kernel: No NUMA configuration found Oct 9 00:57:47.902843 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Oct 9 00:57:47.902853 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Oct 9 00:57:47.902863 kernel: Zone ranges: Oct 9 00:57:47.902873 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 9 00:57:47.902883 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Oct 9 00:57:47.902892 kernel: Normal empty Oct 9 00:57:47.902906 kernel: Movable zone start for each node Oct 9 00:57:47.902915 kernel: Early memory node ranges Oct 9 00:57:47.902925 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Oct 9 00:57:47.902934 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Oct 9 00:57:47.902942 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Oct 9 00:57:47.902949 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Oct 9 00:57:47.902956 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Oct 9 00:57:47.902963 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Oct 9 00:57:47.902970 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Oct 9 00:57:47.902980 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 00:57:47.902987 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Oct 9 00:57:47.902994 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Oct 9 00:57:47.903001 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 9 00:57:47.903008 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Oct 9 00:57:47.903015 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Oct 9 00:57:47.903022 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Oct 9 00:57:47.903029 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 9 00:57:47.903036 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 9 00:57:47.903043 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 9 00:57:47.903052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 9 00:57:47.903059 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 9 00:57:47.903066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 9 00:57:47.903073 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 9 00:57:47.903080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 9 00:57:47.903086 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 9 
00:57:47.903093 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 9 00:57:47.903101 kernel: TSC deadline timer available Oct 9 00:57:47.903116 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 9 00:57:47.903124 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 9 00:57:47.903131 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 9 00:57:47.903140 kernel: kvm-guest: setup PV sched yield Oct 9 00:57:47.903147 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 9 00:57:47.903155 kernel: Booting paravirtualized kernel on KVM Oct 9 00:57:47.903162 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 9 00:57:47.903170 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 9 00:57:47.903177 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u524288 Oct 9 00:57:47.903185 kernel: pcpu-alloc: s196904 r8192 d32472 u524288 alloc=1*2097152 Oct 9 00:57:47.903194 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 9 00:57:47.903201 kernel: kvm-guest: PV spinlocks enabled Oct 9 00:57:47.903208 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 9 00:57:47.903217 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 00:57:47.903226 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 00:57:47.903233 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 9 00:57:47.903241 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 00:57:47.903250 kernel: Fallback order for Node 0: 0 Oct 9 00:57:47.903257 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Oct 9 00:57:47.903265 kernel: Policy zone: DMA32 Oct 9 00:57:47.903272 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 00:57:47.903280 kernel: Memory: 2395860K/2567000K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 170884K reserved, 0K cma-reserved) Oct 9 00:57:47.903287 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 9 00:57:47.903295 kernel: ftrace: allocating 37786 entries in 148 pages Oct 9 00:57:47.903302 kernel: ftrace: allocated 148 pages with 3 groups Oct 9 00:57:47.903309 kernel: Dynamic Preempt: voluntary Oct 9 00:57:47.903319 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 00:57:47.903327 kernel: rcu: RCU event tracing is enabled. Oct 9 00:57:47.903334 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 9 00:57:47.903342 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 00:57:47.903349 kernel: Rude variant of Tasks RCU enabled. Oct 9 00:57:47.903357 kernel: Tracing variant of Tasks RCU enabled. Oct 9 00:57:47.903364 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 9 00:57:47.903371 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 9 00:57:47.903379 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 9 00:57:47.903388 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Oct 9 00:57:47.903395 kernel: Console: colour dummy device 80x25 Oct 9 00:57:47.903403 kernel: printk: console [ttyS0] enabled Oct 9 00:57:47.903410 kernel: ACPI: Core revision 20230628 Oct 9 00:57:47.903417 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 9 00:57:47.903425 kernel: APIC: Switch to symmetric I/O mode setup Oct 9 00:57:47.903432 kernel: x2apic enabled Oct 9 00:57:47.903440 kernel: APIC: Switched APIC routing to: physical x2apic Oct 9 00:57:47.903447 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 9 00:57:47.903457 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 9 00:57:47.903464 kernel: kvm-guest: setup PV IPIs Oct 9 00:57:47.903471 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 9 00:57:47.903478 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 9 00:57:47.903486 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750) Oct 9 00:57:47.903493 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 9 00:57:47.903500 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 9 00:57:47.903508 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 9 00:57:47.903515 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 9 00:57:47.903524 kernel: Spectre V2 : Mitigation: Retpolines Oct 9 00:57:47.903532 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 9 00:57:47.903539 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 9 00:57:47.903547 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 9 00:57:47.903554 kernel: RETBleed: Mitigation: untrained return thunk Oct 9 00:57:47.903561 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 9 00:57:47.903569 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 9 00:57:47.903576 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 9 00:57:47.903584 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 9 00:57:47.903594 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 9 00:57:47.903601 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 9 00:57:47.903609 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 9 00:57:47.903617 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 9 00:57:47.903627 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 9 00:57:47.903637 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 9 00:57:47.903648 kernel: Freeing SMP alternatives memory: 32K Oct 9 00:57:47.903659 kernel: pid_max: default: 32768 minimum: 301 Oct 9 00:57:47.903670 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 9 00:57:47.903680 kernel: landlock: Up and running. Oct 9 00:57:47.903690 kernel: SELinux: Initializing. 
Oct 9 00:57:47.903700 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 00:57:47.903711 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 00:57:47.903721 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 9 00:57:47.903731 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:57:47.903742 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:57:47.903752 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:57:47.903794 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 9 00:57:47.903804 kernel: ... version: 0 Oct 9 00:57:47.903813 kernel: ... bit width: 48 Oct 9 00:57:47.903823 kernel: ... generic registers: 6 Oct 9 00:57:47.903833 kernel: ... value mask: 0000ffffffffffff Oct 9 00:57:47.903843 kernel: ... max period: 00007fffffffffff Oct 9 00:57:47.903853 kernel: ... fixed-purpose events: 0 Oct 9 00:57:47.903860 kernel: ... event mask: 000000000000003f Oct 9 00:57:47.903868 kernel: signal: max sigframe size: 1776 Oct 9 00:57:47.903878 kernel: rcu: Hierarchical SRCU implementation. Oct 9 00:57:47.903885 kernel: rcu: Max phase no-delay instances is 400. Oct 9 00:57:47.903893 kernel: smp: Bringing up secondary CPUs ... Oct 9 00:57:47.903900 kernel: smpboot: x86: Booting SMP configuration: Oct 9 00:57:47.903915 kernel: .... node #0, CPUs: #1 #2 #3 Oct 9 00:57:47.903928 kernel: smp: Brought up 1 node, 4 CPUs Oct 9 00:57:47.903936 kernel: smpboot: Max logical packages: 1 Oct 9 00:57:47.903944 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Oct 9 00:57:47.903951 kernel: devtmpfs: initialized Oct 9 00:57:47.903958 kernel: x86/mm: Memory block size: 128MB Oct 9 00:57:47.903970 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Oct 9 00:57:47.903977 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Oct 9 00:57:47.903985 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Oct 9 00:57:47.903992 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Oct 9 00:57:47.903999 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Oct 9 00:57:47.904007 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 00:57:47.904014 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 9 00:57:47.904022 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 00:57:47.904031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 00:57:47.904039 kernel: audit: initializing netlink subsys (disabled) Oct 9 00:57:47.904046 kernel: audit: type=2000 audit(1728435468.219:1): state=initialized audit_enabled=0 res=1 Oct 9 00:57:47.904053 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 00:57:47.904060 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 9 00:57:47.904068 kernel: cpuidle: using governor menu Oct 9 00:57:47.904075 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 00:57:47.904082 kernel: dca service started, version 1.12.1 Oct 9 00:57:47.904090 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 9 00:57:47.904099 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 9 
00:57:47.904106 kernel: PCI: Using configuration type 1 for base access Oct 9 00:57:47.904114 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 9 00:57:47.904121 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 00:57:47.904128 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 00:57:47.904136 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 00:57:47.904143 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 00:57:47.904150 kernel: ACPI: Added _OSI(Module Device) Oct 9 00:57:47.904158 kernel: ACPI: Added _OSI(Processor Device) Oct 9 00:57:47.904167 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 00:57:47.904174 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 00:57:47.904182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 00:57:47.904189 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 9 00:57:47.904196 kernel: ACPI: Interpreter enabled Oct 9 00:57:47.904204 kernel: ACPI: PM: (supports S0 S3 S5) Oct 9 00:57:47.904211 kernel: ACPI: Using IOAPIC for interrupt routing Oct 9 00:57:47.904218 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 9 00:57:47.904226 kernel: PCI: Using E820 reservations for host bridge windows Oct 9 00:57:47.904235 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 9 00:57:47.904242 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 00:57:47.904416 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 9 00:57:47.904553 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 9 00:57:47.904703 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 9 00:57:47.904715 kernel: PCI host bridge to bus 0000:00 Oct 9 00:57:47.904866 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 9 00:57:47.904995 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 9 00:57:47.905109 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 9 00:57:47.905217 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Oct 9 00:57:47.905326 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 9 00:57:47.905457 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Oct 9 00:57:47.905590 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 00:57:47.905733 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 9 00:57:47.905900 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 9 00:57:47.906033 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Oct 9 00:57:47.906152 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Oct 9 00:57:47.906292 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Oct 9 00:57:47.906426 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Oct 9 00:57:47.906547 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 9 00:57:47.906683 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 9 00:57:47.906828 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Oct 9 00:57:47.906950 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Oct 9 00:57:47.907070 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Oct 9 00:57:47.907227 kernel: 
pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 9 00:57:47.907364 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Oct 9 00:57:47.907485 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Oct 9 00:57:47.907610 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Oct 9 00:57:47.907738 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 9 00:57:47.907890 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Oct 9 00:57:47.908019 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Oct 9 00:57:47.908171 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Oct 9 00:57:47.908297 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Oct 9 00:57:47.908425 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 9 00:57:47.908549 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 9 00:57:47.908677 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 9 00:57:47.908824 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Oct 9 00:57:47.908968 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Oct 9 00:57:47.909112 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 9 00:57:47.909233 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Oct 9 00:57:47.909244 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 9 00:57:47.909256 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 9 00:57:47.909264 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 9 00:57:47.909272 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 9 00:57:47.909279 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 9 00:57:47.909287 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 9 00:57:47.909295 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 9 00:57:47.909302 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 9 00:57:47.909310 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 9 00:57:47.909318 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 9 00:57:47.909327 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 9 00:57:47.909335 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 9 00:57:47.909343 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 9 00:57:47.909351 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 9 00:57:47.909358 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 9 00:57:47.909366 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 9 00:57:47.909373 kernel: iommu: Default domain type: Translated Oct 9 00:57:47.909381 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 9 00:57:47.909389 kernel: efivars: Registered efivars operations Oct 9 00:57:47.909398 kernel: PCI: Using ACPI for IRQ routing Oct 9 00:57:47.909406 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 9 00:57:47.909413 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Oct 9 00:57:47.909421 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Oct 9 00:57:47.909428 kernel: e820: reserve RAM buffer [mem 0x9b62e018-0x9bffffff] Oct 9 00:57:47.909436 kernel: e820: reserve RAM buffer [mem 0x9b66b018-0x9bffffff] Oct 9 00:57:47.909444 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Oct 9 00:57:47.909451 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Oct 9 00:57:47.909573 kernel: 
pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 9 00:57:47.909692 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 9 00:57:47.909862 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 9 00:57:47.909873 kernel: vgaarb: loaded Oct 9 00:57:47.909881 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 9 00:57:47.909889 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 9 00:57:47.909896 kernel: clocksource: Switched to clocksource kvm-clock Oct 9 00:57:47.909904 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 00:57:47.909911 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 00:57:47.909923 kernel: pnp: PnP ACPI init Oct 9 00:57:47.910055 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 9 00:57:47.910066 kernel: pnp: PnP ACPI: found 6 devices Oct 9 00:57:47.910074 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 9 00:57:47.910081 kernel: NET: Registered PF_INET protocol family Oct 9 00:57:47.910089 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 9 00:57:47.910096 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 9 00:57:47.910104 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 00:57:47.910114 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 00:57:47.910122 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 9 00:57:47.910129 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 9 00:57:47.910137 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 00:57:47.910145 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 00:57:47.910152 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 00:57:47.910160 kernel: NET: Registered PF_XDP protocol family Oct 9 00:57:47.910277 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Oct 9 00:57:47.910400 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Oct 9 00:57:47.910535 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 9 00:57:47.910660 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 9 00:57:47.910795 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 9 00:57:47.910906 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 9 00:57:47.911014 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 9 00:57:47.911121 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Oct 9 00:57:47.911131 kernel: PCI: CLS 0 bytes, default 64 Oct 9 00:57:47.911143 kernel: Initialise system trusted keyrings Oct 9 00:57:47.911166 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 9 00:57:47.911176 kernel: Key type asymmetric registered Oct 9 00:57:47.911187 kernel: Asymmetric key parser 'x509' registered Oct 9 00:57:47.911197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 9 00:57:47.911207 kernel: io scheduler mq-deadline registered Oct 9 00:57:47.911217 kernel: io scheduler kyber registered Oct 9 00:57:47.911228 kernel: io scheduler bfq registered Oct 9 00:57:47.911239 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 9 00:57:47.911254 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 9 
00:57:47.911265 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 9 00:57:47.911276 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 9 00:57:47.911286 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 00:57:47.911297 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 9 00:57:47.911308 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 9 00:57:47.911319 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 9 00:57:47.911329 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 9 00:57:47.911459 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 9 00:57:47.911474 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 9 00:57:47.911586 kernel: rtc_cmos 00:04: registered as rtc0 Oct 9 00:57:47.911697 kernel: rtc_cmos 00:04: setting system clock to 2024-10-09T00:57:47 UTC (1728435467) Oct 9 00:57:47.911893 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 9 00:57:47.911905 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 9 00:57:47.911913 kernel: efifb: probing for efifb Oct 9 00:57:47.911921 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 9 00:57:47.911929 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 9 00:57:47.911940 kernel: efifb: scrolling: redraw Oct 9 00:57:47.911948 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 9 00:57:47.911955 kernel: Console: switching to colour frame buffer device 160x50 Oct 9 00:57:47.911963 kernel: fb0: EFI VGA frame buffer device Oct 9 00:57:47.911973 kernel: pstore: Using crash dump compression: deflate Oct 9 00:57:47.911981 kernel: pstore: Registered efi_pstore as persistent store backend Oct 9 00:57:47.911991 kernel: NET: Registered PF_INET6 protocol family Oct 9 00:57:47.911999 kernel: Segment Routing with IPv6 Oct 9 00:57:47.912007 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 00:57:47.912018 kernel: NET: Registered PF_PACKET protocol family Oct 9 00:57:47.912029 kernel: Key type dns_resolver registered Oct 9 00:57:47.912040 kernel: IPI shorthand broadcast: enabled Oct 9 00:57:47.912051 kernel: sched_clock: Marking stable (565010735, 135249823)->(767668007, -67407449) Oct 9 00:57:47.912062 kernel: registered taskstats version 1 Oct 9 00:57:47.912073 kernel: Loading compiled-in X.509 certificates Oct 9 00:57:47.912087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6' Oct 9 00:57:47.912098 kernel: Key type .fscrypt registered Oct 9 00:57:47.912109 kernel: Key type fscrypt-provisioning registered Oct 9 00:57:47.912120 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 9 00:57:47.912131 kernel: ima: Allocated hash algorithm: sha1 Oct 9 00:57:47.912142 kernel: ima: No architecture policies found Oct 9 00:57:47.912152 kernel: clk: Disabling unused clocks Oct 9 00:57:47.912160 kernel: Freeing unused kernel image (initmem) memory: 42872K Oct 9 00:57:47.912167 kernel: Write protecting the kernel read-only data: 36864k Oct 9 00:57:47.912178 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Oct 9 00:57:47.912186 kernel: Run /init as init process Oct 9 00:57:47.912193 kernel: with arguments: Oct 9 00:57:47.912201 kernel: /init Oct 9 00:57:47.912209 kernel: with environment: Oct 9 00:57:47.912216 kernel: HOME=/ Oct 9 00:57:47.912224 kernel: TERM=linux Oct 9 00:57:47.912232 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 00:57:47.912242 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 00:57:47.912254 systemd[1]: Detected virtualization kvm. Oct 9 00:57:47.912262 systemd[1]: Detected architecture x86-64. Oct 9 00:57:47.912270 systemd[1]: Running in initrd. Oct 9 00:57:47.912278 systemd[1]: No hostname configured, using default hostname. Oct 9 00:57:47.912286 systemd[1]: Hostname set to . Oct 9 00:57:47.912294 systemd[1]: Initializing machine ID from VM UUID. Oct 9 00:57:47.912303 systemd[1]: Queued start job for default target initrd.target. Oct 9 00:57:47.912313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:57:47.912321 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:57:47.912330 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 9 00:57:47.912338 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 00:57:47.912347 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 00:57:47.912355 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 00:57:47.912365 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 00:57:47.912376 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 00:57:47.912384 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:57:47.912393 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:57:47.912401 systemd[1]: Reached target paths.target - Path Units. Oct 9 00:57:47.912409 systemd[1]: Reached target slices.target - Slice Units. Oct 9 00:57:47.912417 systemd[1]: Reached target swap.target - Swaps. Oct 9 00:57:47.912425 systemd[1]: Reached target timers.target - Timer Units. Oct 9 00:57:47.912434 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 00:57:47.912444 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 00:57:47.912452 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 00:57:47.912461 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Oct 9 00:57:47.912469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:57:47.912477 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 00:57:47.912486 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:57:47.912494 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 00:57:47.912502 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 00:57:47.912510 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 00:57:47.912521 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 00:57:47.912529 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 00:57:47.912537 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 00:57:47.912546 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 00:57:47.912554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:57:47.912562 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 00:57:47.912570 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:57:47.912578 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 00:57:47.912608 systemd-journald[193]: Collecting audit messages is disabled. Oct 9 00:57:47.912629 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 00:57:47.912638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:57:47.912647 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:57:47.912655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 00:57:47.912664 systemd-journald[193]: Journal started Oct 9 00:57:47.912683 systemd-journald[193]: Runtime Journal (/run/log/journal/a6af9629b47c48cca4a74833a8c34214) is 6.0M, max 48.3M, 42.2M free. Oct 9 00:57:47.914927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 00:57:47.916789 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 00:57:47.919330 systemd-modules-load[194]: Inserted module 'overlay' Oct 9 00:57:47.920941 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 00:57:47.925305 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:57:47.934913 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:57:47.935554 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:57:47.942887 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 00:57:47.956135 dracut-cmdline[221]: dracut-dracut-053 Oct 9 00:57:47.957967 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 9 00:57:47.958936 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a Oct 9 00:57:47.965021 systemd-modules-load[194]: Inserted module 'br_netfilter' Oct 9 00:57:47.965938 kernel: Bridge firewalling registered Oct 9 00:57:47.967403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 00:57:47.971930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:57:47.982403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:57:47.990948 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 00:57:48.019132 systemd-resolved[264]: Positive Trust Anchors: Oct 9 00:57:48.019147 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 00:57:48.019177 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 00:57:48.021580 systemd-resolved[264]: Defaulting to hostname 'linux'. Oct 9 00:57:48.022604 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 00:57:48.029045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:57:48.046796 kernel: SCSI subsystem initialized Oct 9 00:57:48.055792 kernel: Loading iSCSI transport class v2.0-870. Oct 9 00:57:48.065797 kernel: iscsi: registered transport (tcp) Oct 9 00:57:48.086791 kernel: iscsi: registered transport (qla4xxx) Oct 9 00:57:48.086813 kernel: QLogic iSCSI HBA Driver Oct 9 00:57:48.135804 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 00:57:48.140988 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 00:57:48.166515 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 00:57:48.166565 kernel: device-mapper: uevent: version 1.0.3 Oct 9 00:57:48.166578 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 00:57:48.207803 kernel: raid6: avx2x4 gen() 30695 MB/s Oct 9 00:57:48.224787 kernel: raid6: avx2x2 gen() 31113 MB/s Oct 9 00:57:48.241871 kernel: raid6: avx2x1 gen() 26061 MB/s Oct 9 00:57:48.241895 kernel: raid6: using algorithm avx2x2 gen() 31113 MB/s Oct 9 00:57:48.259877 kernel: raid6: .... xor() 19994 MB/s, rmw enabled Oct 9 00:57:48.259915 kernel: raid6: using avx2x2 recovery algorithm Oct 9 00:57:48.279801 kernel: xor: automatically using best checksumming function avx Oct 9 00:57:48.432815 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 00:57:48.446281 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Oct 9 00:57:48.454996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:57:48.470421 systemd-udevd[414]: Using default interface naming scheme 'v255'. Oct 9 00:57:48.476109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:57:48.483919 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 00:57:48.499562 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Oct 9 00:57:48.533092 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 00:57:48.550902 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 00:57:48.615233 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:57:48.625918 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 00:57:48.639000 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 00:57:48.642175 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 00:57:48.645181 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:57:48.647653 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 00:57:48.654822 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 9 00:57:48.656825 kernel: cryptd: max_cpu_qlen set to 1000 Oct 9 00:57:48.656966 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 00:57:48.671799 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 9 00:57:48.673807 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 00:57:48.679529 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 00:57:48.687336 kernel: AVX2 version of gcm_enc/dec engaged. Oct 9 00:57:48.687371 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 00:57:48.687383 kernel: GPT:9289727 != 19775487 Oct 9 00:57:48.687393 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 00:57:48.687403 kernel: GPT:9289727 != 19775487 Oct 9 00:57:48.687413 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 00:57:48.687423 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:57:48.687433 kernel: libata version 3.00 loaded. Oct 9 00:57:48.687443 kernel: AES CTR mode by8 optimization enabled Oct 9 00:57:48.679871 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:57:48.686267 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:57:48.687428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 00:57:48.689048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:57:48.691776 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:57:48.699035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:57:48.706576 kernel: ahci 0000:00:1f.2: version 3.0 Oct 9 00:57:48.706808 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 9 00:57:48.706826 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 9 00:57:48.708140 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 9 00:57:48.711571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 9 00:57:48.717276 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (460) Oct 9 00:57:48.717318 kernel: scsi host0: ahci Oct 9 00:57:48.717560 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Oct 9 00:57:48.724067 kernel: scsi host1: ahci Oct 9 00:57:48.724443 kernel: scsi host2: ahci Oct 9 00:57:48.724626 kernel: scsi host3: ahci Oct 9 00:57:48.724906 kernel: scsi host4: ahci Oct 9 00:57:48.728569 kernel: scsi host5: ahci Oct 9 00:57:48.728818 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Oct 9 00:57:48.728835 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Oct 9 00:57:48.728848 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Oct 9 00:57:48.730791 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Oct 9 00:57:48.730815 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Oct 9 00:57:48.731866 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Oct 9 00:57:48.735394 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 00:57:48.742837 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 00:57:48.747940 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 00:57:48.749346 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 00:57:48.764658 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 00:57:48.778895 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 00:57:48.781375 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:57:48.789179 disk-uuid[568]: Primary Header is updated. Oct 9 00:57:48.789179 disk-uuid[568]: Secondary Entries is updated. Oct 9 00:57:48.789179 disk-uuid[568]: Secondary Header is updated. Oct 9 00:57:48.793808 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:57:48.798801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:57:48.800224 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 9 00:57:49.044971 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 9 00:57:49.045046 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 9 00:57:49.045058 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 9 00:57:49.045812 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 9 00:57:49.046798 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 9 00:57:49.047796 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 9 00:57:49.048796 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 9 00:57:49.048810 kernel: ata3.00: applying bridge limits Oct 9 00:57:49.049794 kernel: ata3.00: configured for UDMA/100 Oct 9 00:57:49.051793 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 9 00:57:49.090331 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 9 00:57:49.090569 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 9 00:57:49.103797 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 9 00:57:49.799797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:57:49.799920 disk-uuid[572]: The operation has completed successfully. Oct 9 00:57:49.828549 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 00:57:49.828675 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 00:57:49.849897 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 00:57:49.853488 sh[593]: Success Oct 9 00:57:49.865801 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 9 00:57:49.899763 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 00:57:49.917442 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 00:57:49.920755 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 00:57:49.933963 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377 Oct 9 00:57:49.933997 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 9 00:57:49.934008 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 00:57:49.936262 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 00:57:49.936276 kernel: BTRFS info (device dm-0): using free space tree Oct 9 00:57:49.941691 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 00:57:49.942284 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 00:57:49.961884 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 00:57:49.963976 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 00:57:49.973824 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:57:49.973853 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 00:57:49.973864 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:57:49.977787 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:57:49.986664 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 00:57:49.988884 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:57:49.999676 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Oct 9 00:57:50.009925 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 00:57:50.071072 ignition[687]: Ignition 2.19.0 Oct 9 00:57:50.071086 ignition[687]: Stage: fetch-offline Oct 9 00:57:50.071131 ignition[687]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:57:50.071143 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:57:50.071258 ignition[687]: parsed url from cmdline: "" Oct 9 00:57:50.071263 ignition[687]: no config URL provided Oct 9 00:57:50.071271 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 00:57:50.071282 ignition[687]: no config at "/usr/lib/ignition/user.ign" Oct 9 00:57:50.071316 ignition[687]: op(1): [started] loading QEMU firmware config module Oct 9 00:57:50.071322 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 9 00:57:50.079337 ignition[687]: op(1): [finished] loading QEMU firmware config module Oct 9 00:57:50.091232 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 00:57:50.099176 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 00:57:50.123432 ignition[687]: parsing config with SHA512: f2a39e8e4657e69a5a81e0e1517a7a604bfb32b3c76c865c66e617dd18d4e78aec80767210e19a7deb348fa8c63ed0ae4d5e2a3ca13986330b39dd613d9f77f2 Oct 9 00:57:50.124459 systemd-networkd[781]: lo: Link UP Oct 9 00:57:50.124467 systemd-networkd[781]: lo: Gained carrier Oct 9 00:57:50.127231 systemd-networkd[781]: Enumeration completed Oct 9 00:57:50.127315 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 00:57:50.129193 systemd[1]: Reached target network.target - Network. Oct 9 00:57:50.129676 unknown[687]: fetched base config from "system" Oct 9 00:57:50.130413 ignition[687]: fetch-offline: fetch-offline passed Oct 9 00:57:50.129686 unknown[687]: fetched user config from "qemu" Oct 9 00:57:50.130483 ignition[687]: Ignition finished successfully Oct 9 00:57:50.133057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 00:57:50.134205 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 9 00:57:50.134367 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:57:50.134371 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 00:57:50.135798 systemd-networkd[781]: eth0: Link UP Oct 9 00:57:50.135802 systemd-networkd[781]: eth0: Gained carrier Oct 9 00:57:50.135809 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:57:50.145023 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 00:57:50.155844 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 00:57:50.157600 ignition[784]: Ignition 2.19.0 Oct 9 00:57:50.157610 ignition[784]: Stage: kargs Oct 9 00:57:50.157798 ignition[784]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:57:50.157809 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:57:50.158584 ignition[784]: kargs: kargs passed Oct 9 00:57:50.158620 ignition[784]: Ignition finished successfully Oct 9 00:57:50.162725 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Oct 9 00:57:50.168043 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 00:57:50.180201 ignition[794]: Ignition 2.19.0 Oct 9 00:57:50.180214 ignition[794]: Stage: disks Oct 9 00:57:50.180407 ignition[794]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:57:50.180420 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:57:50.181476 ignition[794]: disks: disks passed Oct 9 00:57:50.181529 ignition[794]: Ignition finished successfully Oct 9 00:57:50.187344 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 00:57:50.189742 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 00:57:50.191106 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 00:57:50.192531 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 00:57:50.194916 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 00:57:50.196153 systemd[1]: Reached target basic.target - Basic System. Oct 9 00:57:50.208945 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 00:57:50.222942 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 00:57:50.229788 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 00:57:50.238927 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 00:57:50.321790 kernel: EXT4-fs (vda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none. Oct 9 00:57:50.322131 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 00:57:50.323041 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 00:57:50.333911 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 00:57:50.335499 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 00:57:50.336474 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 00:57:50.336523 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 00:57:50.336550 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 00:57:50.345479 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 00:57:50.349603 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Oct 9 00:57:50.349630 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:57:50.349641 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 00:57:50.349722 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 00:57:50.354220 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:57:50.356816 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:57:50.357978 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 00:57:50.387981 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 00:57:50.392303 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Oct 9 00:57:50.396423 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 00:57:50.400866 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 00:57:50.475897 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 00:57:50.493921 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 00:57:50.497932 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 00:57:50.502797 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:57:50.505521 systemd-resolved[264]: Detected conflict on linux IN A 10.0.0.51 Oct 9 00:57:50.505538 systemd-resolved[264]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Oct 9 00:57:50.522414 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 00:57:50.524630 ignition[928]: INFO : Ignition 2.19.0 Oct 9 00:57:50.524630 ignition[928]: INFO : Stage: mount Oct 9 00:57:50.524630 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:57:50.524630 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:57:50.524630 ignition[928]: INFO : mount: mount passed Oct 9 00:57:50.524630 ignition[928]: INFO : Ignition finished successfully Oct 9 00:57:50.526268 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 00:57:50.532874 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 00:57:50.932482 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 00:57:50.944995 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 00:57:50.952938 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) Oct 9 00:57:50.952979 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608 Oct 9 00:57:50.953000 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 9 00:57:50.953980 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:57:50.957786 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:57:50.958972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 00:57:50.981276 ignition[958]: INFO : Ignition 2.19.0 Oct 9 00:57:50.982462 ignition[958]: INFO : Stage: files Oct 9 00:57:50.982462 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:57:50.982462 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:57:50.985638 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Oct 9 00:57:50.985638 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 00:57:50.985638 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 00:57:50.989866 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 00:57:50.989866 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 00:57:50.989866 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 00:57:50.989866 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 00:57:50.989866 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 9 00:57:50.986698 unknown[958]: wrote ssh authorized keys file for user: core Oct 9 00:57:51.030194 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 00:57:51.100616 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 9 00:57:51.100616 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 9 00:57:51.104516 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Oct 9 00:57:51.603846 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 9 00:57:51.703442 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 9 00:57:51.703442 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 00:57:51.707575 ignition[958]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 00:57:51.707575 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Oct 9 00:57:52.005844 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 9 00:57:52.142195 systemd-networkd[781]: eth0: Gained IPv6LL Oct 9 00:57:52.379344 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Oct 9 00:57:52.379344 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 9 00:57:52.383339 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 00:57:52.383339 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 00:57:52.383339 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 9 00:57:52.383339 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 9 00:57:52.383339 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 00:57:52.383339 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 00:57:52.383339 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 9 00:57:52.383339 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 9 00:57:52.405625 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 00:57:52.410907 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 00:57:52.412534 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 9 00:57:52.412534 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 9 00:57:52.412534 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 00:57:52.412534 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 00:57:52.412534 ignition[958]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 00:57:52.412534 ignition[958]: INFO : files: files passed Oct 9 00:57:52.412534 ignition[958]: INFO : Ignition finished successfully Oct 9 00:57:52.424040 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 00:57:52.434104 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 00:57:52.437298 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 00:57:52.440096 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 00:57:52.441168 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 00:57:52.447283 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Oct 9 00:57:52.451113 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 00:57:52.451113 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 00:57:52.454228 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 00:57:52.457890 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 00:57:52.460656 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 00:57:52.478977 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 00:57:52.507369 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 00:57:52.508574 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 00:57:52.511445 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 00:57:52.513714 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 00:57:52.516120 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 00:57:52.533963 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 00:57:52.551193 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 00:57:52.567916 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 00:57:52.577434 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:57:52.579707 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:57:52.582034 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 00:57:52.583831 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 00:57:52.584835 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 00:57:52.587322 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 00:57:52.589337 systemd[1]: Stopped target basic.target - Basic System. Oct 9 00:57:52.591126 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 00:57:52.593310 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 00:57:52.595601 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 00:57:52.597789 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 00:57:52.599829 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 9 00:57:52.602252 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 00:57:52.624146 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 00:57:52.626144 systemd[1]: Stopped target swap.target - Swaps. Oct 9 00:57:52.627722 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 00:57:52.628714 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 00:57:52.630975 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:57:52.633100 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:57:52.635400 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 00:57:52.636365 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:57:52.638929 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 00:57:52.639915 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 00:57:52.642116 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 00:57:52.643173 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 00:57:52.645894 systemd[1]: Stopped target paths.target - Path Units. Oct 9 00:57:52.647667 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 00:57:52.651819 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:57:52.654537 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 00:57:52.656360 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 00:57:52.658249 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 00:57:52.659144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 00:57:52.661100 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 00:57:52.662004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 00:57:52.664077 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 00:57:52.665260 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 00:57:52.667796 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 00:57:52.668797 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 00:57:52.679913 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 00:57:52.688000 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 00:57:52.689181 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:57:52.690335 ignition[1014]: INFO : Ignition 2.19.0 Oct 9 00:57:52.690335 ignition[1014]: INFO : Stage: umount Oct 9 00:57:52.693044 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:57:52.693044 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:57:52.693044 ignition[1014]: INFO : umount: umount passed Oct 9 00:57:52.693044 ignition[1014]: INFO : Ignition finished successfully Oct 9 00:57:52.706981 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 00:57:52.708915 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 9 00:57:52.710080 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:57:52.712507 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Oct 9 00:57:52.713660 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 00:57:52.718462 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 00:57:52.718593 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 00:57:52.723728 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 00:57:52.723870 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 00:57:52.728000 systemd[1]: Stopped target network.target - Network. Oct 9 00:57:52.732090 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 00:57:52.732164 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 00:57:52.735660 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 00:57:52.735727 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 00:57:52.739217 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 00:57:52.739285 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 00:57:52.742259 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 00:57:52.742315 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 00:57:52.745702 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 00:57:52.748102 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 00:57:52.751278 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 00:57:52.751870 systemd-networkd[781]: eth0: DHCPv6 lease lost Oct 9 00:57:52.753454 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 00:57:52.754583 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 00:57:52.757514 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 00:57:52.758577 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:57:52.771948 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 00:57:52.772944 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 00:57:52.773022 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 00:57:52.775275 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:57:52.778301 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 00:57:52.778431 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 00:57:52.783132 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 00:57:52.783197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:57:52.795277 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 00:57:52.795330 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 00:57:52.797377 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 00:57:52.797426 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:57:52.800074 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 00:57:52.800250 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:57:52.802246 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 00:57:52.802358 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 00:57:52.805501 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Oct 9 00:57:52.805574 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 00:57:52.807205 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 00:57:52.807247 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:57:52.809261 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 00:57:52.809311 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 00:57:52.811801 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 00:57:52.811861 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 00:57:52.813711 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 00:57:52.813791 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:57:52.826993 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 00:57:52.829055 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 00:57:52.829123 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:57:52.831418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 00:57:52.831468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:57:52.834865 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 00:57:52.834986 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 00:57:52.984111 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 00:57:52.984256 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 00:57:52.986758 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 00:57:52.988058 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 00:57:52.988124 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 00:57:53.006018 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 00:57:53.012679 systemd[1]: Switching root. Oct 9 00:57:53.045212 systemd-journald[193]: Journal stopped Oct 9 00:57:54.651749 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Oct 9 00:57:54.651998 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 00:57:54.652020 kernel: SELinux: policy capability open_perms=1 Oct 9 00:57:54.652035 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 00:57:54.652056 kernel: SELinux: policy capability always_check_network=0 Oct 9 00:57:54.652071 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 00:57:54.652087 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 00:57:54.652101 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 00:57:54.652116 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 00:57:54.652131 kernel: audit: type=1403 audit(1728435473.748:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 00:57:54.652147 systemd[1]: Successfully loaded SELinux policy in 52.957ms. Oct 9 00:57:54.652173 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.765ms. 
Oct 9 00:57:54.652191 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 00:57:54.652211 systemd[1]: Detected virtualization kvm. Oct 9 00:57:54.652227 systemd[1]: Detected architecture x86-64. Oct 9 00:57:54.652242 systemd[1]: Detected first boot. Oct 9 00:57:54.652269 systemd[1]: Initializing machine ID from VM UUID. Oct 9 00:57:54.652288 zram_generator::config[1058]: No configuration found. Oct 9 00:57:54.652306 systemd[1]: Populated /etc with preset unit settings. Oct 9 00:57:54.652321 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 9 00:57:54.652336 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 9 00:57:54.652362 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 9 00:57:54.652380 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 00:57:54.652396 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 00:57:54.652413 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 00:57:54.652438 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 00:57:54.652455 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 00:57:54.652470 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 00:57:54.652487 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 00:57:54.652503 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 00:57:54.652523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:57:54.652539 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:57:54.652555 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 00:57:54.652581 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 00:57:54.652598 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 00:57:54.652614 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 00:57:54.652629 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 9 00:57:54.652645 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:57:54.652661 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 9 00:57:54.652681 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 9 00:57:54.652698 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 9 00:57:54.652715 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 00:57:54.652730 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:57:54.652746 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 00:57:54.652761 systemd[1]: Reached target slices.target - Slice Units. Oct 9 00:57:54.652794 systemd[1]: Reached target swap.target - Swaps. 
Oct 9 00:57:54.652818 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 00:57:54.652834 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 00:57:54.652855 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:57:54.652871 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 00:57:54.652886 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:57:54.652903 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 00:57:54.652918 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 00:57:54.652934 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 00:57:54.652950 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 00:57:54.652971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:57:54.652987 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 00:57:54.653003 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 00:57:54.653019 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 00:57:54.653036 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 00:57:54.653052 systemd[1]: Reached target machines.target - Containers. Oct 9 00:57:54.653067 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 00:57:54.653084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:57:54.653100 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 00:57:54.653121 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 00:57:54.653138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:57:54.653154 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 00:57:54.653173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:57:54.653189 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 00:57:54.653211 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:57:54.653228 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 00:57:54.653244 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 9 00:57:54.653265 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 9 00:57:54.653282 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 9 00:57:54.653298 systemd[1]: Stopped systemd-fsck-usr.service. Oct 9 00:57:54.653315 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 00:57:54.653330 kernel: fuse: init (API version 7.39) Oct 9 00:57:54.653346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 00:57:54.653363 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 00:57:54.653380 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Oct 9 00:57:54.653395 kernel: ACPI: bus type drm_connector registered Oct 9 00:57:54.653414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 00:57:54.653453 systemd-journald[1132]: Collecting audit messages is disabled. Oct 9 00:57:54.653488 systemd[1]: verity-setup.service: Deactivated successfully. Oct 9 00:57:54.653504 kernel: loop: module loaded Oct 9 00:57:54.653520 systemd[1]: Stopped verity-setup.service. Oct 9 00:57:54.653536 systemd-journald[1132]: Journal started Oct 9 00:57:54.653576 systemd-journald[1132]: Runtime Journal (/run/log/journal/a6af9629b47c48cca4a74833a8c34214) is 6.0M, max 48.3M, 42.2M free. Oct 9 00:57:54.408321 systemd[1]: Queued start job for default target multi-user.target. Oct 9 00:57:54.654274 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:57:54.429414 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 9 00:57:54.429975 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 9 00:57:54.660794 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 00:57:54.661685 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 00:57:54.663048 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 00:57:54.664430 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 00:57:54.665535 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 00:57:54.666805 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 00:57:54.668081 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 00:57:54.669357 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 00:57:54.670826 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:57:54.672377 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 00:57:54.672545 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 00:57:54.674205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:57:54.674375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:57:54.675960 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 00:57:54.676128 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 00:57:54.677492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:57:54.677667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:57:54.679373 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 00:57:54.679538 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 00:57:54.681068 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:57:54.681242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:57:54.682745 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 00:57:54.684427 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 00:57:54.686034 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 00:57:54.699224 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 00:57:54.710838 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Oct 9 00:57:54.713047 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 00:57:54.714182 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 00:57:54.714209 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 00:57:54.716301 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 00:57:54.719866 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 00:57:54.722344 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 00:57:54.723469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:57:54.725748 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 00:57:54.729348 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 00:57:54.730867 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 00:57:54.733990 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 00:57:54.735176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 00:57:54.736249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:57:54.742944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 00:57:54.746929 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 00:57:54.749443 systemd-journald[1132]: Time spent on flushing to /var/log/journal/a6af9629b47c48cca4a74833a8c34214 is 23.017ms for 1024 entries. Oct 9 00:57:54.749443 systemd-journald[1132]: System Journal (/var/log/journal/a6af9629b47c48cca4a74833a8c34214) is 8.0M, max 195.6M, 187.6M free. Oct 9 00:57:54.798148 systemd-journald[1132]: Received client request to flush runtime journal. Oct 9 00:57:54.798206 kernel: loop0: detected capacity change from 0 to 140992 Oct 9 00:57:54.750961 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 00:57:54.752588 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 00:57:54.754159 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 00:57:54.761747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:57:54.773966 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 00:57:54.775669 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 00:57:54.778390 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 00:57:54.781939 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 00:57:54.795857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:57:54.798609 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 9 00:57:54.801593 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Oct 9 00:57:54.814701 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 00:57:54.820125 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 00:57:54.824954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 00:57:54.827572 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 00:57:54.828460 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 00:57:54.854802 kernel: loop1: detected capacity change from 0 to 138192 Oct 9 00:57:54.855885 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Oct 9 00:57:54.855903 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Oct 9 00:57:54.862311 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:57:54.889875 kernel: loop2: detected capacity change from 0 to 211296 Oct 9 00:57:54.920803 kernel: loop3: detected capacity change from 0 to 140992 Oct 9 00:57:54.932800 kernel: loop4: detected capacity change from 0 to 138192 Oct 9 00:57:54.945816 kernel: loop5: detected capacity change from 0 to 211296 Oct 9 00:57:54.954381 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 9 00:57:54.955138 (sd-merge)[1196]: Merged extensions into '/usr'. Oct 9 00:57:54.959441 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 00:57:54.959597 systemd[1]: Reloading... Oct 9 00:57:55.019919 zram_generator::config[1222]: No configuration found. Oct 9 00:57:55.091912 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 00:57:55.142595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:57:55.191983 systemd[1]: Reloading finished in 231 ms. Oct 9 00:57:55.223729 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 00:57:55.225375 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 00:57:55.241951 systemd[1]: Starting ensure-sysext.service... Oct 9 00:57:55.249709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 00:57:55.256810 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Oct 9 00:57:55.256831 systemd[1]: Reloading... Oct 9 00:57:55.274839 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 00:57:55.275283 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 00:57:55.276343 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 00:57:55.276655 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Oct 9 00:57:55.276734 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Oct 9 00:57:55.280189 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 00:57:55.280202 systemd-tmpfiles[1260]: Skipping /boot Oct 9 00:57:55.292673 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. 
Oct 9 00:57:55.292687 systemd-tmpfiles[1260]: Skipping /boot Oct 9 00:57:55.322805 zram_generator::config[1290]: No configuration found. Oct 9 00:57:55.428218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:57:55.477790 systemd[1]: Reloading finished in 220 ms. Oct 9 00:57:55.496129 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 00:57:55.497936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:57:55.515811 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 00:57:55.518292 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 00:57:55.521047 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 00:57:55.526271 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 00:57:55.529366 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:57:55.531965 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 00:57:55.537653 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:57:55.538054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:57:55.541025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:57:55.546312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:57:55.550605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:57:55.551826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:57:55.554044 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 00:57:55.555069 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:57:55.556121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:57:55.557400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:57:55.559223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:57:55.559402 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:57:55.574649 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 00:57:55.576642 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:57:55.576959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:57:55.579504 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Oct 9 00:57:55.581358 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:57:55.582798 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:57:55.584674 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:57:55.591581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Oct 9 00:57:55.593193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:57:55.595378 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 00:57:55.599984 augenrules[1361]: No rules Oct 9 00:57:55.596906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:57:55.598286 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 00:57:55.598572 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 00:57:55.601726 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 00:57:55.605511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:57:55.606468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:57:55.608920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:57:55.609146 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:57:55.628989 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 00:57:55.630804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:57:55.633102 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 00:57:55.637469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:57:55.651202 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 00:57:55.652502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:57:55.655055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:57:55.662122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 00:57:55.664898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:57:55.676090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:57:55.678007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:57:55.682075 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 00:57:55.692256 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1378) Oct 9 00:57:55.685898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 00:57:55.694065 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1378) Oct 9 00:57:55.687585 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 00:57:55.689554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:57:55.689845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:57:55.691593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:57:55.691837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:57:55.700337 augenrules[1385]: /sbin/augenrules: No change Oct 9 00:57:55.702854 systemd[1]: Finished ensure-sysext.service. 
Oct 9 00:57:55.708813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1390) Oct 9 00:57:55.712474 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 00:57:55.712757 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 00:57:55.715580 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:57:55.715945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:57:55.718074 augenrules[1423]: No rules Oct 9 00:57:55.719851 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 00:57:55.720101 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 00:57:55.733808 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 9 00:57:55.736831 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 00:57:55.736914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 00:57:55.745014 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 00:57:55.746202 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 00:57:55.754320 systemd-resolved[1329]: Positive Trust Anchors: Oct 9 00:57:55.754336 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 00:57:55.754379 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 00:57:55.759203 systemd-resolved[1329]: Defaulting to hostname 'linux'. Oct 9 00:57:55.761903 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 00:57:55.765679 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:57:55.783798 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 9 00:57:55.792789 kernel: ACPI: button: Power Button [PWRF] Oct 9 00:57:55.812027 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 9 00:57:55.816691 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 9 00:57:55.821993 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 9 00:57:55.822218 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 9 00:57:55.829839 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 9 00:57:55.832325 systemd-networkd[1409]: lo: Link UP Oct 9 00:57:55.832754 systemd-networkd[1409]: lo: Gained carrier Oct 9 00:57:55.833875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Oct 9 00:57:55.837104 systemd-networkd[1409]: Enumeration completed Oct 9 00:57:55.837655 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:57:55.837735 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 00:57:55.841585 systemd-networkd[1409]: eth0: Link UP Oct 9 00:57:55.841594 systemd-networkd[1409]: eth0: Gained carrier Oct 9 00:57:55.841607 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:57:55.843030 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 00:57:55.845033 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 00:57:55.851965 systemd[1]: Reached target network.target - Network. Oct 9 00:57:55.862070 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 00:57:55.862823 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 00:57:55.863553 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 00:57:55.863686 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Oct 9 00:57:55.865125 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 00:57:57.217447 systemd-resolved[1329]: Clock change detected. Flushing caches. Oct 9 00:57:57.217600 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 9 00:57:57.217642 systemd-timesyncd[1437]: Initial clock synchronization to Wed 2024-10-09 00:57:57.217415 UTC. Oct 9 00:57:57.233385 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 00:57:57.292552 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 00:57:57.295296 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:57:57.300407 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 00:57:57.300664 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:57:57.305505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:57:57.308939 kernel: kvm_amd: TSC scaling supported Oct 9 00:57:57.308980 kernel: kvm_amd: Nested Virtualization enabled Oct 9 00:57:57.308993 kernel: kvm_amd: Nested Paging enabled Oct 9 00:57:57.309005 kernel: kvm_amd: LBR virtualization supported Oct 9 00:57:57.310778 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 9 00:57:57.310806 kernel: kvm_amd: Virtual GIF supported Oct 9 00:57:57.329558 kernel: EDAC MC: Ver: 3.0.0 Oct 9 00:57:57.362136 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 00:57:57.375209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:57:57.385802 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 00:57:57.392663 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 00:57:57.420697 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 00:57:57.422185 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Oct 9 00:57:57.423301 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 00:57:57.424455 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 00:57:57.425706 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 00:57:57.427135 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 00:57:57.428317 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 00:57:57.429571 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 00:57:57.430793 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 00:57:57.430819 systemd[1]: Reached target paths.target - Path Units. Oct 9 00:57:57.431716 systemd[1]: Reached target timers.target - Timer Units. Oct 9 00:57:57.433325 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 00:57:57.435848 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 00:57:57.443735 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 00:57:57.446094 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 00:57:57.447868 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 00:57:57.449159 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 00:57:57.450245 systemd[1]: Reached target basic.target - Basic System. Oct 9 00:57:57.451331 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:57:57.451361 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:57:57.452428 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 00:57:57.454741 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 00:57:57.458852 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 00:57:57.462390 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 00:57:57.463695 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 00:57:57.465685 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 00:57:57.466468 jq[1466]: false Oct 9 00:57:57.467059 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 00:57:57.472623 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 00:57:57.475089 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 00:57:57.479869 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 00:57:57.485639 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 00:57:57.487394 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 00:57:57.488013 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 00:57:57.495748 systemd[1]: Starting update-engine.service - Update Engine... 
Oct 9 00:57:57.498160 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 00:57:57.503366 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 00:57:57.506034 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 00:57:57.506062 dbus-daemon[1465]: [system] SELinux support is enabled Oct 9 00:57:57.506305 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 00:57:57.506472 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 00:57:57.512129 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 00:57:57.512493 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 00:57:57.515178 update_engine[1475]: I20241009 00:57:57.513900 1475 main.cc:92] Flatcar Update Engine starting Oct 9 00:57:57.516534 update_engine[1475]: I20241009 00:57:57.516486 1475 update_check_scheduler.cc:74] Next update check in 3m3s Oct 9 00:57:57.516775 jq[1482]: true Oct 9 00:57:57.522999 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 00:57:57.523329 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 00:57:57.523362 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 00:57:57.528207 extend-filesystems[1467]: Found loop3 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found loop4 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found loop5 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found sr0 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found vda Oct 9 00:57:57.531331 extend-filesystems[1467]: Found vda1 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found vda2 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found vda3 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found usr Oct 9 00:57:57.531331 extend-filesystems[1467]: Found vda4 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found vda6 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found vda7 Oct 9 00:57:57.531331 extend-filesystems[1467]: Found vda9 Oct 9 00:57:57.531331 extend-filesystems[1467]: Checking size of /dev/vda9 Oct 9 00:57:57.530211 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 00:57:57.560703 tar[1484]: linux-amd64/helm Oct 9 00:57:57.560971 extend-filesystems[1467]: Resized partition /dev/vda9 Oct 9 00:57:57.563725 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 9 00:57:57.563758 jq[1487]: true Oct 9 00:57:57.530244 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 00:57:57.563972 extend-filesystems[1506]: resize2fs 1.47.1 (20-May-2024) Oct 9 00:57:57.536719 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 00:57:57.536743 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 00:57:57.546058 systemd[1]: Started update-engine.service - Update Engine. Oct 9 00:57:57.551770 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Oct 9 00:57:57.582242 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1400) Oct 9 00:57:57.586700 systemd-logind[1474]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 00:57:57.590669 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 00:57:57.600349 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 9 00:57:57.601769 systemd-logind[1474]: New seat seat0. Oct 9 00:57:57.605438 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 00:57:57.630931 extend-filesystems[1506]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 00:57:57.630931 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 00:57:57.630931 extend-filesystems[1506]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 9 00:57:57.637275 extend-filesystems[1467]: Resized filesystem in /dev/vda9 Oct 9 00:57:57.632778 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 00:57:57.633048 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 00:57:57.650955 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 00:57:57.653831 bash[1518]: Updated "/home/core/.ssh/authorized_keys" Oct 9 00:57:57.655703 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 00:57:57.658466 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 9 00:57:57.730729 containerd[1486]: time="2024-10-09T00:57:57.730565630Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22 Oct 9 00:57:57.754172 containerd[1486]: time="2024-10-09T00:57:57.754109325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:57:57.755858 containerd[1486]: time="2024-10-09T00:57:57.755818048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:57:57.755858 containerd[1486]: time="2024-10-09T00:57:57.755851171Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 00:57:57.755912 containerd[1486]: time="2024-10-09T00:57:57.755869485Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 00:57:57.756099 containerd[1486]: time="2024-10-09T00:57:57.756075982Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 00:57:57.756127 containerd[1486]: time="2024-10-09T00:57:57.756098975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 00:57:57.756203 containerd[1486]: time="2024-10-09T00:57:57.756177112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:57:57.756203 containerd[1486]: time="2024-10-09T00:57:57.756194715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Oct 9 00:57:57.756475 containerd[1486]: time="2024-10-09T00:57:57.756448280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:57:57.756475 containerd[1486]: time="2024-10-09T00:57:57.756471353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 00:57:57.756539 containerd[1486]: time="2024-10-09T00:57:57.756487514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:57:57.756539 containerd[1486]: time="2024-10-09T00:57:57.756499276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 00:57:57.756646 containerd[1486]: time="2024-10-09T00:57:57.756628618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:57:57.756920 containerd[1486]: time="2024-10-09T00:57:57.756902392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:57:57.757087 containerd[1486]: time="2024-10-09T00:57:57.757050008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:57:57.757087 containerd[1486]: time="2024-10-09T00:57:57.757068663Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 00:57:57.757210 containerd[1486]: time="2024-10-09T00:57:57.757195531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 00:57:57.757322 containerd[1486]: time="2024-10-09T00:57:57.757273838Z" level=info msg="metadata content store policy set" policy=shared Oct 9 00:57:57.763801 containerd[1486]: time="2024-10-09T00:57:57.763781129Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 00:57:57.763836 containerd[1486]: time="2024-10-09T00:57:57.763824140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 00:57:57.763856 containerd[1486]: time="2024-10-09T00:57:57.763842023Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 00:57:57.763875 containerd[1486]: time="2024-10-09T00:57:57.763858965Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 00:57:57.763893 containerd[1486]: time="2024-10-09T00:57:57.763875426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 00:57:57.764034 containerd[1486]: time="2024-10-09T00:57:57.764018194Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 00:57:57.764305 containerd[1486]: time="2024-10-09T00:57:57.764287970Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 9 00:57:57.764426 containerd[1486]: time="2024-10-09T00:57:57.764409878Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 00:57:57.764446 containerd[1486]: time="2024-10-09T00:57:57.764431078Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 00:57:57.764465 containerd[1486]: time="2024-10-09T00:57:57.764450624Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 00:57:57.764483 containerd[1486]: time="2024-10-09T00:57:57.764466053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 00:57:57.764501 containerd[1486]: time="2024-10-09T00:57:57.764480641Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 00:57:57.764501 containerd[1486]: time="2024-10-09T00:57:57.764495619Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 00:57:57.764626 containerd[1486]: time="2024-10-09T00:57:57.764549019Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 00:57:57.764626 containerd[1486]: time="2024-10-09T00:57:57.764570680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 00:57:57.764626 containerd[1486]: time="2024-10-09T00:57:57.764585547Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 00:57:57.764626 containerd[1486]: time="2024-10-09T00:57:57.764599724Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 00:57:57.764626 containerd[1486]: time="2024-10-09T00:57:57.764613550Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 00:57:57.764712 containerd[1486]: time="2024-10-09T00:57:57.764635571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764712 containerd[1486]: time="2024-10-09T00:57:57.764650670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764712 containerd[1486]: time="2024-10-09T00:57:57.764664506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764712 containerd[1486]: time="2024-10-09T00:57:57.764679073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764712 containerd[1486]: time="2024-10-09T00:57:57.764693570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764712 containerd[1486]: time="2024-10-09T00:57:57.764708909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764824 containerd[1486]: time="2024-10-09T00:57:57.764723496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764824 containerd[1486]: time="2024-10-09T00:57:57.764737723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 9 00:57:57.764824 containerd[1486]: time="2024-10-09T00:57:57.764752921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764824 containerd[1486]: time="2024-10-09T00:57:57.764768300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764824 containerd[1486]: time="2024-10-09T00:57:57.764782186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764824 containerd[1486]: time="2024-10-09T00:57:57.764795722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764824 containerd[1486]: time="2024-10-09T00:57:57.764810429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764937 containerd[1486]: time="2024-10-09T00:57:57.764827671Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 00:57:57.764937 containerd[1486]: time="2024-10-09T00:57:57.764849092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764937 containerd[1486]: time="2024-10-09T00:57:57.764863579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.764937 containerd[1486]: time="2024-10-09T00:57:57.764875872Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 00:57:57.765770 containerd[1486]: time="2024-10-09T00:57:57.765727469Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 00:57:57.765770 containerd[1486]: time="2024-10-09T00:57:57.765752506Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 00:57:57.765770 containerd[1486]: time="2024-10-09T00:57:57.765765700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 00:57:57.765841 containerd[1486]: time="2024-10-09T00:57:57.765779396Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 00:57:57.765841 containerd[1486]: time="2024-10-09T00:57:57.765790948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 00:57:57.765841 containerd[1486]: time="2024-10-09T00:57:57.765804673Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 00:57:57.765841 containerd[1486]: time="2024-10-09T00:57:57.765816796Z" level=info msg="NRI interface is disabled by configuration." Oct 9 00:57:57.765841 containerd[1486]: time="2024-10-09T00:57:57.765828899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 00:57:57.766243 containerd[1486]: time="2024-10-09T00:57:57.766184646Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 00:57:57.766375 containerd[1486]: time="2024-10-09T00:57:57.766246331Z" level=info msg="Connect containerd service" Oct 9 00:57:57.766375 containerd[1486]: time="2024-10-09T00:57:57.766288240Z" level=info msg="using legacy CRI server" Oct 9 00:57:57.766375 containerd[1486]: time="2024-10-09T00:57:57.766296616Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 00:57:57.766425 containerd[1486]: time="2024-10-09T00:57:57.766391333Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 00:57:57.767075 containerd[1486]: time="2024-10-09T00:57:57.767045049Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:57:57.767401 
containerd[1486]: time="2024-10-09T00:57:57.767386549Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 00:57:57.767490 containerd[1486]: time="2024-10-09T00:57:57.767447734Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 00:57:57.767600 containerd[1486]: time="2024-10-09T00:57:57.767564062Z" level=info msg="Start subscribing containerd event" Oct 9 00:57:57.767635 containerd[1486]: time="2024-10-09T00:57:57.767618444Z" level=info msg="Start recovering state" Oct 9 00:57:57.767701 containerd[1486]: time="2024-10-09T00:57:57.767685480Z" level=info msg="Start event monitor" Oct 9 00:57:57.767725 containerd[1486]: time="2024-10-09T00:57:57.767708313Z" level=info msg="Start snapshots syncer" Oct 9 00:57:57.767725 containerd[1486]: time="2024-10-09T00:57:57.767721127Z" level=info msg="Start cni network conf syncer for default" Oct 9 00:57:57.767758 containerd[1486]: time="2024-10-09T00:57:57.767730164Z" level=info msg="Start streaming server" Oct 9 00:57:57.768770 containerd[1486]: time="2024-10-09T00:57:57.767804233Z" level=info msg="containerd successfully booted in 0.039231s" Oct 9 00:57:57.767901 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 00:57:57.769552 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 00:57:57.799120 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 00:57:57.812731 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 00:57:57.820745 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 00:57:57.821007 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 00:57:57.823684 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 00:57:57.840666 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 00:57:57.850817 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 00:57:57.853120 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 00:57:57.854453 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 00:57:57.974912 tar[1484]: linux-amd64/LICENSE Oct 9 00:57:57.975028 tar[1484]: linux-amd64/README.md Oct 9 00:57:57.989854 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 00:57:59.125690 systemd-networkd[1409]: eth0: Gained IPv6LL Oct 9 00:57:59.129356 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 00:57:59.131361 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 00:57:59.138732 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 00:57:59.141099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:57:59.143263 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 00:57:59.163526 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 00:57:59.163836 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 00:57:59.165726 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 00:57:59.169055 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 00:57:59.761544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:57:59.763251 systemd[1]: Reached target multi-user.target - Multi-User System. 
Oct 9 00:57:59.765581 systemd[1]: Startup finished in 699ms (kernel) + 6.043s (initrd) + 4.703s (userspace) = 11.446s. Oct 9 00:57:59.769181 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:58:00.268653 kubelet[1578]: E1009 00:58:00.268560 1578 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:58:00.273272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:58:00.273530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 00:58:00.273906 systemd[1]: kubelet.service: Consumed 1.012s CPU time. Oct 9 00:58:03.580767 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 00:58:03.581928 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:54356.service - OpenSSH per-connection server daemon (10.0.0.1:54356). Oct 9 00:58:03.696631 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 54356 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:58:03.698729 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:03.707289 systemd-logind[1474]: New session 1 of user core. Oct 9 00:58:03.708555 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 00:58:03.718698 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 00:58:03.729691 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 00:58:03.744821 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 00:58:03.747879 (systemd)[1596]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 00:58:03.850132 systemd[1596]: Queued start job for default target default.target. Oct 9 00:58:03.858806 systemd[1596]: Created slice app.slice - User Application Slice. Oct 9 00:58:03.858833 systemd[1596]: Reached target paths.target - Paths. Oct 9 00:58:03.858846 systemd[1596]: Reached target timers.target - Timers. Oct 9 00:58:03.860400 systemd[1596]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 00:58:03.871580 systemd[1596]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 00:58:03.871715 systemd[1596]: Reached target sockets.target - Sockets. Oct 9 00:58:03.871734 systemd[1596]: Reached target basic.target - Basic System. Oct 9 00:58:03.871772 systemd[1596]: Reached target default.target - Main User Target. Oct 9 00:58:03.871809 systemd[1596]: Startup finished in 116ms. Oct 9 00:58:03.872287 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 00:58:03.873802 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 00:58:03.940874 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:54368.service - OpenSSH per-connection server daemon (10.0.0.1:54368). Oct 9 00:58:03.974102 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 54368 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:58:03.976114 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:03.980886 systemd-logind[1474]: New session 2 of user core. 
Oct 9 00:58:03.993626 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 00:58:04.047304 sshd[1607]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:04.054022 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:54368.service: Deactivated successfully. Oct 9 00:58:04.055577 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 00:58:04.056818 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit. Oct 9 00:58:04.066850 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:54372.service - OpenSSH per-connection server daemon (10.0.0.1:54372). Oct 9 00:58:04.067729 systemd-logind[1474]: Removed session 2. Oct 9 00:58:04.095966 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 54372 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:58:04.097292 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:04.100897 systemd-logind[1474]: New session 3 of user core. Oct 9 00:58:04.110616 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 00:58:04.159501 sshd[1614]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:04.172460 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:54372.service: Deactivated successfully. Oct 9 00:58:04.174116 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 00:58:04.175454 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit. Oct 9 00:58:04.176797 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:54388.service - OpenSSH per-connection server daemon (10.0.0.1:54388). Oct 9 00:58:04.177635 systemd-logind[1474]: Removed session 3. Oct 9 00:58:04.210927 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 54388 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:58:04.212677 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:04.217249 systemd-logind[1474]: New session 4 of user core. Oct 9 00:58:04.226635 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 00:58:04.280506 sshd[1622]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:04.286990 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:54388.service: Deactivated successfully. Oct 9 00:58:04.288751 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 00:58:04.290767 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit. Oct 9 00:58:04.298834 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:54392.service - OpenSSH per-connection server daemon (10.0.0.1:54392). Oct 9 00:58:04.299899 systemd-logind[1474]: Removed session 4. Oct 9 00:58:04.327637 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 54392 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:58:04.329390 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:04.333427 systemd-logind[1474]: New session 5 of user core. Oct 9 00:58:04.343654 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 00:58:04.401350 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 00:58:04.401695 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:58:04.420541 sudo[1632]: pam_unix(sudo:session): session closed for user root Oct 9 00:58:04.422327 sshd[1629]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:04.433048 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:54392.service: Deactivated successfully. 
Oct 9 00:58:04.434555 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 00:58:04.436082 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit. Oct 9 00:58:04.437268 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:54406.service - OpenSSH per-connection server daemon (10.0.0.1:54406). Oct 9 00:58:04.438013 systemd-logind[1474]: Removed session 5. Oct 9 00:58:04.469430 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 54406 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:58:04.470813 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:04.474591 systemd-logind[1474]: New session 6 of user core. Oct 9 00:58:04.488628 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 00:58:04.541671 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 00:58:04.542004 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:58:04.545727 sudo[1641]: pam_unix(sudo:session): session closed for user root Oct 9 00:58:04.551980 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 00:58:04.552386 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:58:04.571889 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 00:58:04.602158 augenrules[1663]: No rules Oct 9 00:58:04.603263 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 00:58:04.603545 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 00:58:04.604735 sudo[1640]: pam_unix(sudo:session): session closed for user root Oct 9 00:58:04.606374 sshd[1637]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:04.618406 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:54406.service: Deactivated successfully. Oct 9 00:58:04.620033 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 00:58:04.621726 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit. Oct 9 00:58:04.632762 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:54418.service - OpenSSH per-connection server daemon (10.0.0.1:54418). Oct 9 00:58:04.633761 systemd-logind[1474]: Removed session 6. Oct 9 00:58:04.662097 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 54418 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:58:04.663676 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:04.667595 systemd-logind[1474]: New session 7 of user core. Oct 9 00:58:04.674649 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 00:58:04.728133 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 00:58:04.728465 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:58:05.001735 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 00:58:05.001882 (dockerd)[1694]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 00:58:05.239604 dockerd[1694]: time="2024-10-09T00:58:05.239500491Z" level=info msg="Starting up" Oct 9 00:58:05.658937 dockerd[1694]: time="2024-10-09T00:58:05.658877042Z" level=info msg="Loading containers: start." 
Oct 9 00:58:05.824539 kernel: Initializing XFRM netlink socket Oct 9 00:58:05.918908 systemd-networkd[1409]: docker0: Link UP Oct 9 00:58:05.955064 dockerd[1694]: time="2024-10-09T00:58:05.954997307Z" level=info msg="Loading containers: done." Oct 9 00:58:05.968493 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3759064496-merged.mount: Deactivated successfully. Oct 9 00:58:05.971569 dockerd[1694]: time="2024-10-09T00:58:05.971527461Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 00:58:05.971652 dockerd[1694]: time="2024-10-09T00:58:05.971629342Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 00:58:05.971770 dockerd[1694]: time="2024-10-09T00:58:05.971746783Z" level=info msg="Daemon has completed initialization" Oct 9 00:58:06.007999 dockerd[1694]: time="2024-10-09T00:58:06.007915078Z" level=info msg="API listen on /run/docker.sock" Oct 9 00:58:06.008196 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 00:58:06.618385 containerd[1486]: time="2024-10-09T00:58:06.618349222Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 00:58:07.392725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2918250410.mount: Deactivated successfully. Oct 9 00:58:08.615470 containerd[1486]: time="2024-10-09T00:58:08.615395593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:08.616201 containerd[1486]: time="2024-10-09T00:58:08.616166198Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 9 00:58:08.618867 containerd[1486]: time="2024-10-09T00:58:08.618814183Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:08.621504 containerd[1486]: time="2024-10-09T00:58:08.621465595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:08.624794 containerd[1486]: time="2024-10-09T00:58:08.623145254Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.004752931s" Oct 9 00:58:08.624794 containerd[1486]: time="2024-10-09T00:58:08.623216127Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 00:58:08.645387 containerd[1486]: time="2024-10-09T00:58:08.645351811Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 00:58:10.523792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 00:58:10.534669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 00:58:10.679166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:10.684669 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:58:10.857891 containerd[1486]: time="2024-10-09T00:58:10.857747312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:10.858817 containerd[1486]: time="2024-10-09T00:58:10.858773767Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 9 00:58:10.860119 containerd[1486]: time="2024-10-09T00:58:10.860078573Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:10.863407 containerd[1486]: time="2024-10-09T00:58:10.863358012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:10.864325 containerd[1486]: time="2024-10-09T00:58:10.864290351Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 2.218902873s" Oct 9 00:58:10.864325 containerd[1486]: time="2024-10-09T00:58:10.864320156Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 00:58:10.883445 kubelet[1971]: E1009 00:58:10.883361 1971 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:58:10.888258 containerd[1486]: time="2024-10-09T00:58:10.887798679Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 00:58:10.891029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:58:10.891237 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 9 00:58:12.471763 containerd[1486]: time="2024-10-09T00:58:12.471709060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:12.472470 containerd[1486]: time="2024-10-09T00:58:12.472413351Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 9 00:58:12.473577 containerd[1486]: time="2024-10-09T00:58:12.473548399Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:12.475984 containerd[1486]: time="2024-10-09T00:58:12.475957406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:12.476894 containerd[1486]: time="2024-10-09T00:58:12.476865679Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.589033608s" Oct 9 00:58:12.476931 containerd[1486]: time="2024-10-09T00:58:12.476892740Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 00:58:12.499269 containerd[1486]: time="2024-10-09T00:58:12.499224472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 00:58:13.488416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028198299.mount: Deactivated successfully. 
Oct 9 00:58:14.164659 containerd[1486]: time="2024-10-09T00:58:14.164582818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:14.166001 containerd[1486]: time="2024-10-09T00:58:14.165934753Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 9 00:58:14.167590 containerd[1486]: time="2024-10-09T00:58:14.167554800Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:14.170361 containerd[1486]: time="2024-10-09T00:58:14.170317010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:14.170928 containerd[1486]: time="2024-10-09T00:58:14.170888231Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.671631218s" Oct 9 00:58:14.170928 containerd[1486]: time="2024-10-09T00:58:14.170920631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 9 00:58:14.192145 containerd[1486]: time="2024-10-09T00:58:14.192106435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 00:58:14.854962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389632313.mount: Deactivated successfully. 
Oct 9 00:58:15.786318 containerd[1486]: time="2024-10-09T00:58:15.786254622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:15.786929 containerd[1486]: time="2024-10-09T00:58:15.786872320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 00:58:15.788053 containerd[1486]: time="2024-10-09T00:58:15.787986650Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:15.792251 containerd[1486]: time="2024-10-09T00:58:15.792213836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:15.793460 containerd[1486]: time="2024-10-09T00:58:15.793425608Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.601278808s" Oct 9 00:58:15.793489 containerd[1486]: time="2024-10-09T00:58:15.793457047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 00:58:15.815589 containerd[1486]: time="2024-10-09T00:58:15.815544511Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 00:58:16.363500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3833076024.mount: Deactivated successfully. 
Oct 9 00:58:16.369315 containerd[1486]: time="2024-10-09T00:58:16.369276648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:16.370046 containerd[1486]: time="2024-10-09T00:58:16.369987982Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 00:58:16.371102 containerd[1486]: time="2024-10-09T00:58:16.371072345Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:16.373263 containerd[1486]: time="2024-10-09T00:58:16.373231113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:16.374083 containerd[1486]: time="2024-10-09T00:58:16.374043526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 558.459782ms" Oct 9 00:58:16.374083 containerd[1486]: time="2024-10-09T00:58:16.374076388Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 00:58:16.395480 containerd[1486]: time="2024-10-09T00:58:16.395407293Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 00:58:16.973297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2128901425.mount: Deactivated successfully. Oct 9 00:58:18.833267 containerd[1486]: time="2024-10-09T00:58:18.833197069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:18.834144 containerd[1486]: time="2024-10-09T00:58:18.834091395Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Oct 9 00:58:18.835213 containerd[1486]: time="2024-10-09T00:58:18.835180037Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:18.838094 containerd[1486]: time="2024-10-09T00:58:18.838057953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:18.839100 containerd[1486]: time="2024-10-09T00:58:18.839062837Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.44361093s" Oct 9 00:58:18.839100 containerd[1486]: time="2024-10-09T00:58:18.839096089Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 9 00:58:20.929040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 9 00:58:20.938783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:20.949528 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 00:58:20.949634 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 00:58:20.949927 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:20.953313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:20.971005 systemd[1]: Reloading requested from client PID 2203 ('systemctl') (unit session-7.scope)... Oct 9 00:58:20.971020 systemd[1]: Reloading... Oct 9 00:58:21.048657 zram_generator::config[2245]: No configuration found. Oct 9 00:58:21.201192 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:58:21.276035 systemd[1]: Reloading finished in 304 ms. Oct 9 00:58:21.329760 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:21.332793 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 00:58:21.333032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:21.334542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:21.478439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:21.483454 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 00:58:21.522525 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:58:21.522525 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 00:58:21.522525 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 00:58:21.522973 kubelet[2292]: I1009 00:58:21.522586 2292 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 00:58:21.784546 kubelet[2292]: I1009 00:58:21.784436 2292 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 00:58:21.784546 kubelet[2292]: I1009 00:58:21.784463 2292 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 00:58:21.784719 kubelet[2292]: I1009 00:58:21.784698 2292 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 00:58:21.801380 kubelet[2292]: E1009 00:58:21.801338 2292 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:21.802066 kubelet[2292]: I1009 00:58:21.802044 2292 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:58:21.811057 kubelet[2292]: I1009 00:58:21.811033 2292 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 00:58:21.812003 kubelet[2292]: I1009 00:58:21.811981 2292 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 00:58:21.812145 kubelet[2292]: I1009 00:58:21.812125 2292 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 00:58:21.812226 kubelet[2292]: I1009 00:58:21.812148 2292 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 00:58:21.812226 kubelet[2292]: I1009 00:58:21.812157 2292 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 00:58:21.812274 kubelet[2292]: I1009 00:58:21.812268 2292 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:58:21.812384 kubelet[2292]: I1009 00:58:21.812364 2292 kubelet.go:396] "Attempting to sync node with API server" Oct 9 00:58:21.812384 kubelet[2292]: I1009 
00:58:21.812379 2292 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 00:58:21.812425 kubelet[2292]: I1009 00:58:21.812404 2292 kubelet.go:312] "Adding apiserver pod source" Oct 9 00:58:21.812444 kubelet[2292]: I1009 00:58:21.812433 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 00:58:21.813127 kubelet[2292]: W1009 00:58:21.813083 2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:21.813127 kubelet[2292]: E1009 00:58:21.813124 2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:21.813489 kubelet[2292]: I1009 00:58:21.813459 2292 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 00:58:21.813489 kubelet[2292]: W1009 00:58:21.813466 2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:21.813555 kubelet[2292]: E1009 00:58:21.813502 2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:21.815911 kubelet[2292]: I1009 00:58:21.815882 2292 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 00:58:21.815992 kubelet[2292]: W1009 00:58:21.815970 2292 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
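The nodeConfig dump logged by container_manager_linux.go:270 above is the effective resource-management configuration for this node, including the hard-eviction thresholds it will enforce. Rendered as KubeletConfiguration fields these are the upstream defaults; a sketch assuming nothing on this host overrides them, with the quantities and percentages transcribed directly from that log entry:

```yaml
# Equivalent of the HardEvictionThresholds and manager policies in the nodeConfig dump above.
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
cpuManagerPolicy: none        # "Starting CPU manager" policy="none" later in the log
topologyManagerPolicy: none   # "Creating topology manager with none policy" above
cpuCFSQuota: true             # EnforceCPULimits:true in the dump
podPidsLimit: -1
```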
Oct 9 00:58:21.816908 kubelet[2292]: I1009 00:58:21.816724 2292 server.go:1256] "Started kubelet" Oct 9 00:58:21.818256 kubelet[2292]: I1009 00:58:21.817750 2292 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 00:58:21.818998 kubelet[2292]: I1009 00:58:21.818740 2292 server.go:461] "Adding debug handlers to kubelet server" Oct 9 00:58:21.818998 kubelet[2292]: I1009 00:58:21.818845 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 00:58:21.819335 kubelet[2292]: I1009 00:58:21.819228 2292 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 00:58:21.822119 kubelet[2292]: I1009 00:58:21.822099 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 00:58:21.823247 kubelet[2292]: E1009 00:58:21.823084 2292 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:58:21.823247 kubelet[2292]: I1009 00:58:21.823124 2292 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 00:58:21.823247 kubelet[2292]: I1009 00:58:21.823230 2292 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 00:58:21.823371 kubelet[2292]: I1009 00:58:21.823270 2292 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 00:58:21.823539 kubelet[2292]: E1009 00:58:21.823504 2292 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca2ee66f569f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 00:58:21.816703472 +0000 UTC m=+0.329284960,LastTimestamp:2024-10-09 00:58:21.816703472 +0000 UTC m=+0.329284960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 00:58:21.823883 kubelet[2292]: W1009 00:58:21.823569 2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:21.823883 kubelet[2292]: E1009 00:58:21.823650 2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:21.823883 kubelet[2292]: E1009 00:58:21.823791 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms" Oct 9 00:58:21.824353 kubelet[2292]: I1009 00:58:21.824319 2292 factory.go:221] Registration of the systemd container factory successfully Oct 9 00:58:21.824619 kubelet[2292]: I1009 00:58:21.824398 2292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Oct 9 00:58:21.825100 kubelet[2292]: E1009 00:58:21.825068 2292 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 00:58:21.825307 kubelet[2292]: I1009 00:58:21.825290 2292 factory.go:221] Registration of the containerd container factory successfully Oct 9 00:58:21.838705 kubelet[2292]: I1009 00:58:21.838677 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 00:58:21.839903 kubelet[2292]: I1009 00:58:21.839876 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 00:58:21.839903 kubelet[2292]: I1009 00:58:21.839901 2292 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 00:58:21.839982 kubelet[2292]: I1009 00:58:21.839918 2292 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 00:58:21.839982 kubelet[2292]: E1009 00:58:21.839957 2292 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 00:58:21.843069 kubelet[2292]: W1009 00:58:21.843037 2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:21.843069 kubelet[2292]: E1009 00:58:21.843066 2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:21.843322 kubelet[2292]: I1009 00:58:21.843305 2292 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 00:58:21.843322 kubelet[2292]: I1009 00:58:21.843319 2292 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 00:58:21.843373 kubelet[2292]: I1009 00:58:21.843334 2292 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:58:21.925257 kubelet[2292]: I1009 00:58:21.925212 2292 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:58:21.925696 kubelet[2292]: E1009 00:58:21.925656 2292 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Oct 9 00:58:21.940760 kubelet[2292]: E1009 00:58:21.940723 2292 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 00:58:22.024350 kubelet[2292]: E1009 00:58:22.024315 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" Oct 9 00:58:22.127967 kubelet[2292]: I1009 00:58:22.127862 2292 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:58:22.128500 kubelet[2292]: E1009 00:58:22.128462 2292 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Oct 9 00:58:22.141669 kubelet[2292]: E1009 00:58:22.141617 2292 kubelet.go:2353] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 00:58:22.309475 kubelet[2292]: I1009 00:58:22.309408 2292 policy_none.go:49] "None policy: Start" Oct 9 00:58:22.310316 kubelet[2292]: I1009 00:58:22.310284 2292 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 00:58:22.310316 kubelet[2292]: I1009 00:58:22.310310 2292 state_mem.go:35] "Initializing new in-memory state store" Oct 9 00:58:22.318859 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 00:58:22.332781 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 00:58:22.347464 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 00:58:22.348909 kubelet[2292]: I1009 00:58:22.348884 2292 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 00:58:22.349306 kubelet[2292]: I1009 00:58:22.349283 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 00:58:22.350428 kubelet[2292]: E1009 00:58:22.350408 2292 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 00:58:22.425234 kubelet[2292]: E1009 00:58:22.425118 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms" Oct 9 00:58:22.529879 kubelet[2292]: I1009 00:58:22.529859 2292 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:58:22.530213 kubelet[2292]: E1009 00:58:22.530123 2292 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Oct 9 00:58:22.542331 kubelet[2292]: I1009 00:58:22.542280 2292 topology_manager.go:215] "Topology Admit Handler" podUID="afba9ed7f33fdc69a6c86efc5e8edeb5" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 00:58:22.543498 kubelet[2292]: I1009 00:58:22.543481 2292 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 00:58:22.544314 kubelet[2292]: I1009 00:58:22.544300 2292 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 00:58:22.549664 systemd[1]: Created slice kubepods-burstable-podafba9ed7f33fdc69a6c86efc5e8edeb5.slice - libcontainer container kubepods-burstable-podafba9ed7f33fdc69a6c86efc5e8edeb5.slice. Oct 9 00:58:22.564335 systemd[1]: Created slice kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice - libcontainer container kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice. Oct 9 00:58:22.567953 systemd[1]: Created slice kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice - libcontainer container kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice. 
Oct 9 00:58:22.627680 kubelet[2292]: I1009 00:58:22.627629 2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/afba9ed7f33fdc69a6c86efc5e8edeb5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"afba9ed7f33fdc69a6c86efc5e8edeb5\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:22.627680 kubelet[2292]: I1009 00:58:22.627671 2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/afba9ed7f33fdc69a6c86efc5e8edeb5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"afba9ed7f33fdc69a6c86efc5e8edeb5\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:22.627680 kubelet[2292]: I1009 00:58:22.627693 2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:22.628101 kubelet[2292]: I1009 00:58:22.627766 2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:22.628101 kubelet[2292]: I1009 00:58:22.627828 2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:22.628101 kubelet[2292]: I1009 00:58:22.627857 2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/afba9ed7f33fdc69a6c86efc5e8edeb5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"afba9ed7f33fdc69a6c86efc5e8edeb5\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:22.628101 kubelet[2292]: I1009 00:58:22.627909 2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:22.628101 kubelet[2292]: I1009 00:58:22.627944 2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:22.628217 kubelet[2292]: I1009 00:58:22.627980 2292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " 
pod="kube-system/kube-scheduler-localhost" Oct 9 00:58:22.679317 kubelet[2292]: W1009 00:58:22.679193 2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:22.679317 kubelet[2292]: E1009 00:58:22.679259 2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:22.863120 kubelet[2292]: E1009 00:58:22.863091 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:22.863642 containerd[1486]: time="2024-10-09T00:58:22.863596306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:afba9ed7f33fdc69a6c86efc5e8edeb5,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:22.866916 kubelet[2292]: E1009 00:58:22.866883 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:22.867273 containerd[1486]: time="2024-10-09T00:58:22.867244096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:22.870592 kubelet[2292]: E1009 00:58:22.870568 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:22.870976 containerd[1486]: time="2024-10-09T00:58:22.870936449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:23.189677 kubelet[2292]: W1009 00:58:23.189605 2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:23.189677 kubelet[2292]: E1009 00:58:23.189663 2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:23.226162 kubelet[2292]: E1009 00:58:23.226105 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s" Oct 9 00:58:23.312936 kubelet[2292]: W1009 00:58:23.312893 2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:23.312936 kubelet[2292]: E1009 00:58:23.312942 2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:23.316148 kubelet[2292]: W1009 00:58:23.316124 2292 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:23.316194 kubelet[2292]: E1009 00:58:23.316150 2292 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused Oct 9 00:58:23.331280 kubelet[2292]: I1009 00:58:23.331254 2292 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:58:23.331587 kubelet[2292]: E1009 00:58:23.331571 2292 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Oct 9 00:58:23.391324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1139695906.mount: Deactivated successfully. Oct 9 00:58:23.401185 containerd[1486]: time="2024-10-09T00:58:23.401131955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:23.402357 containerd[1486]: time="2024-10-09T00:58:23.402314442Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:23.403384 containerd[1486]: time="2024-10-09T00:58:23.403336849Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:58:23.404479 containerd[1486]: time="2024-10-09T00:58:23.404408489Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:23.405618 containerd[1486]: time="2024-10-09T00:58:23.405568534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 00:58:23.406758 containerd[1486]: time="2024-10-09T00:58:23.406722367Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:23.407638 containerd[1486]: time="2024-10-09T00:58:23.407585255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:58:23.410040 containerd[1486]: time="2024-10-09T00:58:23.410008389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:58:23.410857 containerd[1486]: time="2024-10-09T00:58:23.410827394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 539.80278ms" Oct 9 00:58:23.414353 containerd[1486]: time="2024-10-09T00:58:23.414321115Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.641152ms" Oct 9 00:58:23.415091 containerd[1486]: time="2024-10-09T00:58:23.415055221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.754179ms" Oct 9 00:58:23.546048 containerd[1486]: time="2024-10-09T00:58:23.544713971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:23.546048 containerd[1486]: time="2024-10-09T00:58:23.544783672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:23.546048 containerd[1486]: time="2024-10-09T00:58:23.544797418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:23.546048 containerd[1486]: time="2024-10-09T00:58:23.544875875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:23.546048 containerd[1486]: time="2024-10-09T00:58:23.544617240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:23.546048 containerd[1486]: time="2024-10-09T00:58:23.545860892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:23.546048 containerd[1486]: time="2024-10-09T00:58:23.545872434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:23.546048 containerd[1486]: time="2024-10-09T00:58:23.545932677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:23.548640 containerd[1486]: time="2024-10-09T00:58:23.548411574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:23.548640 containerd[1486]: time="2024-10-09T00:58:23.548473971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:23.548640 containerd[1486]: time="2024-10-09T00:58:23.548488008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:23.549640 containerd[1486]: time="2024-10-09T00:58:23.549582490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:23.572629 systemd[1]: Started cri-containerd-9f88a5d506e93791a175f72ba13544562d014f4dad45544d1c719dd0b6156219.scope - libcontainer container 9f88a5d506e93791a175f72ba13544562d014f4dad45544d1c719dd0b6156219. Oct 9 00:58:23.574049 systemd[1]: Started cri-containerd-ed128fa9c70881d13afb8413bef1e89efcf9aff331fadcca662880693fd03c3f.scope - libcontainer container ed128fa9c70881d13afb8413bef1e89efcf9aff331fadcca662880693fd03c3f. Oct 9 00:58:23.579638 systemd[1]: Started cri-containerd-0c69700d405f9971877837c4fd092d5daa322aa351014b80409c88196171e31c.scope - libcontainer container 0c69700d405f9971877837c4fd092d5daa322aa351014b80409c88196171e31c. Oct 9 00:58:23.613495 containerd[1486]: time="2024-10-09T00:58:23.613293177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:afba9ed7f33fdc69a6c86efc5e8edeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f88a5d506e93791a175f72ba13544562d014f4dad45544d1c719dd0b6156219\"" Oct 9 00:58:23.615661 kubelet[2292]: E1009 00:58:23.615629 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:23.618649 containerd[1486]: time="2024-10-09T00:58:23.618609676Z" level=info msg="CreateContainer within sandbox \"9f88a5d506e93791a175f72ba13544562d014f4dad45544d1c719dd0b6156219\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 00:58:23.621148 containerd[1486]: time="2024-10-09T00:58:23.621055812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed128fa9c70881d13afb8413bef1e89efcf9aff331fadcca662880693fd03c3f\"" Oct 9 00:58:23.621697 kubelet[2292]: E1009 00:58:23.621672 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:23.623282 containerd[1486]: time="2024-10-09T00:58:23.623243003Z" level=info msg="CreateContainer within sandbox \"ed128fa9c70881d13afb8413bef1e89efcf9aff331fadcca662880693fd03c3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 00:58:23.627080 containerd[1486]: time="2024-10-09T00:58:23.627037458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c69700d405f9971877837c4fd092d5daa322aa351014b80409c88196171e31c\"" Oct 9 00:58:23.627618 kubelet[2292]: E1009 00:58:23.627594 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:23.629875 containerd[1486]: time="2024-10-09T00:58:23.629828431Z" level=info msg="CreateContainer within sandbox \"0c69700d405f9971877837c4fd092d5daa322aa351014b80409c88196171e31c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 00:58:23.645600 containerd[1486]: time="2024-10-09T00:58:23.645547144Z" level=info msg="CreateContainer within sandbox \"9f88a5d506e93791a175f72ba13544562d014f4dad45544d1c719dd0b6156219\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f14d0b4e10daea3b3708f12e34c637a8e6a4e92b8444a8c75349b181220b1056\"" Oct 9 00:58:23.646098 
containerd[1486]: time="2024-10-09T00:58:23.646068241Z" level=info msg="StartContainer for \"f14d0b4e10daea3b3708f12e34c637a8e6a4e92b8444a8c75349b181220b1056\"" Oct 9 00:58:23.653190 containerd[1486]: time="2024-10-09T00:58:23.653152103Z" level=info msg="CreateContainer within sandbox \"ed128fa9c70881d13afb8413bef1e89efcf9aff331fadcca662880693fd03c3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"923fcd1dc91b2593b8754d13f7378f2054fbc75c4a3ea4363864dd79a4e2dafe\"" Oct 9 00:58:23.653829 containerd[1486]: time="2024-10-09T00:58:23.653667680Z" level=info msg="StartContainer for \"923fcd1dc91b2593b8754d13f7378f2054fbc75c4a3ea4363864dd79a4e2dafe\"" Oct 9 00:58:23.668920 containerd[1486]: time="2024-10-09T00:58:23.668872599Z" level=info msg="CreateContainer within sandbox \"0c69700d405f9971877837c4fd092d5daa322aa351014b80409c88196171e31c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8efb46371c34de1b5c856b270c7d3b34374995f28a8e0eb87ef66320c587d668\"" Oct 9 00:58:23.669309 containerd[1486]: time="2024-10-09T00:58:23.669263812Z" level=info msg="StartContainer for \"8efb46371c34de1b5c856b270c7d3b34374995f28a8e0eb87ef66320c587d668\"" Oct 9 00:58:23.671884 systemd[1]: Started cri-containerd-f14d0b4e10daea3b3708f12e34c637a8e6a4e92b8444a8c75349b181220b1056.scope - libcontainer container f14d0b4e10daea3b3708f12e34c637a8e6a4e92b8444a8c75349b181220b1056. Oct 9 00:58:23.683686 systemd[1]: Started cri-containerd-923fcd1dc91b2593b8754d13f7378f2054fbc75c4a3ea4363864dd79a4e2dafe.scope - libcontainer container 923fcd1dc91b2593b8754d13f7378f2054fbc75c4a3ea4363864dd79a4e2dafe. Oct 9 00:58:23.700634 systemd[1]: Started cri-containerd-8efb46371c34de1b5c856b270c7d3b34374995f28a8e0eb87ef66320c587d668.scope - libcontainer container 8efb46371c34de1b5c856b270c7d3b34374995f28a8e0eb87ef66320c587d668. 
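By this point containerd has walked the normal CRI sequence for each control-plane static pod: RunPodSandbox (backed by the pause:3.8 image pulled just above), CreateContainer inside each sandbox, and the StartContainer calls whose results follow. The sandbox and container IDs printed in these messages are what you would feed to crictl to inspect the same state on the node; a sketch, assuming the default containerd CRI socket (the log does not show the socket path):

```sh
# Inspect the CRI objects created above; the IDs are the ones printed in the log.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
crictl ps -a
crictl inspectp 9f88a5d506e93791a175f72ba13544562d014f4dad45544d1c719dd0b6156219   # kube-apiserver sandbox
crictl logs f14d0b4e10daea3b3708f12e34c637a8e6a4e92b8444a8c75349b181220b1056       # kube-apiserver container
```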
Oct 9 00:58:23.722324 containerd[1486]: time="2024-10-09T00:58:23.722174279Z" level=info msg="StartContainer for \"f14d0b4e10daea3b3708f12e34c637a8e6a4e92b8444a8c75349b181220b1056\" returns successfully" Oct 9 00:58:23.736042 containerd[1486]: time="2024-10-09T00:58:23.736000393Z" level=info msg="StartContainer for \"923fcd1dc91b2593b8754d13f7378f2054fbc75c4a3ea4363864dd79a4e2dafe\" returns successfully" Oct 9 00:58:23.747018 containerd[1486]: time="2024-10-09T00:58:23.746974560Z" level=info msg="StartContainer for \"8efb46371c34de1b5c856b270c7d3b34374995f28a8e0eb87ef66320c587d668\" returns successfully" Oct 9 00:58:23.851634 kubelet[2292]: E1009 00:58:23.851331 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:23.854401 kubelet[2292]: E1009 00:58:23.854160 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:23.856391 kubelet[2292]: E1009 00:58:23.856300 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:24.829758 kubelet[2292]: E1009 00:58:24.829707 2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 00:58:24.848731 kubelet[2292]: E1009 00:58:24.848709 2292 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Oct 9 00:58:24.858269 kubelet[2292]: E1009 00:58:24.858249 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:24.932792 kubelet[2292]: I1009 00:58:24.932753 2292 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:58:24.939137 kubelet[2292]: I1009 00:58:24.939095 2292 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 00:58:25.816317 kubelet[2292]: I1009 00:58:25.816263 2292 apiserver.go:52] "Watching apiserver" Oct 9 00:58:25.824037 kubelet[2292]: I1009 00:58:25.824014 2292 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 00:58:27.073392 systemd[1]: Reloading requested from client PID 2573 ('systemctl') (unit session-7.scope)... Oct 9 00:58:27.073406 systemd[1]: Reloading... Oct 9 00:58:27.163562 zram_generator::config[2615]: No configuration found. Oct 9 00:58:27.931983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:58:28.027862 systemd[1]: Reloading finished in 954 ms. Oct 9 00:58:28.074308 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:28.096062 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 00:58:28.096397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:58:28.110807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:58:28.251575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
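The dns.go:153 errors that start here and repeat for the rest of the log are the kubelet warning that the node's resolv.conf lists more nameservers than the three it will propagate to pods, so it drops the extras and applies only 1.1.1.1 1.0.0.1 8.8.8.8. The host's actual resolv.conf is not shown, so the dropped entries are unknown; a trimmed file of the following shape is what would silence the warning:

```
# /etc/resolv.conf on the node: at most three nameserver lines.
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
```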
Oct 9 00:58:28.256822 (kubelet)[2657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 00:58:28.307911 kubelet[2657]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:58:28.307911 kubelet[2657]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 00:58:28.307911 kubelet[2657]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:58:28.308554 kubelet[2657]: I1009 00:58:28.307949 2657 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 00:58:28.312328 kubelet[2657]: I1009 00:58:28.312291 2657 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 00:58:28.312328 kubelet[2657]: I1009 00:58:28.312314 2657 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 00:58:28.312528 kubelet[2657]: I1009 00:58:28.312481 2657 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 00:58:28.313976 kubelet[2657]: I1009 00:58:28.313951 2657 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 00:58:28.316153 kubelet[2657]: I1009 00:58:28.316115 2657 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:58:28.325797 kubelet[2657]: I1009 00:58:28.325755 2657 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 00:58:28.327441 kubelet[2657]: I1009 00:58:28.326287 2657 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 00:58:28.327441 kubelet[2657]: I1009 00:58:28.326527 2657 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 00:58:28.327441 kubelet[2657]: I1009 00:58:28.326561 2657 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 00:58:28.327441 kubelet[2657]: I1009 00:58:28.326575 2657 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 00:58:28.327441 kubelet[2657]: I1009 00:58:28.326619 2657 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:58:28.327441 kubelet[2657]: I1009 00:58:28.326734 2657 kubelet.go:396] "Attempting to sync node with API server" Oct 9 00:58:28.327819 kubelet[2657]: I1009 00:58:28.326752 2657 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 00:58:28.327819 kubelet[2657]: I1009 00:58:28.326783 2657 kubelet.go:312] "Adding apiserver pod source" Oct 9 00:58:28.327819 kubelet[2657]: I1009 00:58:28.326803 2657 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 00:58:28.327819 kubelet[2657]: I1009 00:58:28.327797 2657 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 00:58:28.328046 kubelet[2657]: I1009 00:58:28.328009 2657 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 00:58:28.328595 kubelet[2657]: I1009 00:58:28.328439 2657 server.go:1256] "Started kubelet" Oct 9 00:58:28.329811 sudo[2672]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 9 00:58:28.330145 sudo[2672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 9 00:58:28.332704 kubelet[2657]: I1009 00:58:28.331310 2657 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 00:58:28.332704 kubelet[2657]: I1009 00:58:28.332221 2657 server.go:461] "Adding debug handlers to kubelet server" Oct 9 00:58:28.333174 kubelet[2657]: 
I1009 00:58:28.333112 2657 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 00:58:28.333394 kubelet[2657]: I1009 00:58:28.333278 2657 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 00:58:28.334105 kubelet[2657]: I1009 00:58:28.334087 2657 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 00:58:28.336429 kubelet[2657]: E1009 00:58:28.336405 2657 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:58:28.336481 kubelet[2657]: I1009 00:58:28.336439 2657 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 00:58:28.336581 kubelet[2657]: I1009 00:58:28.336561 2657 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 00:58:28.336702 kubelet[2657]: I1009 00:58:28.336685 2657 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 00:58:28.346141 kubelet[2657]: I1009 00:58:28.346096 2657 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 00:58:28.348947 kubelet[2657]: E1009 00:58:28.348907 2657 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 00:58:28.350034 kubelet[2657]: I1009 00:58:28.350017 2657 factory.go:221] Registration of the containerd container factory successfully Oct 9 00:58:28.350283 kubelet[2657]: I1009 00:58:28.350270 2657 factory.go:221] Registration of the systemd container factory successfully Oct 9 00:58:28.358237 kubelet[2657]: I1009 00:58:28.358202 2657 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 00:58:28.359436 kubelet[2657]: I1009 00:58:28.359411 2657 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 00:58:28.359498 kubelet[2657]: I1009 00:58:28.359442 2657 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 00:58:28.359498 kubelet[2657]: I1009 00:58:28.359460 2657 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 00:58:28.359585 kubelet[2657]: E1009 00:58:28.359505 2657 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 00:58:28.383609 kubelet[2657]: I1009 00:58:28.383580 2657 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 00:58:28.383609 kubelet[2657]: I1009 00:58:28.383601 2657 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 00:58:28.383609 kubelet[2657]: I1009 00:58:28.383617 2657 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:58:28.383799 kubelet[2657]: I1009 00:58:28.383772 2657 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 00:58:28.383799 kubelet[2657]: I1009 00:58:28.383791 2657 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 00:58:28.383799 kubelet[2657]: I1009 00:58:28.383797 2657 policy_none.go:49] "None policy: Start" Oct 9 00:58:28.384800 kubelet[2657]: I1009 00:58:28.384747 2657 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 00:58:28.384800 kubelet[2657]: I1009 00:58:28.384783 2657 state_mem.go:35] "Initializing new in-memory state store" Oct 9 00:58:28.384964 kubelet[2657]: I1009 00:58:28.384946 2657 state_mem.go:75] "Updated machine memory state" Oct 9 00:58:28.389736 kubelet[2657]: I1009 00:58:28.389712 2657 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 00:58:28.390303 kubelet[2657]: I1009 00:58:28.389986 2657 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 00:58:28.441363 kubelet[2657]: I1009 00:58:28.441325 2657 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:58:28.448266 kubelet[2657]: I1009 00:58:28.448239 2657 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 00:58:28.448435 kubelet[2657]: I1009 00:58:28.448386 2657 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 00:58:28.459916 kubelet[2657]: I1009 00:58:28.459876 2657 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 00:58:28.460528 kubelet[2657]: I1009 00:58:28.460049 2657 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 00:58:28.460528 kubelet[2657]: I1009 00:58:28.460078 2657 topology_manager.go:215] "Topology Admit Handler" podUID="afba9ed7f33fdc69a6c86efc5e8edeb5" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 00:58:28.639298 kubelet[2657]: I1009 00:58:28.639109 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:28.639298 kubelet[2657]: I1009 00:58:28.639160 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:28.639298 kubelet[2657]: I1009 00:58:28.639256 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:28.639298 kubelet[2657]: I1009 00:58:28.639286 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 00:58:28.639496 kubelet[2657]: I1009 00:58:28.639314 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/afba9ed7f33fdc69a6c86efc5e8edeb5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"afba9ed7f33fdc69a6c86efc5e8edeb5\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:28.639496 kubelet[2657]: I1009 00:58:28.639341 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:28.639496 kubelet[2657]: I1009 00:58:28.639382 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:28.639496 kubelet[2657]: I1009 00:58:28.639405 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/afba9ed7f33fdc69a6c86efc5e8edeb5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"afba9ed7f33fdc69a6c86efc5e8edeb5\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:28.639496 kubelet[2657]: I1009 00:58:28.639427 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/afba9ed7f33fdc69a6c86efc5e8edeb5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"afba9ed7f33fdc69a6c86efc5e8edeb5\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:28.767820 kubelet[2657]: E1009 00:58:28.767789 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:28.769311 kubelet[2657]: E1009 00:58:28.768073 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:28.769311 kubelet[2657]: E1009 00:58:28.769216 2657 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:29.042617 sudo[2672]: pam_unix(sudo:session): session closed for user root Oct 9 00:58:29.329761 kubelet[2657]: I1009 00:58:29.329583 2657 apiserver.go:52] "Watching apiserver" Oct 9 00:58:29.336898 kubelet[2657]: I1009 00:58:29.336830 2657 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 00:58:29.369857 kubelet[2657]: E1009 00:58:29.369669 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:29.377608 kubelet[2657]: E1009 00:58:29.376983 2657 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 00:58:29.377608 kubelet[2657]: E1009 00:58:29.377492 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:29.378079 kubelet[2657]: E1009 00:58:29.378003 2657 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 9 00:58:29.378468 kubelet[2657]: E1009 00:58:29.378445 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:29.410647 kubelet[2657]: I1009 00:58:29.410598 2657 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4105433 podStartE2EDuration="1.4105433s" podCreationTimestamp="2024-10-09 00:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:29.400902614 +0000 UTC m=+1.140151956" watchObservedRunningTime="2024-10-09 00:58:29.4105433 +0000 UTC m=+1.149792642" Oct 9 00:58:29.421955 kubelet[2657]: I1009 00:58:29.421874 2657 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.421832126 podStartE2EDuration="1.421832126s" podCreationTimestamp="2024-10-09 00:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:29.410834726 +0000 UTC m=+1.150084068" watchObservedRunningTime="2024-10-09 00:58:29.421832126 +0000 UTC m=+1.161081468" Oct 9 00:58:29.430661 kubelet[2657]: I1009 00:58:29.430573 2657 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.430528843 podStartE2EDuration="1.430528843s" podCreationTimestamp="2024-10-09 00:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:29.422045216 +0000 UTC m=+1.161294558" watchObservedRunningTime="2024-10-09 00:58:29.430528843 +0000 UTC m=+1.169778185" Oct 9 00:58:30.197560 sudo[1674]: pam_unix(sudo:session): session closed for user root Oct 9 00:58:30.199319 sshd[1671]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:30.201985 systemd[1]: 
sshd@6-10.0.0.51:22-10.0.0.1:54418.service: Deactivated successfully. Oct 9 00:58:30.204070 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 00:58:30.204293 systemd[1]: session-7.scope: Consumed 4.189s CPU time, 187.0M memory peak, 0B memory swap peak. Oct 9 00:58:30.205657 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit. Oct 9 00:58:30.206779 systemd-logind[1474]: Removed session 7. Oct 9 00:58:30.371022 kubelet[2657]: E1009 00:58:30.370859 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:30.371417 kubelet[2657]: E1009 00:58:30.371046 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:32.584460 kubelet[2657]: E1009 00:58:32.584416 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:34.035675 kubelet[2657]: E1009 00:58:34.035623 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:34.376493 kubelet[2657]: E1009 00:58:34.376370 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:37.173796 kubelet[2657]: E1009 00:58:37.173765 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:37.379537 kubelet[2657]: E1009 00:58:37.379457 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:41.527940 kubelet[2657]: I1009 00:58:41.527905 2657 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 00:58:41.528422 containerd[1486]: time="2024-10-09T00:58:41.528324594Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 00:58:41.528832 kubelet[2657]: I1009 00:58:41.528573 2657 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 00:58:42.252586 kubelet[2657]: I1009 00:58:42.251746 2657 topology_manager.go:215] "Topology Admit Handler" podUID="04059549-496f-4b85-8807-72494a463eb4" podNamespace="kube-system" podName="kube-proxy-vh598" Oct 9 00:58:42.254875 kubelet[2657]: I1009 00:58:42.254261 2657 topology_manager.go:215] "Topology Admit Handler" podUID="c55227cc-27af-435a-ac2f-0bf33d67dae7" podNamespace="kube-system" podName="cilium-zgvbs" Oct 9 00:58:42.261448 systemd[1]: Created slice kubepods-besteffort-pod04059549_496f_4b85_8807_72494a463eb4.slice - libcontainer container kubepods-besteffort-pod04059549_496f_4b85_8807_72494a463eb4.slice. Oct 9 00:58:42.274077 systemd[1]: Created slice kubepods-burstable-podc55227cc_27af_435a_ac2f_0bf33d67dae7.slice - libcontainer container kubepods-burstable-podc55227cc_27af_435a_ac2f_0bf33d67dae7.slice. Oct 9 00:58:42.328638 update_engine[1475]: I20241009 00:58:42.328541 1475 update_attempter.cc:509] Updating boot flags... 
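containerd's "No cni config template is specified, wait for other system components to drop the config" message, together with the 192.168.0.0/24 pod CIDR push just above it, means pod networking stays unconfigured until a CNI plugin writes its config into the CNI config directory. Given the cilium.tar.gz CLI unpacked to /opt/bin earlier in this log and the cilium-zgvbs pod admitted here, Cilium is presumably the component expected to drop that config; the log does not show the commands used, so the following is only an illustrative check, with the conflist filename an assumption:

```sh
# Verify that Cilium has dropped its CNI config and is healthy (illustrative only).
ls /etc/cni/net.d/                 # expect a cilium conflist once the agent is ready
/opt/bin/cilium status --wait      # cilium-cli unpacked from the tarball earlier in the log
kubectl -n kube-system get pods -l k8s-app=cilium
```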
Oct 9 00:58:42.329800 kubelet[2657]: I1009 00:58:42.329248 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq29b\" (UniqueName: \"kubernetes.io/projected/04059549-496f-4b85-8807-72494a463eb4-kube-api-access-rq29b\") pod \"kube-proxy-vh598\" (UID: \"04059549-496f-4b85-8807-72494a463eb4\") " pod="kube-system/kube-proxy-vh598" Oct 9 00:58:42.329800 kubelet[2657]: I1009 00:58:42.329303 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-hostproc\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.329800 kubelet[2657]: I1009 00:58:42.329332 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-bpf-maps\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.329800 kubelet[2657]: I1009 00:58:42.329360 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-xtables-lock\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.329800 kubelet[2657]: I1009 00:58:42.329388 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04059549-496f-4b85-8807-72494a463eb4-xtables-lock\") pod \"kube-proxy-vh598\" (UID: \"04059549-496f-4b85-8807-72494a463eb4\") " pod="kube-system/kube-proxy-vh598" Oct 9 00:58:42.329800 kubelet[2657]: I1009 00:58:42.329414 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04059549-496f-4b85-8807-72494a463eb4-lib-modules\") pod \"kube-proxy-vh598\" (UID: \"04059549-496f-4b85-8807-72494a463eb4\") " pod="kube-system/kube-proxy-vh598" Oct 9 00:58:42.329952 kubelet[2657]: I1009 00:58:42.329453 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-run\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.329952 kubelet[2657]: I1009 00:58:42.329484 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-host-proc-sys-kernel\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.329952 kubelet[2657]: I1009 00:58:42.329529 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c55227cc-27af-435a-ac2f-0bf33d67dae7-hubble-tls\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.329952 kubelet[2657]: I1009 00:58:42.329562 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-528s8\" 
(UniqueName: \"kubernetes.io/projected/c55227cc-27af-435a-ac2f-0bf33d67dae7-kube-api-access-528s8\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.329952 kubelet[2657]: I1009 00:58:42.329589 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cni-path\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.329952 kubelet[2657]: I1009 00:58:42.329614 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-config-path\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.330093 kubelet[2657]: I1009 00:58:42.329643 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-host-proc-sys-net\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.330093 kubelet[2657]: I1009 00:58:42.329673 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-etc-cni-netd\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.330093 kubelet[2657]: I1009 00:58:42.329702 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c55227cc-27af-435a-ac2f-0bf33d67dae7-clustermesh-secrets\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.330093 kubelet[2657]: I1009 00:58:42.329745 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/04059549-496f-4b85-8807-72494a463eb4-kube-proxy\") pod \"kube-proxy-vh598\" (UID: \"04059549-496f-4b85-8807-72494a463eb4\") " pod="kube-system/kube-proxy-vh598" Oct 9 00:58:42.330093 kubelet[2657]: I1009 00:58:42.329790 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-cgroup\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.330093 kubelet[2657]: I1009 00:58:42.329817 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-lib-modules\") pod \"cilium-zgvbs\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " pod="kube-system/cilium-zgvbs" Oct 9 00:58:42.355538 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2741) Oct 9 00:58:42.389682 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2745) Oct 9 00:58:42.561168 kubelet[2657]: I1009 00:58:42.560824 2657 topology_manager.go:215] "Topology Admit 
Handler" podUID="cac9d2d9-f1c9-43ba-9354-5be8d280e066" podNamespace="kube-system" podName="cilium-operator-5cc964979-qgqrf" Oct 9 00:58:42.568490 systemd[1]: Created slice kubepods-besteffort-podcac9d2d9_f1c9_43ba_9354_5be8d280e066.slice - libcontainer container kubepods-besteffort-podcac9d2d9_f1c9_43ba_9354_5be8d280e066.slice. Oct 9 00:58:42.571758 kubelet[2657]: E1009 00:58:42.571629 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:42.572324 containerd[1486]: time="2024-10-09T00:58:42.572257613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vh598,Uid:04059549-496f-4b85-8807-72494a463eb4,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:42.577261 kubelet[2657]: E1009 00:58:42.577226 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:42.578149 containerd[1486]: time="2024-10-09T00:58:42.577814145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zgvbs,Uid:c55227cc-27af-435a-ac2f-0bf33d67dae7,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:42.589207 kubelet[2657]: E1009 00:58:42.589179 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:42.607986 containerd[1486]: time="2024-10-09T00:58:42.607731542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:42.607986 containerd[1486]: time="2024-10-09T00:58:42.607784343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:42.607986 containerd[1486]: time="2024-10-09T00:58:42.607797838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:42.608487 containerd[1486]: time="2024-10-09T00:58:42.608404110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:42.610004 containerd[1486]: time="2024-10-09T00:58:42.609890231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:42.610004 containerd[1486]: time="2024-10-09T00:58:42.609961226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:42.610004 containerd[1486]: time="2024-10-09T00:58:42.609982126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:42.610129 containerd[1486]: time="2024-10-09T00:58:42.610079992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:42.628727 systemd[1]: Started cri-containerd-a0e0dc111c814ec57ced507e125d847f396b2e1b32ae77317bdda93c58d810ad.scope - libcontainer container a0e0dc111c814ec57ced507e125d847f396b2e1b32ae77317bdda93c58d810ad. 
Oct 9 00:58:42.630857 kubelet[2657]: I1009 00:58:42.630721 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cac9d2d9-f1c9-43ba-9354-5be8d280e066-cilium-config-path\") pod \"cilium-operator-5cc964979-qgqrf\" (UID: \"cac9d2d9-f1c9-43ba-9354-5be8d280e066\") " pod="kube-system/cilium-operator-5cc964979-qgqrf" Oct 9 00:58:42.630857 kubelet[2657]: I1009 00:58:42.630757 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc2kv\" (UniqueName: \"kubernetes.io/projected/cac9d2d9-f1c9-43ba-9354-5be8d280e066-kube-api-access-gc2kv\") pod \"cilium-operator-5cc964979-qgqrf\" (UID: \"cac9d2d9-f1c9-43ba-9354-5be8d280e066\") " pod="kube-system/cilium-operator-5cc964979-qgqrf" Oct 9 00:58:42.632472 systemd[1]: Started cri-containerd-78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d.scope - libcontainer container 78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d. Oct 9 00:58:42.657953 containerd[1486]: time="2024-10-09T00:58:42.657908274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zgvbs,Uid:c55227cc-27af-435a-ac2f-0bf33d67dae7,Namespace:kube-system,Attempt:0,} returns sandbox id \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\"" Oct 9 00:58:42.659455 kubelet[2657]: E1009 00:58:42.659230 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:42.659568 containerd[1486]: time="2024-10-09T00:58:42.659474358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vh598,Uid:04059549-496f-4b85-8807-72494a463eb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0e0dc111c814ec57ced507e125d847f396b2e1b32ae77317bdda93c58d810ad\"" Oct 9 00:58:42.660034 kubelet[2657]: E1009 00:58:42.660006 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:42.660846 containerd[1486]: time="2024-10-09T00:58:42.660801619Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 9 00:58:42.662465 containerd[1486]: time="2024-10-09T00:58:42.662442896Z" level=info msg="CreateContainer within sandbox \"a0e0dc111c814ec57ced507e125d847f396b2e1b32ae77317bdda93c58d810ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 00:58:42.873078 kubelet[2657]: E1009 00:58:42.872983 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:42.873346 containerd[1486]: time="2024-10-09T00:58:42.873318095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qgqrf,Uid:cac9d2d9-f1c9-43ba-9354-5be8d280e066,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:43.156145 containerd[1486]: time="2024-10-09T00:58:43.156032033Z" level=info msg="CreateContainer within sandbox \"a0e0dc111c814ec57ced507e125d847f396b2e1b32ae77317bdda93c58d810ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf8af929a68983e3ec0c004ead3679749b0b706b29e8adc6a7347d03bfc083c1\"" Oct 9 00:58:43.157083 containerd[1486]: time="2024-10-09T00:58:43.156590994Z" level=info 
msg="StartContainer for \"cf8af929a68983e3ec0c004ead3679749b0b706b29e8adc6a7347d03bfc083c1\"" Oct 9 00:58:43.178583 containerd[1486]: time="2024-10-09T00:58:43.178306913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:58:43.178583 containerd[1486]: time="2024-10-09T00:58:43.178378179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:58:43.178583 containerd[1486]: time="2024-10-09T00:58:43.178396753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:43.179312 containerd[1486]: time="2024-10-09T00:58:43.179113623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:58:43.184719 systemd[1]: Started cri-containerd-cf8af929a68983e3ec0c004ead3679749b0b706b29e8adc6a7347d03bfc083c1.scope - libcontainer container cf8af929a68983e3ec0c004ead3679749b0b706b29e8adc6a7347d03bfc083c1. Oct 9 00:58:43.202636 systemd[1]: Started cri-containerd-a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b.scope - libcontainer container a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b. Oct 9 00:58:43.223006 containerd[1486]: time="2024-10-09T00:58:43.222910704Z" level=info msg="StartContainer for \"cf8af929a68983e3ec0c004ead3679749b0b706b29e8adc6a7347d03bfc083c1\" returns successfully" Oct 9 00:58:43.244075 containerd[1486]: time="2024-10-09T00:58:43.243989915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qgqrf,Uid:cac9d2d9-f1c9-43ba-9354-5be8d280e066,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b\"" Oct 9 00:58:43.245159 kubelet[2657]: E1009 00:58:43.244805 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:43.390301 kubelet[2657]: E1009 00:58:43.390263 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:50.046626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount984039634.mount: Deactivated successfully. 
Oct 9 00:58:52.228592 containerd[1486]: time="2024-10-09T00:58:52.228537945Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:52.229369 containerd[1486]: time="2024-10-09T00:58:52.229314932Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735247" Oct 9 00:58:52.230469 containerd[1486]: time="2024-10-09T00:58:52.230420218Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:52.231771 containerd[1486]: time="2024-10-09T00:58:52.231745701Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.570899417s" Oct 9 00:58:52.231829 containerd[1486]: time="2024-10-09T00:58:52.231772010Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 9 00:58:52.233040 containerd[1486]: time="2024-10-09T00:58:52.233005479Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 9 00:58:52.235025 containerd[1486]: time="2024-10-09T00:58:52.234994193Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 9 00:58:52.247629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654501920.mount: Deactivated successfully. Oct 9 00:58:52.248599 containerd[1486]: time="2024-10-09T00:58:52.248570670Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\"" Oct 9 00:58:52.249001 containerd[1486]: time="2024-10-09T00:58:52.248981896Z" level=info msg="StartContainer for \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\"" Oct 9 00:58:52.285638 systemd[1]: Started cri-containerd-7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070.scope - libcontainer container 7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070. Oct 9 00:58:52.310111 containerd[1486]: time="2024-10-09T00:58:52.310070661Z" level=info msg="StartContainer for \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\" returns successfully" Oct 9 00:58:52.319620 systemd[1]: cri-containerd-7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070.scope: Deactivated successfully. 
Oct 9 00:58:52.407026 kubelet[2657]: E1009 00:58:52.406968 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:52.655329 kubelet[2657]: I1009 00:58:52.655278 2657 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vh598" podStartSLOduration=10.65523888 podStartE2EDuration="10.65523888s" podCreationTimestamp="2024-10-09 00:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:58:43.399335773 +0000 UTC m=+15.138585115" watchObservedRunningTime="2024-10-09 00:58:52.65523888 +0000 UTC m=+24.394488212" Oct 9 00:58:52.781507 containerd[1486]: time="2024-10-09T00:58:52.781451967Z" level=info msg="shim disconnected" id=7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070 namespace=k8s.io Oct 9 00:58:52.781507 containerd[1486]: time="2024-10-09T00:58:52.781500218Z" level=warning msg="cleaning up after shim disconnected" id=7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070 namespace=k8s.io Oct 9 00:58:52.781507 containerd[1486]: time="2024-10-09T00:58:52.781523462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:58:53.245872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070-rootfs.mount: Deactivated successfully. Oct 9 00:58:53.409573 kubelet[2657]: E1009 00:58:53.409547 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:53.411126 containerd[1486]: time="2024-10-09T00:58:53.411090253Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 9 00:58:53.428782 containerd[1486]: time="2024-10-09T00:58:53.428741539Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\"" Oct 9 00:58:53.429411 containerd[1486]: time="2024-10-09T00:58:53.429386055Z" level=info msg="StartContainer for \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\"" Oct 9 00:58:53.468228 systemd[1]: Started cri-containerd-4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64.scope - libcontainer container 4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64. Oct 9 00:58:53.499353 containerd[1486]: time="2024-10-09T00:58:53.498549046Z" level=info msg="StartContainer for \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\" returns successfully" Oct 9 00:58:53.515022 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 00:58:53.515434 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:58:53.515521 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:58:53.525991 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:58:53.526268 systemd[1]: cri-containerd-4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64.scope: Deactivated successfully. 
Oct 9 00:58:53.588001 containerd[1486]: time="2024-10-09T00:58:53.587947999Z" level=info msg="shim disconnected" id=4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64 namespace=k8s.io Oct 9 00:58:53.588218 containerd[1486]: time="2024-10-09T00:58:53.588002713Z" level=warning msg="cleaning up after shim disconnected" id=4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64 namespace=k8s.io Oct 9 00:58:53.588218 containerd[1486]: time="2024-10-09T00:58:53.588011309Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:58:53.589257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:58:53.871069 containerd[1486]: time="2024-10-09T00:58:53.871021622Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:53.871863 containerd[1486]: time="2024-10-09T00:58:53.871821150Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907233" Oct 9 00:58:53.872907 containerd[1486]: time="2024-10-09T00:58:53.872877975Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:58:53.874189 containerd[1486]: time="2024-10-09T00:58:53.874163390Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.641132423s" Oct 9 00:58:53.874230 containerd[1486]: time="2024-10-09T00:58:53.874188206Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 9 00:58:53.875539 containerd[1486]: time="2024-10-09T00:58:53.875504570Z" level=info msg="CreateContainer within sandbox \"a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 9 00:58:53.886406 containerd[1486]: time="2024-10-09T00:58:53.886367267Z" level=info msg="CreateContainer within sandbox \"a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\"" Oct 9 00:58:53.886829 containerd[1486]: time="2024-10-09T00:58:53.886809270Z" level=info msg="StartContainer for \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\"" Oct 9 00:58:53.914665 systemd[1]: Started cri-containerd-61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64.scope - libcontainer container 61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64. 
Oct 9 00:58:53.939810 containerd[1486]: time="2024-10-09T00:58:53.939772646Z" level=info msg="StartContainer for \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\" returns successfully" Oct 9 00:58:54.248675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64-rootfs.mount: Deactivated successfully. Oct 9 00:58:54.412451 kubelet[2657]: E1009 00:58:54.412174 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:54.414680 kubelet[2657]: E1009 00:58:54.414655 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:54.416157 containerd[1486]: time="2024-10-09T00:58:54.416110643Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 9 00:58:54.420527 kubelet[2657]: I1009 00:58:54.419976 2657 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-qgqrf" podStartSLOduration=1.791406475 podStartE2EDuration="12.419941067s" podCreationTimestamp="2024-10-09 00:58:42 +0000 UTC" firstStartedPulling="2024-10-09 00:58:43.245877106 +0000 UTC m=+14.985126448" lastFinishedPulling="2024-10-09 00:58:53.874411698 +0000 UTC m=+25.613661040" observedRunningTime="2024-10-09 00:58:54.419344412 +0000 UTC m=+26.158593754" watchObservedRunningTime="2024-10-09 00:58:54.419941067 +0000 UTC m=+26.159190549" Oct 9 00:58:54.436208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269402667.mount: Deactivated successfully. Oct 9 00:58:54.445387 containerd[1486]: time="2024-10-09T00:58:54.445336075Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\"" Oct 9 00:58:54.446180 containerd[1486]: time="2024-10-09T00:58:54.445861516Z" level=info msg="StartContainer for \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\"" Oct 9 00:58:54.493701 systemd[1]: Started cri-containerd-718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574.scope - libcontainer container 718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574. Oct 9 00:58:54.536811 systemd[1]: cri-containerd-718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574.scope: Deactivated successfully. 
Oct 9 00:58:54.635038 containerd[1486]: time="2024-10-09T00:58:54.634994515Z" level=info msg="StartContainer for \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\" returns successfully" Oct 9 00:58:54.746602 containerd[1486]: time="2024-10-09T00:58:54.746536527Z" level=info msg="shim disconnected" id=718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574 namespace=k8s.io Oct 9 00:58:54.746602 containerd[1486]: time="2024-10-09T00:58:54.746595979Z" level=warning msg="cleaning up after shim disconnected" id=718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574 namespace=k8s.io Oct 9 00:58:54.746602 containerd[1486]: time="2024-10-09T00:58:54.746605507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:58:55.247779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574-rootfs.mount: Deactivated successfully. Oct 9 00:58:55.417597 kubelet[2657]: E1009 00:58:55.417539 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:55.419324 kubelet[2657]: E1009 00:58:55.417610 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:55.419571 containerd[1486]: time="2024-10-09T00:58:55.419535415Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 9 00:58:55.610238 containerd[1486]: time="2024-10-09T00:58:55.610188728Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\"" Oct 9 00:58:55.610941 containerd[1486]: time="2024-10-09T00:58:55.610814758Z" level=info msg="StartContainer for \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\"" Oct 9 00:58:55.645661 systemd[1]: Started cri-containerd-999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f.scope - libcontainer container 999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f. Oct 9 00:58:55.668465 systemd[1]: cri-containerd-999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f.scope: Deactivated successfully. Oct 9 00:58:55.671498 containerd[1486]: time="2024-10-09T00:58:55.671459570Z" level=info msg="StartContainer for \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\" returns successfully" Oct 9 00:58:55.695436 containerd[1486]: time="2024-10-09T00:58:55.695375862Z" level=info msg="shim disconnected" id=999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f namespace=k8s.io Oct 9 00:58:55.695436 containerd[1486]: time="2024-10-09T00:58:55.695434793Z" level=warning msg="cleaning up after shim disconnected" id=999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f namespace=k8s.io Oct 9 00:58:55.695661 containerd[1486]: time="2024-10-09T00:58:55.695447167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:58:56.247741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f-rootfs.mount: Deactivated successfully. 
Oct 9 00:58:56.422406 kubelet[2657]: E1009 00:58:56.421156 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:56.424620 containerd[1486]: time="2024-10-09T00:58:56.424587061Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 9 00:58:56.540495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010882108.mount: Deactivated successfully. Oct 9 00:58:56.541924 containerd[1486]: time="2024-10-09T00:58:56.541887016Z" level=info msg="CreateContainer within sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\"" Oct 9 00:58:56.542372 containerd[1486]: time="2024-10-09T00:58:56.542347733Z" level=info msg="StartContainer for \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\"" Oct 9 00:58:56.572674 systemd[1]: Started cri-containerd-a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e.scope - libcontainer container a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e. Oct 9 00:58:56.601270 containerd[1486]: time="2024-10-09T00:58:56.601218833Z" level=info msg="StartContainer for \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\" returns successfully" Oct 9 00:58:56.758193 kubelet[2657]: I1009 00:58:56.757471 2657 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 00:58:56.784767 kubelet[2657]: I1009 00:58:56.784718 2657 topology_manager.go:215] "Topology Admit Handler" podUID="f35dcb53-fc03-44dd-acf5-d1d6682fe739" podNamespace="kube-system" podName="coredns-76f75df574-5qht5" Oct 9 00:58:56.792785 kubelet[2657]: W1009 00:58:56.792686 2657 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 9 00:58:56.792785 kubelet[2657]: E1009 00:58:56.792724 2657 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Oct 9 00:58:56.794955 systemd[1]: Created slice kubepods-burstable-podf35dcb53_fc03_44dd_acf5_d1d6682fe739.slice - libcontainer container kubepods-burstable-podf35dcb53_fc03_44dd_acf5_d1d6682fe739.slice. Oct 9 00:58:56.797331 kubelet[2657]: I1009 00:58:56.797307 2657 topology_manager.go:215] "Topology Admit Handler" podUID="bb507c0d-065d-4bb6-abe6-9a6cd300340c" podNamespace="kube-system" podName="coredns-76f75df574-drzg8" Oct 9 00:58:56.807962 systemd[1]: Created slice kubepods-burstable-podbb507c0d_065d_4bb6_abe6_9a6cd300340c.slice - libcontainer container kubepods-burstable-podbb507c0d_065d_4bb6_abe6_9a6cd300340c.slice. 
Oct 9 00:58:56.856331 kubelet[2657]: I1009 00:58:56.856304 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfx5n\" (UniqueName: \"kubernetes.io/projected/bb507c0d-065d-4bb6-abe6-9a6cd300340c-kube-api-access-wfx5n\") pod \"coredns-76f75df574-drzg8\" (UID: \"bb507c0d-065d-4bb6-abe6-9a6cd300340c\") " pod="kube-system/coredns-76f75df574-drzg8" Oct 9 00:58:56.856478 kubelet[2657]: I1009 00:58:56.856345 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f35dcb53-fc03-44dd-acf5-d1d6682fe739-config-volume\") pod \"coredns-76f75df574-5qht5\" (UID: \"f35dcb53-fc03-44dd-acf5-d1d6682fe739\") " pod="kube-system/coredns-76f75df574-5qht5" Oct 9 00:58:56.856478 kubelet[2657]: I1009 00:58:56.856364 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb507c0d-065d-4bb6-abe6-9a6cd300340c-config-volume\") pod \"coredns-76f75df574-drzg8\" (UID: \"bb507c0d-065d-4bb6-abe6-9a6cd300340c\") " pod="kube-system/coredns-76f75df574-drzg8" Oct 9 00:58:56.856596 kubelet[2657]: I1009 00:58:56.856499 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgrqz\" (UniqueName: \"kubernetes.io/projected/f35dcb53-fc03-44dd-acf5-d1d6682fe739-kube-api-access-dgrqz\") pod \"coredns-76f75df574-5qht5\" (UID: \"f35dcb53-fc03-44dd-acf5-d1d6682fe739\") " pod="kube-system/coredns-76f75df574-5qht5" Oct 9 00:58:57.424734 kubelet[2657]: E1009 00:58:57.424692 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:57.436039 kubelet[2657]: I1009 00:58:57.435807 2657 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zgvbs" podStartSLOduration=5.864079696 podStartE2EDuration="15.43575984s" podCreationTimestamp="2024-10-09 00:58:42 +0000 UTC" firstStartedPulling="2024-10-09 00:58:42.66048073 +0000 UTC m=+14.399730072" lastFinishedPulling="2024-10-09 00:58:52.232160874 +0000 UTC m=+23.971410216" observedRunningTime="2024-10-09 00:58:57.43560587 +0000 UTC m=+29.174855222" watchObservedRunningTime="2024-10-09 00:58:57.43575984 +0000 UTC m=+29.175009182" Oct 9 00:58:57.478155 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:55444.service - OpenSSH per-connection server daemon (10.0.0.1:55444). Oct 9 00:58:57.515044 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 55444 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:58:57.516854 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:58:57.521096 systemd-logind[1474]: New session 8 of user core. Oct 9 00:58:57.527627 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 00:58:57.650886 sshd[3476]: pam_unix(sshd:session): session closed for user core Oct 9 00:58:57.654473 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:55444.service: Deactivated successfully. Oct 9 00:58:57.656177 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 00:58:57.656739 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit. Oct 9 00:58:57.657543 systemd-logind[1474]: Removed session 8. 
Oct 9 00:58:57.964869 kubelet[2657]: E1009 00:58:57.964816 2657 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 9 00:58:57.964996 kubelet[2657]: E1009 00:58:57.964922 2657 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb507c0d-065d-4bb6-abe6-9a6cd300340c-config-volume podName:bb507c0d-065d-4bb6-abe6-9a6cd300340c nodeName:}" failed. No retries permitted until 2024-10-09 00:58:58.464900779 +0000 UTC m=+30.204150121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bb507c0d-065d-4bb6-abe6-9a6cd300340c-config-volume") pod "coredns-76f75df574-drzg8" (UID: "bb507c0d-065d-4bb6-abe6-9a6cd300340c") : failed to sync configmap cache: timed out waiting for the condition Oct 9 00:58:57.965106 kubelet[2657]: E1009 00:58:57.965004 2657 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 9 00:58:57.965106 kubelet[2657]: E1009 00:58:57.965078 2657 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f35dcb53-fc03-44dd-acf5-d1d6682fe739-config-volume podName:f35dcb53-fc03-44dd-acf5-d1d6682fe739 nodeName:}" failed. No retries permitted until 2024-10-09 00:58:58.4650603 +0000 UTC m=+30.204309642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f35dcb53-fc03-44dd-acf5-d1d6682fe739-config-volume") pod "coredns-76f75df574-5qht5" (UID: "f35dcb53-fc03-44dd-acf5-d1d6682fe739") : failed to sync configmap cache: timed out waiting for the condition Oct 9 00:58:58.426834 kubelet[2657]: E1009 00:58:58.426793 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:58.600125 kubelet[2657]: E1009 00:58:58.600068 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:58.600652 containerd[1486]: time="2024-10-09T00:58:58.600608905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5qht5,Uid:f35dcb53-fc03-44dd-acf5-d1d6682fe739,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:58.610334 kubelet[2657]: E1009 00:58:58.610309 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:58.610687 containerd[1486]: time="2024-10-09T00:58:58.610652773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-drzg8,Uid:bb507c0d-065d-4bb6-abe6-9a6cd300340c,Namespace:kube-system,Attempt:0,}" Oct 9 00:58:58.730215 systemd-networkd[1409]: cilium_host: Link UP Oct 9 00:58:58.730389 systemd-networkd[1409]: cilium_net: Link UP Oct 9 00:58:58.731797 systemd-networkd[1409]: cilium_net: Gained carrier Oct 9 00:58:58.732138 systemd-networkd[1409]: cilium_host: Gained carrier Oct 9 00:58:58.732792 systemd-networkd[1409]: cilium_net: Gained IPv6LL Oct 9 00:58:58.733132 systemd-networkd[1409]: cilium_host: Gained IPv6LL Oct 9 00:58:58.833393 systemd-networkd[1409]: cilium_vxlan: Link UP Oct 9 00:58:58.834895 systemd-networkd[1409]: cilium_vxlan: Gained carrier Oct 9 00:58:59.040554 kernel: NET: Registered PF_ALG protocol family Oct 
9 00:58:59.428769 kubelet[2657]: E1009 00:58:59.428724 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:58:59.702013 systemd-networkd[1409]: lxc_health: Link UP Oct 9 00:58:59.713026 systemd-networkd[1409]: lxc_health: Gained carrier Oct 9 00:59:00.169466 systemd-networkd[1409]: lxcdacd7257db65: Link UP Oct 9 00:59:00.175544 kernel: eth0: renamed from tmp123a9 Oct 9 00:59:00.183865 systemd-networkd[1409]: lxcdacd7257db65: Gained carrier Oct 9 00:59:00.184812 systemd-networkd[1409]: lxc7a9dc26517aa: Link UP Oct 9 00:59:00.192641 kernel: eth0: renamed from tmpc633e Oct 9 00:59:00.202000 systemd-networkd[1409]: lxc7a9dc26517aa: Gained carrier Oct 9 00:59:00.245746 systemd-networkd[1409]: cilium_vxlan: Gained IPv6LL Oct 9 00:59:00.430882 kubelet[2657]: E1009 00:59:00.430481 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:01.269667 systemd-networkd[1409]: lxc7a9dc26517aa: Gained IPv6LL Oct 9 00:59:01.431933 kubelet[2657]: E1009 00:59:01.431908 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:01.461732 systemd-networkd[1409]: lxcdacd7257db65: Gained IPv6LL Oct 9 00:59:01.462292 systemd-networkd[1409]: lxc_health: Gained IPv6LL Oct 9 00:59:02.434058 kubelet[2657]: E1009 00:59:02.434018 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:02.667176 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:55448.service - OpenSSH per-connection server daemon (10.0.0.1:55448). Oct 9 00:59:02.706041 sshd[3896]: Accepted publickey for core from 10.0.0.1 port 55448 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:02.707739 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:02.712636 systemd-logind[1474]: New session 9 of user core. Oct 9 00:59:02.717810 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 00:59:02.837146 sshd[3896]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:02.840852 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:55448.service: Deactivated successfully. Oct 9 00:59:02.842639 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 00:59:02.843230 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit. Oct 9 00:59:02.844048 systemd-logind[1474]: Removed session 9. Oct 9 00:59:03.662989 containerd[1486]: time="2024-10-09T00:59:03.662902860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:59:03.662989 containerd[1486]: time="2024-10-09T00:59:03.662964335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:59:03.662989 containerd[1486]: time="2024-10-09T00:59:03.662978582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:59:03.663484 containerd[1486]: time="2024-10-09T00:59:03.663073411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:59:03.664576 containerd[1486]: time="2024-10-09T00:59:03.664295970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:59:03.664576 containerd[1486]: time="2024-10-09T00:59:03.664355512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:59:03.664576 containerd[1486]: time="2024-10-09T00:59:03.664370020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:59:03.664576 containerd[1486]: time="2024-10-09T00:59:03.664437556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:59:03.678856 systemd[1]: run-containerd-runc-k8s.io-123a9a6bec502b58b8713b3b75ecf0bfc98d0273b2ac3234424225132bb644a9-runc.MbDxgx.mount: Deactivated successfully. Oct 9 00:59:03.704639 systemd[1]: Started cri-containerd-123a9a6bec502b58b8713b3b75ecf0bfc98d0273b2ac3234424225132bb644a9.scope - libcontainer container 123a9a6bec502b58b8713b3b75ecf0bfc98d0273b2ac3234424225132bb644a9. Oct 9 00:59:03.706157 systemd[1]: Started cri-containerd-c633e669fd05466ba90d2d9e227a41c359181b90b4515ea0e2ec15f99bb901c8.scope - libcontainer container c633e669fd05466ba90d2d9e227a41c359181b90b4515ea0e2ec15f99bb901c8. Oct 9 00:59:03.717878 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:59:03.720033 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:59:03.741845 containerd[1486]: time="2024-10-09T00:59:03.741804019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5qht5,Uid:f35dcb53-fc03-44dd-acf5-d1d6682fe739,Namespace:kube-system,Attempt:0,} returns sandbox id \"123a9a6bec502b58b8713b3b75ecf0bfc98d0273b2ac3234424225132bb644a9\"" Oct 9 00:59:03.743467 kubelet[2657]: E1009 00:59:03.743207 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:03.745888 containerd[1486]: time="2024-10-09T00:59:03.745773738Z" level=info msg="CreateContainer within sandbox \"123a9a6bec502b58b8713b3b75ecf0bfc98d0273b2ac3234424225132bb644a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:59:03.747768 containerd[1486]: time="2024-10-09T00:59:03.747747521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-drzg8,Uid:bb507c0d-065d-4bb6-abe6-9a6cd300340c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c633e669fd05466ba90d2d9e227a41c359181b90b4515ea0e2ec15f99bb901c8\"" Oct 9 00:59:03.748858 kubelet[2657]: E1009 00:59:03.748827 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:03.751231 containerd[1486]: time="2024-10-09T00:59:03.751187793Z" level=info msg="CreateContainer within sandbox \"c633e669fd05466ba90d2d9e227a41c359181b90b4515ea0e2ec15f99bb901c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:59:03.763607 containerd[1486]: time="2024-10-09T00:59:03.763561160Z" level=info msg="CreateContainer within sandbox 
\"123a9a6bec502b58b8713b3b75ecf0bfc98d0273b2ac3234424225132bb644a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29ad6753d81760d40f2bf89addc820f0790e891ac334d31de0d9430f3c269833\"" Oct 9 00:59:03.764382 containerd[1486]: time="2024-10-09T00:59:03.763974698Z" level=info msg="StartContainer for \"29ad6753d81760d40f2bf89addc820f0790e891ac334d31de0d9430f3c269833\"" Oct 9 00:59:03.773803 containerd[1486]: time="2024-10-09T00:59:03.773746882Z" level=info msg="CreateContainer within sandbox \"c633e669fd05466ba90d2d9e227a41c359181b90b4515ea0e2ec15f99bb901c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0a6c0c0e43d7f58773298cdcf1b70c1ad5a15ff33d9bc5707243c49883e2af9\"" Oct 9 00:59:03.774653 containerd[1486]: time="2024-10-09T00:59:03.774457388Z" level=info msg="StartContainer for \"f0a6c0c0e43d7f58773298cdcf1b70c1ad5a15ff33d9bc5707243c49883e2af9\"" Oct 9 00:59:03.792649 systemd[1]: Started cri-containerd-29ad6753d81760d40f2bf89addc820f0790e891ac334d31de0d9430f3c269833.scope - libcontainer container 29ad6753d81760d40f2bf89addc820f0790e891ac334d31de0d9430f3c269833. Oct 9 00:59:03.796411 systemd[1]: Started cri-containerd-f0a6c0c0e43d7f58773298cdcf1b70c1ad5a15ff33d9bc5707243c49883e2af9.scope - libcontainer container f0a6c0c0e43d7f58773298cdcf1b70c1ad5a15ff33d9bc5707243c49883e2af9. Oct 9 00:59:04.007224 containerd[1486]: time="2024-10-09T00:59:04.007056176Z" level=info msg="StartContainer for \"f0a6c0c0e43d7f58773298cdcf1b70c1ad5a15ff33d9bc5707243c49883e2af9\" returns successfully" Oct 9 00:59:04.007224 containerd[1486]: time="2024-10-09T00:59:04.007141696Z" level=info msg="StartContainer for \"29ad6753d81760d40f2bf89addc820f0790e891ac334d31de0d9430f3c269833\" returns successfully" Oct 9 00:59:04.438945 kubelet[2657]: E1009 00:59:04.438419 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:04.440805 kubelet[2657]: E1009 00:59:04.440645 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:04.448350 kubelet[2657]: I1009 00:59:04.448089 2657 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-drzg8" podStartSLOduration=22.448037027 podStartE2EDuration="22.448037027s" podCreationTimestamp="2024-10-09 00:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:59:04.447930136 +0000 UTC m=+36.187179478" watchObservedRunningTime="2024-10-09 00:59:04.448037027 +0000 UTC m=+36.187286369" Oct 9 00:59:04.457956 kubelet[2657]: I1009 00:59:04.457893 2657 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5qht5" podStartSLOduration=22.457774221 podStartE2EDuration="22.457774221s" podCreationTimestamp="2024-10-09 00:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:59:04.456899807 +0000 UTC m=+36.196149149" watchObservedRunningTime="2024-10-09 00:59:04.457774221 +0000 UTC m=+36.197023563" Oct 9 00:59:05.442377 kubelet[2657]: E1009 00:59:05.442330 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:05.442826 kubelet[2657]: E1009 00:59:05.442445 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:06.443860 kubelet[2657]: E1009 00:59:06.443828 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:06.444277 kubelet[2657]: E1009 00:59:06.444053 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:07.853976 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:37272.service - OpenSSH per-connection server daemon (10.0.0.1:37272). Oct 9 00:59:07.892897 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 37272 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:07.894867 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:07.899356 systemd-logind[1474]: New session 10 of user core. Oct 9 00:59:07.910660 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 00:59:08.037495 sshd[4087]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:08.042053 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:37272.service: Deactivated successfully. Oct 9 00:59:08.044813 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 00:59:08.045614 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit. Oct 9 00:59:08.046690 systemd-logind[1474]: Removed session 10. Oct 9 00:59:13.049356 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:37280.service - OpenSSH per-connection server daemon (10.0.0.1:37280). Oct 9 00:59:13.084095 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 37280 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:13.085618 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:13.089413 systemd-logind[1474]: New session 11 of user core. Oct 9 00:59:13.101667 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 00:59:13.207603 sshd[4105]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:13.221340 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:37280.service: Deactivated successfully. Oct 9 00:59:13.223173 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 00:59:13.224762 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit. Oct 9 00:59:13.238748 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:37286.service - OpenSSH per-connection server daemon (10.0.0.1:37286). Oct 9 00:59:13.239685 systemd-logind[1474]: Removed session 11. Oct 9 00:59:13.268154 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 37286 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:13.269722 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:13.273459 systemd-logind[1474]: New session 12 of user core. Oct 9 00:59:13.289632 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 00:59:13.437019 sshd[4121]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:13.447108 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:37286.service: Deactivated successfully. 
Oct 9 00:59:13.450351 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 00:59:13.452628 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit. Oct 9 00:59:13.462818 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:37294.service - OpenSSH per-connection server daemon (10.0.0.1:37294). Oct 9 00:59:13.463748 systemd-logind[1474]: Removed session 12. Oct 9 00:59:13.491109 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:13.492760 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:13.497265 systemd-logind[1474]: New session 13 of user core. Oct 9 00:59:13.508659 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 00:59:13.620961 sshd[4135]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:13.624713 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:37294.service: Deactivated successfully. Oct 9 00:59:13.626573 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 00:59:13.627277 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit. Oct 9 00:59:13.628271 systemd-logind[1474]: Removed session 13. Oct 9 00:59:18.635576 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:42592.service - OpenSSH per-connection server daemon (10.0.0.1:42592). Oct 9 00:59:18.668118 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 42592 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:18.669639 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:18.673246 systemd-logind[1474]: New session 14 of user core. Oct 9 00:59:18.683634 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 00:59:18.787376 sshd[4150]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:18.790981 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:42592.service: Deactivated successfully. Oct 9 00:59:18.793012 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 00:59:18.793632 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit. Oct 9 00:59:18.794423 systemd-logind[1474]: Removed session 14. Oct 9 00:59:23.799294 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:42608.service - OpenSSH per-connection server daemon (10.0.0.1:42608). Oct 9 00:59:23.832001 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 42608 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:23.833737 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:23.837597 systemd-logind[1474]: New session 15 of user core. Oct 9 00:59:23.849652 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 00:59:23.961776 sshd[4164]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:23.973373 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:42608.service: Deactivated successfully. Oct 9 00:59:23.975252 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 00:59:23.976850 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit. Oct 9 00:59:23.980884 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:42610.service - OpenSSH per-connection server daemon (10.0.0.1:42610). Oct 9 00:59:23.981773 systemd-logind[1474]: Removed session 15. 
Oct 9 00:59:24.010485 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 42610 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:24.012368 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:24.015960 systemd-logind[1474]: New session 16 of user core. Oct 9 00:59:24.022649 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 00:59:24.193000 sshd[4178]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:24.201299 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:42610.service: Deactivated successfully. Oct 9 00:59:24.203205 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 00:59:24.204733 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit. Oct 9 00:59:24.219867 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:42624.service - OpenSSH per-connection server daemon (10.0.0.1:42624). Oct 9 00:59:24.220828 systemd-logind[1474]: Removed session 16. Oct 9 00:59:24.251303 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 42624 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:24.252762 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:24.256595 systemd-logind[1474]: New session 17 of user core. Oct 9 00:59:24.266631 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 00:59:25.895980 sshd[4190]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:25.907316 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:42624.service: Deactivated successfully. Oct 9 00:59:25.909134 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 00:59:25.910482 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit. Oct 9 00:59:25.917795 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:42638.service - OpenSSH per-connection server daemon (10.0.0.1:42638). Oct 9 00:59:25.918782 systemd-logind[1474]: Removed session 17. Oct 9 00:59:25.947809 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 42638 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:25.949614 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:25.953434 systemd-logind[1474]: New session 18 of user core. Oct 9 00:59:25.962666 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 00:59:26.202392 sshd[4210]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:26.210914 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:42638.service: Deactivated successfully. Oct 9 00:59:26.212685 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 00:59:26.214342 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit. Oct 9 00:59:26.220925 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:42650.service - OpenSSH per-connection server daemon (10.0.0.1:42650). Oct 9 00:59:26.221948 systemd-logind[1474]: Removed session 18. Oct 9 00:59:26.250574 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 42650 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:26.252164 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:26.256147 systemd-logind[1474]: New session 19 of user core. Oct 9 00:59:26.265725 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 9 00:59:26.377012 sshd[4223]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:26.381644 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:42650.service: Deactivated successfully. Oct 9 00:59:26.383469 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 00:59:26.384296 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit. Oct 9 00:59:26.385176 systemd-logind[1474]: Removed session 19. Oct 9 00:59:31.392018 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:46324.service - OpenSSH per-connection server daemon (10.0.0.1:46324). Oct 9 00:59:31.447290 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 46324 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:31.449820 sshd[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:31.455968 systemd-logind[1474]: New session 20 of user core. Oct 9 00:59:31.466929 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 00:59:31.679980 sshd[4239]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:31.685762 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:46324.service: Deactivated successfully. Oct 9 00:59:31.688215 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 00:59:31.689190 systemd-logind[1474]: Session 20 logged out. Waiting for processes to exit. Oct 9 00:59:31.690850 systemd-logind[1474]: Removed session 20. Oct 9 00:59:36.691535 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:46336.service - OpenSSH per-connection server daemon (10.0.0.1:46336). Oct 9 00:59:36.727162 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 46336 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:36.728689 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:36.732811 systemd-logind[1474]: New session 21 of user core. Oct 9 00:59:36.739646 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 00:59:36.850930 sshd[4257]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:36.855403 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:46336.service: Deactivated successfully. Oct 9 00:59:36.857397 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 00:59:36.858090 systemd-logind[1474]: Session 21 logged out. Waiting for processes to exit. Oct 9 00:59:36.859116 systemd-logind[1474]: Removed session 21. Oct 9 00:59:41.869353 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:42162.service - OpenSSH per-connection server daemon (10.0.0.1:42162). Oct 9 00:59:41.906984 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 42162 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:41.908876 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:41.913344 systemd-logind[1474]: New session 22 of user core. Oct 9 00:59:41.922707 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 00:59:42.033489 sshd[4271]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:42.038609 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:42162.service: Deactivated successfully. Oct 9 00:59:42.040869 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 00:59:42.041661 systemd-logind[1474]: Session 22 logged out. Waiting for processes to exit. Oct 9 00:59:42.042701 systemd-logind[1474]: Removed session 22. Oct 9 00:59:47.045729 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:56560.service - OpenSSH per-connection server daemon (10.0.0.1:56560). 
Oct 9 00:59:47.078884 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 56560 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:47.080557 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:47.084504 systemd-logind[1474]: New session 23 of user core. Oct 9 00:59:47.096733 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 00:59:47.199640 sshd[4288]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:47.217403 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:56560.service: Deactivated successfully. Oct 9 00:59:47.219124 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 00:59:47.220610 systemd-logind[1474]: Session 23 logged out. Waiting for processes to exit. Oct 9 00:59:47.236029 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:56572.service - OpenSSH per-connection server daemon (10.0.0.1:56572). Oct 9 00:59:47.236857 systemd-logind[1474]: Removed session 23. Oct 9 00:59:47.263674 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 56572 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:47.265111 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:47.268727 systemd-logind[1474]: New session 24 of user core. Oct 9 00:59:47.279639 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 00:59:49.063305 containerd[1486]: time="2024-10-09T00:59:49.063223204Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:59:49.064060 containerd[1486]: time="2024-10-09T00:59:49.063441673Z" level=info msg="StopContainer for \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\" with timeout 30 (s)" Oct 9 00:59:49.064091 containerd[1486]: time="2024-10-09T00:59:49.064062133Z" level=info msg="Stop container \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\" with signal terminated" Oct 9 00:59:49.066589 containerd[1486]: time="2024-10-09T00:59:49.066527594Z" level=info msg="StopContainer for \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\" with timeout 2 (s)" Oct 9 00:59:49.066906 containerd[1486]: time="2024-10-09T00:59:49.066843600Z" level=info msg="Stop container \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\" with signal terminated" Oct 9 00:59:49.074987 systemd-networkd[1409]: lxc_health: Link DOWN Oct 9 00:59:49.074996 systemd-networkd[1409]: lxc_health: Lost carrier Oct 9 00:59:49.080264 systemd[1]: cri-containerd-61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64.scope: Deactivated successfully. Oct 9 00:59:49.102304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64-rootfs.mount: Deactivated successfully. Oct 9 00:59:49.104044 systemd[1]: cri-containerd-a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e.scope: Deactivated successfully. Oct 9 00:59:49.104562 systemd[1]: cri-containerd-a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e.scope: Consumed 6.785s CPU time. Oct 9 00:59:49.123948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e-rootfs.mount: Deactivated successfully. 
Oct 9 00:59:49.166880 containerd[1486]: time="2024-10-09T00:59:49.166818788Z" level=info msg="shim disconnected" id=61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64 namespace=k8s.io Oct 9 00:59:49.166880 containerd[1486]: time="2024-10-09T00:59:49.166875598Z" level=warning msg="cleaning up after shim disconnected" id=61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64 namespace=k8s.io Oct 9 00:59:49.166880 containerd[1486]: time="2024-10-09T00:59:49.166884425Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:59:49.174437 containerd[1486]: time="2024-10-09T00:59:49.174358855Z" level=info msg="shim disconnected" id=a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e namespace=k8s.io Oct 9 00:59:49.174437 containerd[1486]: time="2024-10-09T00:59:49.174429872Z" level=warning msg="cleaning up after shim disconnected" id=a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e namespace=k8s.io Oct 9 00:59:49.174437 containerd[1486]: time="2024-10-09T00:59:49.174438768Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:59:49.189024 containerd[1486]: time="2024-10-09T00:59:49.188973125Z" level=info msg="StopContainer for \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\" returns successfully" Oct 9 00:59:49.189702 containerd[1486]: time="2024-10-09T00:59:49.189674110Z" level=info msg="StopPodSandbox for \"a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b\"" Oct 9 00:59:49.196020 containerd[1486]: time="2024-10-09T00:59:49.195939712Z" level=info msg="StopContainer for \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\" returns successfully" Oct 9 00:59:49.196503 containerd[1486]: time="2024-10-09T00:59:49.196477364Z" level=info msg="StopPodSandbox for \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\"" Oct 9 00:59:49.201083 containerd[1486]: time="2024-10-09T00:59:49.196531928Z" level=info msg="Container to stop \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:59:49.201083 containerd[1486]: time="2024-10-09T00:59:49.201081627Z" level=info msg="Container to stop \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:59:49.201175 containerd[1486]: time="2024-10-09T00:59:49.201092860Z" level=info msg="Container to stop \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:59:49.201175 containerd[1486]: time="2024-10-09T00:59:49.201101957Z" level=info msg="Container to stop \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:59:49.201175 containerd[1486]: time="2024-10-09T00:59:49.201112566Z" level=info msg="Container to stop \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:59:49.203957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d-shm.mount: Deactivated successfully. Oct 9 00:59:49.208197 systemd[1]: cri-containerd-78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d.scope: Deactivated successfully. 
Oct 9 00:59:49.211684 containerd[1486]: time="2024-10-09T00:59:49.189717734Z" level=info msg="Container to stop \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:59:49.214033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b-shm.mount: Deactivated successfully. Oct 9 00:59:49.225195 systemd[1]: cri-containerd-a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b.scope: Deactivated successfully. Oct 9 00:59:49.280744 containerd[1486]: time="2024-10-09T00:59:49.280536765Z" level=info msg="shim disconnected" id=78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d namespace=k8s.io Oct 9 00:59:49.280744 containerd[1486]: time="2024-10-09T00:59:49.280594415Z" level=warning msg="cleaning up after shim disconnected" id=78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d namespace=k8s.io Oct 9 00:59:49.280744 containerd[1486]: time="2024-10-09T00:59:49.280602130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:59:49.281071 containerd[1486]: time="2024-10-09T00:59:49.280881286Z" level=info msg="shim disconnected" id=a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b namespace=k8s.io Oct 9 00:59:49.281071 containerd[1486]: time="2024-10-09T00:59:49.280906344Z" level=warning msg="cleaning up after shim disconnected" id=a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b namespace=k8s.io Oct 9 00:59:49.281071 containerd[1486]: time="2024-10-09T00:59:49.280914299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:59:49.295047 containerd[1486]: time="2024-10-09T00:59:49.294989154Z" level=info msg="TearDown network for sandbox \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" successfully" Oct 9 00:59:49.295047 containerd[1486]: time="2024-10-09T00:59:49.295032578Z" level=info msg="StopPodSandbox for \"78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d\" returns successfully" Oct 9 00:59:49.296737 containerd[1486]: time="2024-10-09T00:59:49.296709325Z" level=info msg="TearDown network for sandbox \"a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b\" successfully" Oct 9 00:59:49.296737 containerd[1486]: time="2024-10-09T00:59:49.296736236Z" level=info msg="StopPodSandbox for \"a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b\" returns successfully" Oct 9 00:59:49.356709 kubelet[2657]: I1009 00:59:49.356558 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-hostproc\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.356709 kubelet[2657]: I1009 00:59:49.356598 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-host-proc-sys-kernel\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.356709 kubelet[2657]: I1009 00:59:49.356616 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cni-path\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 
00:59:49.356709 kubelet[2657]: I1009 00:59:49.356641 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c55227cc-27af-435a-ac2f-0bf33d67dae7-clustermesh-secrets\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.356709 kubelet[2657]: I1009 00:59:49.356660 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-bpf-maps\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.356709 kubelet[2657]: I1009 00:59:49.356682 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-528s8\" (UniqueName: \"kubernetes.io/projected/c55227cc-27af-435a-ac2f-0bf33d67dae7-kube-api-access-528s8\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.360474 kubelet[2657]: I1009 00:59:49.356706 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-host-proc-sys-net\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.360474 kubelet[2657]: I1009 00:59:49.356723 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-lib-modules\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.360474 kubelet[2657]: I1009 00:59:49.356706 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-hostproc" (OuterVolumeSpecName: "hostproc") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.360474 kubelet[2657]: I1009 00:59:49.356741 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-cgroup\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.360474 kubelet[2657]: I1009 00:59:49.356786 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.360474 kubelet[2657]: I1009 00:59:49.356821 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc2kv\" (UniqueName: \"kubernetes.io/projected/cac9d2d9-f1c9-43ba-9354-5be8d280e066-kube-api-access-gc2kv\") pod \"cac9d2d9-f1c9-43ba-9354-5be8d280e066\" (UID: \"cac9d2d9-f1c9-43ba-9354-5be8d280e066\") " Oct 9 00:59:49.360637 kubelet[2657]: I1009 00:59:49.356889 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-xtables-lock\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.360637 kubelet[2657]: I1009 00:59:49.356909 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-run\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.360637 kubelet[2657]: I1009 00:59:49.356969 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-config-path\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.360637 kubelet[2657]: I1009 00:59:49.356988 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-etc-cni-netd\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.360637 kubelet[2657]: I1009 00:59:49.357009 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c55227cc-27af-435a-ac2f-0bf33d67dae7-hubble-tls\") pod \"c55227cc-27af-435a-ac2f-0bf33d67dae7\" (UID: \"c55227cc-27af-435a-ac2f-0bf33d67dae7\") " Oct 9 00:59:49.360637 kubelet[2657]: I1009 00:59:49.357045 2657 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cac9d2d9-f1c9-43ba-9354-5be8d280e066-cilium-config-path\") pod \"cac9d2d9-f1c9-43ba-9354-5be8d280e066\" (UID: \"cac9d2d9-f1c9-43ba-9354-5be8d280e066\") " Oct 9 00:59:49.360788 kubelet[2657]: I1009 00:59:49.357091 2657 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.360788 kubelet[2657]: I1009 00:59:49.357119 2657 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.360788 kubelet[2657]: I1009 00:59:49.357462 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.360788 kubelet[2657]: I1009 00:59:49.357498 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.360788 kubelet[2657]: I1009 00:59:49.357567 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cni-path" (OuterVolumeSpecName: "cni-path") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.361017 kubelet[2657]: I1009 00:59:49.357591 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.361017 kubelet[2657]: I1009 00:59:49.356774 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.361017 kubelet[2657]: I1009 00:59:49.359429 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.361017 kubelet[2657]: I1009 00:59:49.360185 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.361243 kubelet[2657]: I1009 00:59:49.361212 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55227cc-27af-435a-ac2f-0bf33d67dae7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 00:59:49.361336 kubelet[2657]: I1009 00:59:49.361323 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:59:49.362678 kubelet[2657]: I1009 00:59:49.362660 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 00:59:49.363243 kubelet[2657]: I1009 00:59:49.363218 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55227cc-27af-435a-ac2f-0bf33d67dae7-kube-api-access-528s8" (OuterVolumeSpecName: "kube-api-access-528s8") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "kube-api-access-528s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:59:49.363411 kubelet[2657]: I1009 00:59:49.363373 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cac9d2d9-f1c9-43ba-9354-5be8d280e066-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cac9d2d9-f1c9-43ba-9354-5be8d280e066" (UID: "cac9d2d9-f1c9-43ba-9354-5be8d280e066"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 00:59:49.363659 kubelet[2657]: I1009 00:59:49.363627 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55227cc-27af-435a-ac2f-0bf33d67dae7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c55227cc-27af-435a-ac2f-0bf33d67dae7" (UID: "c55227cc-27af-435a-ac2f-0bf33d67dae7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:59:49.363800 kubelet[2657]: I1009 00:59:49.363741 2657 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac9d2d9-f1c9-43ba-9354-5be8d280e066-kube-api-access-gc2kv" (OuterVolumeSpecName: "kube-api-access-gc2kv") pod "cac9d2d9-f1c9-43ba-9354-5be8d280e066" (UID: "cac9d2d9-f1c9-43ba-9354-5be8d280e066"). InnerVolumeSpecName "kube-api-access-gc2kv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:59:49.458228 kubelet[2657]: I1009 00:59:49.458178 2657 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458228 kubelet[2657]: I1009 00:59:49.458214 2657 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gc2kv\" (UniqueName: \"kubernetes.io/projected/cac9d2d9-f1c9-43ba-9354-5be8d280e066-kube-api-access-gc2kv\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458228 kubelet[2657]: I1009 00:59:49.458228 2657 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458228 kubelet[2657]: I1009 00:59:49.458237 2657 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458228 kubelet[2657]: I1009 00:59:49.458247 2657 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c55227cc-27af-435a-ac2f-0bf33d67dae7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458506 kubelet[2657]: I1009 00:59:49.458260 2657 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458506 kubelet[2657]: I1009 00:59:49.458270 2657 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c55227cc-27af-435a-ac2f-0bf33d67dae7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458506 kubelet[2657]: I1009 00:59:49.458279 2657 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cac9d2d9-f1c9-43ba-9354-5be8d280e066-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458506 kubelet[2657]: I1009 00:59:49.458289 2657 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458506 kubelet[2657]: I1009 00:59:49.458298 2657 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458506 kubelet[2657]: I1009 00:59:49.458308 2657 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c55227cc-27af-435a-ac2f-0bf33d67dae7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458506 kubelet[2657]: I1009 00:59:49.458318 2657 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-528s8\" (UniqueName: \"kubernetes.io/projected/c55227cc-27af-435a-ac2f-0bf33d67dae7-kube-api-access-528s8\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458506 kubelet[2657]: I1009 00:59:49.458328 2657 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.458723 kubelet[2657]: I1009 00:59:49.458337 2657 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c55227cc-27af-435a-ac2f-0bf33d67dae7-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 9 00:59:49.520384 kubelet[2657]: I1009 00:59:49.520345 2657 scope.go:117] "RemoveContainer" containerID="61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64" Oct 9 00:59:49.528475 containerd[1486]: time="2024-10-09T00:59:49.528386214Z" level=info msg="RemoveContainer for \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\"" Oct 9 00:59:49.528624 systemd[1]: Removed slice kubepods-besteffort-podcac9d2d9_f1c9_43ba_9354_5be8d280e066.slice - libcontainer container kubepods-besteffort-podcac9d2d9_f1c9_43ba_9354_5be8d280e066.slice. Oct 9 00:59:49.532818 systemd[1]: Removed slice kubepods-burstable-podc55227cc_27af_435a_ac2f_0bf33d67dae7.slice - libcontainer container kubepods-burstable-podc55227cc_27af_435a_ac2f_0bf33d67dae7.slice. Oct 9 00:59:49.532914 systemd[1]: kubepods-burstable-podc55227cc_27af_435a_ac2f_0bf33d67dae7.slice: Consumed 6.887s CPU time. Oct 9 00:59:49.592298 containerd[1486]: time="2024-10-09T00:59:49.592253243Z" level=info msg="RemoveContainer for \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\" returns successfully" Oct 9 00:59:49.592939 kubelet[2657]: I1009 00:59:49.592912 2657 scope.go:117] "RemoveContainer" containerID="61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64" Oct 9 00:59:49.593189 containerd[1486]: time="2024-10-09T00:59:49.593153209Z" level=error msg="ContainerStatus for \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\": not found" Oct 9 00:59:49.602476 kubelet[2657]: E1009 00:59:49.602437 2657 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\": not found" containerID="61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64" Oct 9 00:59:49.602572 kubelet[2657]: I1009 00:59:49.602558 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64"} err="failed to get container status \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\": rpc error: code = NotFound desc = an error occurred when try to find container \"61ca29b91650b26897efb24ab23c09236c437532f85baaf8d17953ef93aeab64\": not found" Oct 9 00:59:49.602605 kubelet[2657]: I1009 00:59:49.602577 2657 scope.go:117] "RemoveContainer" containerID="a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e" Oct 9 00:59:49.603973 containerd[1486]: time="2024-10-09T00:59:49.603922883Z" level=info msg="RemoveContainer for \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\"" Oct 9 00:59:49.672303 containerd[1486]: time="2024-10-09T00:59:49.672174325Z" level=info msg="RemoveContainer for \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\" returns successfully" Oct 9 00:59:49.672504 kubelet[2657]: I1009 00:59:49.672457 2657 scope.go:117] 
"RemoveContainer" containerID="999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f" Oct 9 00:59:49.674143 containerd[1486]: time="2024-10-09T00:59:49.674103235Z" level=info msg="RemoveContainer for \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\"" Oct 9 00:59:49.744804 containerd[1486]: time="2024-10-09T00:59:49.744767075Z" level=info msg="RemoveContainer for \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\" returns successfully" Oct 9 00:59:49.745045 kubelet[2657]: I1009 00:59:49.745017 2657 scope.go:117] "RemoveContainer" containerID="718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574" Oct 9 00:59:49.746160 containerd[1486]: time="2024-10-09T00:59:49.746113188Z" level=info msg="RemoveContainer for \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\"" Oct 9 00:59:49.802304 containerd[1486]: time="2024-10-09T00:59:49.802253020Z" level=info msg="RemoveContainer for \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\" returns successfully" Oct 9 00:59:49.802576 kubelet[2657]: I1009 00:59:49.802538 2657 scope.go:117] "RemoveContainer" containerID="4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64" Oct 9 00:59:49.803535 containerd[1486]: time="2024-10-09T00:59:49.803469225Z" level=info msg="RemoveContainer for \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\"" Oct 9 00:59:49.871900 containerd[1486]: time="2024-10-09T00:59:49.871847568Z" level=info msg="RemoveContainer for \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\" returns successfully" Oct 9 00:59:49.872108 kubelet[2657]: I1009 00:59:49.872073 2657 scope.go:117] "RemoveContainer" containerID="7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070" Oct 9 00:59:49.873119 containerd[1486]: time="2024-10-09T00:59:49.873063571Z" level=info msg="RemoveContainer for \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\"" Oct 9 00:59:49.934051 containerd[1486]: time="2024-10-09T00:59:49.933892893Z" level=info msg="RemoveContainer for \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\" returns successfully" Oct 9 00:59:49.934197 kubelet[2657]: I1009 00:59:49.934022 2657 scope.go:117] "RemoveContainer" containerID="a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e" Oct 9 00:59:49.934251 containerd[1486]: time="2024-10-09T00:59:49.934158112Z" level=error msg="ContainerStatus for \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\": not found" Oct 9 00:59:49.934298 kubelet[2657]: E1009 00:59:49.934278 2657 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\": not found" containerID="a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e" Oct 9 00:59:49.934338 kubelet[2657]: I1009 00:59:49.934312 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e"} err="failed to get container status \"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"a4048b056d6cf9a0870645074b18c62edd67744e9cc119d99b4147c5da011f6e\": not found" Oct 9 00:59:49.934338 kubelet[2657]: I1009 00:59:49.934322 2657 scope.go:117] "RemoveContainer" containerID="999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f" Oct 9 00:59:49.934461 containerd[1486]: time="2024-10-09T00:59:49.934431566Z" level=error msg="ContainerStatus for \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\": not found" Oct 9 00:59:49.934560 kubelet[2657]: E1009 00:59:49.934539 2657 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\": not found" containerID="999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f" Oct 9 00:59:49.934560 kubelet[2657]: I1009 00:59:49.934559 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f"} err="failed to get container status \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\": rpc error: code = NotFound desc = an error occurred when try to find container \"999b53befefc966426ff0d357438703097cc63e3d3fc813b5a6d4b2c014b343f\": not found" Oct 9 00:59:49.934638 kubelet[2657]: I1009 00:59:49.934568 2657 scope.go:117] "RemoveContainer" containerID="718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574" Oct 9 00:59:49.934771 containerd[1486]: time="2024-10-09T00:59:49.934685313Z" level=error msg="ContainerStatus for \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\": not found" Oct 9 00:59:49.934889 kubelet[2657]: E1009 00:59:49.934867 2657 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\": not found" containerID="718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574" Oct 9 00:59:49.934929 kubelet[2657]: I1009 00:59:49.934901 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574"} err="failed to get container status \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\": rpc error: code = NotFound desc = an error occurred when try to find container \"718df843338a55645b2c014ab349b5defb1060224439c29d682dde6b82bc0574\": not found" Oct 9 00:59:49.934929 kubelet[2657]: I1009 00:59:49.934910 2657 scope.go:117] "RemoveContainer" containerID="4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64" Oct 9 00:59:49.935061 containerd[1486]: time="2024-10-09T00:59:49.935029934Z" level=error msg="ContainerStatus for \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\": not found" Oct 9 00:59:49.935136 kubelet[2657]: E1009 00:59:49.935119 2657 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\": not found" containerID="4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64" Oct 9 00:59:49.935175 kubelet[2657]: I1009 00:59:49.935139 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64"} err="failed to get container status \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\": rpc error: code = NotFound desc = an error occurred when try to find container \"4bff078164639cff7c88da1ee396825a25524e4ef6aa470a9e383826e34bce64\": not found" Oct 9 00:59:49.935175 kubelet[2657]: I1009 00:59:49.935147 2657 scope.go:117] "RemoveContainer" containerID="7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070" Oct 9 00:59:49.935355 containerd[1486]: time="2024-10-09T00:59:49.935304441Z" level=error msg="ContainerStatus for \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\": not found" Oct 9 00:59:49.935456 kubelet[2657]: E1009 00:59:49.935404 2657 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\": not found" containerID="7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070" Oct 9 00:59:49.935456 kubelet[2657]: I1009 00:59:49.935421 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070"} err="failed to get container status \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a3ecec28b7af9db27a513f388bd23f71a471bf5b069d4e2158f76f6f29ec070\": not found" Oct 9 00:59:50.038823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2aec6853fda34e370be0d66f2b3f913ef0d19ab416e0c3bec3c2dad1992613b-rootfs.mount: Deactivated successfully. Oct 9 00:59:50.038924 systemd[1]: var-lib-kubelet-pods-cac9d2d9\x2df1c9\x2d43ba\x2d9354\x2d5be8d280e066-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgc2kv.mount: Deactivated successfully. Oct 9 00:59:50.038999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78649a6edb93e5584f762515406b945bc06db8f7df3bc3ecce3463462bface0d-rootfs.mount: Deactivated successfully. Oct 9 00:59:50.039073 systemd[1]: var-lib-kubelet-pods-c55227cc\x2d27af\x2d435a\x2dac2f\x2d0bf33d67dae7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 9 00:59:50.039151 systemd[1]: var-lib-kubelet-pods-c55227cc\x2d27af\x2d435a\x2dac2f\x2d0bf33d67dae7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d528s8.mount: Deactivated successfully. Oct 9 00:59:50.039223 systemd[1]: var-lib-kubelet-pods-c55227cc\x2d27af\x2d435a\x2dac2f\x2d0bf33d67dae7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 9 00:59:50.360553 kubelet[2657]: E1009 00:59:50.360482 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:50.362858 kubelet[2657]: I1009 00:59:50.362838 2657 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c55227cc-27af-435a-ac2f-0bf33d67dae7" path="/var/lib/kubelet/pods/c55227cc-27af-435a-ac2f-0bf33d67dae7/volumes" Oct 9 00:59:50.363782 kubelet[2657]: I1009 00:59:50.363754 2657 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cac9d2d9-f1c9-43ba-9354-5be8d280e066" path="/var/lib/kubelet/pods/cac9d2d9-f1c9-43ba-9354-5be8d280e066/volumes" Oct 9 00:59:50.897068 sshd[4302]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:50.910416 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:56572.service: Deactivated successfully. Oct 9 00:59:50.912652 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 00:59:50.912880 systemd[1]: session-24.scope: Consumed 1.014s CPU time. Oct 9 00:59:50.914383 systemd-logind[1474]: Session 24 logged out. Waiting for processes to exit. Oct 9 00:59:50.925261 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:56580.service - OpenSSH per-connection server daemon (10.0.0.1:56580). Oct 9 00:59:50.926673 systemd-logind[1474]: Removed session 24. Oct 9 00:59:50.958677 sshd[4464]: Accepted publickey for core from 10.0.0.1 port 56580 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:50.960345 sshd[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:50.965100 systemd-logind[1474]: New session 25 of user core. Oct 9 00:59:50.974749 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 9 00:59:51.360857 kubelet[2657]: E1009 00:59:51.360801 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:51.517427 sshd[4464]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:51.526939 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:56580.service: Deactivated successfully. Oct 9 00:59:51.530430 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 00:59:51.537158 systemd-logind[1474]: Session 25 logged out. Waiting for processes to exit. 
Oct 9 00:59:51.541341 kubelet[2657]: I1009 00:59:51.541297 2657 topology_manager.go:215] "Topology Admit Handler" podUID="6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9" podNamespace="kube-system" podName="cilium-g6l9j" Oct 9 00:59:51.541558 kubelet[2657]: E1009 00:59:51.541378 2657 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c55227cc-27af-435a-ac2f-0bf33d67dae7" containerName="mount-cgroup" Oct 9 00:59:51.541558 kubelet[2657]: E1009 00:59:51.541393 2657 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c55227cc-27af-435a-ac2f-0bf33d67dae7" containerName="apply-sysctl-overwrites" Oct 9 00:59:51.541558 kubelet[2657]: E1009 00:59:51.541405 2657 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c55227cc-27af-435a-ac2f-0bf33d67dae7" containerName="mount-bpf-fs" Oct 9 00:59:51.541558 kubelet[2657]: E1009 00:59:51.541415 2657 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cac9d2d9-f1c9-43ba-9354-5be8d280e066" containerName="cilium-operator" Oct 9 00:59:51.541558 kubelet[2657]: E1009 00:59:51.541424 2657 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c55227cc-27af-435a-ac2f-0bf33d67dae7" containerName="clean-cilium-state" Oct 9 00:59:51.541558 kubelet[2657]: E1009 00:59:51.541433 2657 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c55227cc-27af-435a-ac2f-0bf33d67dae7" containerName="cilium-agent" Oct 9 00:59:51.541558 kubelet[2657]: I1009 00:59:51.541461 2657 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac9d2d9-f1c9-43ba-9354-5be8d280e066" containerName="cilium-operator" Oct 9 00:59:51.541558 kubelet[2657]: I1009 00:59:51.541471 2657 memory_manager.go:354] "RemoveStaleState removing state" podUID="c55227cc-27af-435a-ac2f-0bf33d67dae7" containerName="cilium-agent" Oct 9 00:59:51.544553 systemd[1]: Started sshd@25-10.0.0.51:22-10.0.0.1:56592.service - OpenSSH per-connection server daemon (10.0.0.1:56592). Oct 9 00:59:51.548149 systemd-logind[1474]: Removed session 25. Oct 9 00:59:51.555114 systemd[1]: Created slice kubepods-burstable-pod6b4a63ef_c9a0_4ec5_a3cc_148b67a8f2a9.slice - libcontainer container kubepods-burstable-pod6b4a63ef_c9a0_4ec5_a3cc_148b67a8f2a9.slice. 
Oct 9 00:59:51.571420 kubelet[2657]: I1009 00:59:51.571033 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-clustermesh-secrets\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571420 kubelet[2657]: I1009 00:59:51.571090 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-etc-cni-netd\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571420 kubelet[2657]: I1009 00:59:51.571115 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-cilium-ipsec-secrets\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571420 kubelet[2657]: I1009 00:59:51.571139 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrgs7\" (UniqueName: \"kubernetes.io/projected/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-kube-api-access-jrgs7\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571420 kubelet[2657]: I1009 00:59:51.571163 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-cilium-config-path\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571754 kubelet[2657]: I1009 00:59:51.571187 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-hubble-tls\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571754 kubelet[2657]: I1009 00:59:51.571214 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-host-proc-sys-kernel\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571754 kubelet[2657]: I1009 00:59:51.571238 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-cilium-run\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571754 kubelet[2657]: I1009 00:59:51.571261 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-xtables-lock\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j"
Oct 9 00:59:51.571754 kubelet[2657]: I1009 00:59:51.571291 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-bpf-maps\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571754 kubelet[2657]: I1009 00:59:51.571317 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-host-proc-sys-net\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571934 kubelet[2657]: I1009 00:59:51.571346 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-hostproc\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571934 kubelet[2657]: I1009 00:59:51.571375 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-cilium-cgroup\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571934 kubelet[2657]: I1009 00:59:51.571403 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-cni-path\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.571934 kubelet[2657]: I1009 00:59:51.571434 2657 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9-lib-modules\") pod \"cilium-g6l9j\" (UID: \"6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9\") " pod="kube-system/cilium-g6l9j" Oct 9 00:59:51.587184 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 56592 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:51.589088 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:51.593753 systemd-logind[1474]: New session 26 of user core. Oct 9 00:59:51.603783 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 9 00:59:51.655191 sshd[4477]: pam_unix(sshd:session): session closed for user core Oct 9 00:59:51.667441 systemd[1]: sshd@25-10.0.0.51:22-10.0.0.1:56592.service: Deactivated successfully. Oct 9 00:59:51.669196 systemd[1]: session-26.scope: Deactivated successfully. Oct 9 00:59:51.670541 systemd-logind[1474]: Session 26 logged out. Waiting for processes to exit. Oct 9 00:59:51.679936 systemd[1]: Started sshd@26-10.0.0.51:22-10.0.0.1:56606.service - OpenSSH per-connection server daemon (10.0.0.1:56606). Oct 9 00:59:51.691808 systemd-logind[1474]: Removed session 26. Oct 9 00:59:51.713630 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 56606 ssh2: RSA SHA256:MTKbJJT08JUiJ02ibyBV4OYlBhhaQNgaLIU6YJtedws Oct 9 00:59:51.715469 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:59:51.719617 systemd-logind[1474]: New session 27 of user core. Oct 9 00:59:51.729678 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 9 00:59:51.861184 kubelet[2657]: E1009 00:59:51.861131 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:51.861809 containerd[1486]: time="2024-10-09T00:59:51.861749935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6l9j,Uid:6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9,Namespace:kube-system,Attempt:0,}" Oct 9 00:59:51.891450 containerd[1486]: time="2024-10-09T00:59:51.891253477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:59:51.891450 containerd[1486]: time="2024-10-09T00:59:51.891317720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:59:51.891450 containerd[1486]: time="2024-10-09T00:59:51.891329453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:59:51.891730 containerd[1486]: time="2024-10-09T00:59:51.891419545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:59:51.917810 systemd[1]: Started cri-containerd-9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18.scope - libcontainer container 9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18. Oct 9 00:59:51.943390 containerd[1486]: time="2024-10-09T00:59:51.943338594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6l9j,Uid:6b4a63ef-c9a0-4ec5-a3cc-148b67a8f2a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\"" Oct 9 00:59:51.944302 kubelet[2657]: E1009 00:59:51.944267 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:51.946412 containerd[1486]: time="2024-10-09T00:59:51.946348823Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 9 00:59:51.963658 containerd[1486]: time="2024-10-09T00:59:51.963566269Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76727b040b5052f04f18cd7890fc661d4ae81418e45d838c00eb4f4c27864f22\"" Oct 9 00:59:51.964426 containerd[1486]: time="2024-10-09T00:59:51.964393734Z" level=info msg="StartContainer for \"76727b040b5052f04f18cd7890fc661d4ae81418e45d838c00eb4f4c27864f22\"" Oct 9 00:59:52.000829 systemd[1]: Started cri-containerd-76727b040b5052f04f18cd7890fc661d4ae81418e45d838c00eb4f4c27864f22.scope - libcontainer container 76727b040b5052f04f18cd7890fc661d4ae81418e45d838c00eb4f4c27864f22. Oct 9 00:59:52.032983 containerd[1486]: time="2024-10-09T00:59:52.032935555Z" level=info msg="StartContainer for \"76727b040b5052f04f18cd7890fc661d4ae81418e45d838c00eb4f4c27864f22\" returns successfully" Oct 9 00:59:52.042875 systemd[1]: cri-containerd-76727b040b5052f04f18cd7890fc661d4ae81418e45d838c00eb4f4c27864f22.scope: Deactivated successfully. 
Oct 9 00:59:52.100713 containerd[1486]: time="2024-10-09T00:59:52.100641547Z" level=info msg="shim disconnected" id=76727b040b5052f04f18cd7890fc661d4ae81418e45d838c00eb4f4c27864f22 namespace=k8s.io Oct 9 00:59:52.100713 containerd[1486]: time="2024-10-09T00:59:52.100701010Z" level=warning msg="cleaning up after shim disconnected" id=76727b040b5052f04f18cd7890fc661d4ae81418e45d838c00eb4f4c27864f22 namespace=k8s.io Oct 9 00:59:52.100713 containerd[1486]: time="2024-10-09T00:59:52.100708815Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:59:52.361302 kubelet[2657]: E1009 00:59:52.361267 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:52.535824 kubelet[2657]: E1009 00:59:52.535791 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:52.537670 containerd[1486]: time="2024-10-09T00:59:52.537621639Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 9 00:59:52.552309 containerd[1486]: time="2024-10-09T00:59:52.552153103Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"55a0cc7f877809ece260b91217d117c1e27a68e91ac65026b59cd5f9a17f302b\"" Oct 9 00:59:52.553550 containerd[1486]: time="2024-10-09T00:59:52.552749575Z" level=info msg="StartContainer for \"55a0cc7f877809ece260b91217d117c1e27a68e91ac65026b59cd5f9a17f302b\"" Oct 9 00:59:52.581764 systemd[1]: Started cri-containerd-55a0cc7f877809ece260b91217d117c1e27a68e91ac65026b59cd5f9a17f302b.scope - libcontainer container 55a0cc7f877809ece260b91217d117c1e27a68e91ac65026b59cd5f9a17f302b. Oct 9 00:59:52.611919 containerd[1486]: time="2024-10-09T00:59:52.611790870Z" level=info msg="StartContainer for \"55a0cc7f877809ece260b91217d117c1e27a68e91ac65026b59cd5f9a17f302b\" returns successfully" Oct 9 00:59:52.619327 systemd[1]: cri-containerd-55a0cc7f877809ece260b91217d117c1e27a68e91ac65026b59cd5f9a17f302b.scope: Deactivated successfully. 
Oct 9 00:59:52.649005 containerd[1486]: time="2024-10-09T00:59:52.648930997Z" level=info msg="shim disconnected" id=55a0cc7f877809ece260b91217d117c1e27a68e91ac65026b59cd5f9a17f302b namespace=k8s.io Oct 9 00:59:52.649005 containerd[1486]: time="2024-10-09T00:59:52.649001993Z" level=warning msg="cleaning up after shim disconnected" id=55a0cc7f877809ece260b91217d117c1e27a68e91ac65026b59cd5f9a17f302b namespace=k8s.io Oct 9 00:59:52.649005 containerd[1486]: time="2024-10-09T00:59:52.649010829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:59:53.411553 kubelet[2657]: E1009 00:59:53.411506 2657 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 9 00:59:53.538102 kubelet[2657]: E1009 00:59:53.538077 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:53.539816 containerd[1486]: time="2024-10-09T00:59:53.539783663Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 9 00:59:53.557076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2012231824.mount: Deactivated successfully. Oct 9 00:59:53.560838 containerd[1486]: time="2024-10-09T00:59:53.560790967Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c9afcbaee5f1bf5b86d7b8a9b55a6f78c3f30b0ab93a8aab685e62fb09067a12\"" Oct 9 00:59:53.561235 containerd[1486]: time="2024-10-09T00:59:53.561215430Z" level=info msg="StartContainer for \"c9afcbaee5f1bf5b86d7b8a9b55a6f78c3f30b0ab93a8aab685e62fb09067a12\"" Oct 9 00:59:53.597654 systemd[1]: Started cri-containerd-c9afcbaee5f1bf5b86d7b8a9b55a6f78c3f30b0ab93a8aab685e62fb09067a12.scope - libcontainer container c9afcbaee5f1bf5b86d7b8a9b55a6f78c3f30b0ab93a8aab685e62fb09067a12. Oct 9 00:59:53.626852 containerd[1486]: time="2024-10-09T00:59:53.626537388Z" level=info msg="StartContainer for \"c9afcbaee5f1bf5b86d7b8a9b55a6f78c3f30b0ab93a8aab685e62fb09067a12\" returns successfully" Oct 9 00:59:53.628046 systemd[1]: cri-containerd-c9afcbaee5f1bf5b86d7b8a9b55a6f78c3f30b0ab93a8aab685e62fb09067a12.scope: Deactivated successfully. Oct 9 00:59:53.682445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9afcbaee5f1bf5b86d7b8a9b55a6f78c3f30b0ab93a8aab685e62fb09067a12-rootfs.mount: Deactivated successfully. 
Oct 9 00:59:53.811303 containerd[1486]: time="2024-10-09T00:59:53.811235649Z" level=info msg="shim disconnected" id=c9afcbaee5f1bf5b86d7b8a9b55a6f78c3f30b0ab93a8aab685e62fb09067a12 namespace=k8s.io Oct 9 00:59:53.811303 containerd[1486]: time="2024-10-09T00:59:53.811296636Z" level=warning msg="cleaning up after shim disconnected" id=c9afcbaee5f1bf5b86d7b8a9b55a6f78c3f30b0ab93a8aab685e62fb09067a12 namespace=k8s.io Oct 9 00:59:53.811303 containerd[1486]: time="2024-10-09T00:59:53.811309360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:59:54.542077 kubelet[2657]: E1009 00:59:54.542047 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:54.544869 containerd[1486]: time="2024-10-09T00:59:54.544827940Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 9 00:59:54.576234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164161432.mount: Deactivated successfully. Oct 9 00:59:54.577402 containerd[1486]: time="2024-10-09T00:59:54.577338133Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ef6846b26f03abba04a3b79bcc4eff7576d7e8bd22695a5a24d7390cfd941722\"" Oct 9 00:59:54.578004 containerd[1486]: time="2024-10-09T00:59:54.577937710Z" level=info msg="StartContainer for \"ef6846b26f03abba04a3b79bcc4eff7576d7e8bd22695a5a24d7390cfd941722\"" Oct 9 00:59:54.607675 systemd[1]: Started cri-containerd-ef6846b26f03abba04a3b79bcc4eff7576d7e8bd22695a5a24d7390cfd941722.scope - libcontainer container ef6846b26f03abba04a3b79bcc4eff7576d7e8bd22695a5a24d7390cfd941722. Oct 9 00:59:54.632557 systemd[1]: cri-containerd-ef6846b26f03abba04a3b79bcc4eff7576d7e8bd22695a5a24d7390cfd941722.scope: Deactivated successfully. Oct 9 00:59:54.635319 containerd[1486]: time="2024-10-09T00:59:54.635283341Z" level=info msg="StartContainer for \"ef6846b26f03abba04a3b79bcc4eff7576d7e8bd22695a5a24d7390cfd941722\" returns successfully" Oct 9 00:59:54.661585 containerd[1486]: time="2024-10-09T00:59:54.661499591Z" level=info msg="shim disconnected" id=ef6846b26f03abba04a3b79bcc4eff7576d7e8bd22695a5a24d7390cfd941722 namespace=k8s.io Oct 9 00:59:54.661585 containerd[1486]: time="2024-10-09T00:59:54.661578953Z" level=warning msg="cleaning up after shim disconnected" id=ef6846b26f03abba04a3b79bcc4eff7576d7e8bd22695a5a24d7390cfd941722 namespace=k8s.io Oct 9 00:59:54.661585 containerd[1486]: time="2024-10-09T00:59:54.661591127Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:59:54.674037 containerd[1486]: time="2024-10-09T00:59:54.673976063Z" level=warning msg="cleanup warnings time=\"2024-10-09T00:59:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 00:59:54.682172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef6846b26f03abba04a3b79bcc4eff7576d7e8bd22695a5a24d7390cfd941722-rootfs.mount: Deactivated successfully. 
Oct 9 00:59:55.548341 kubelet[2657]: E1009 00:59:55.548311 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:55.551174 containerd[1486]: time="2024-10-09T00:59:55.551131904Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 9 00:59:55.568253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709534141.mount: Deactivated successfully. Oct 9 00:59:55.570830 containerd[1486]: time="2024-10-09T00:59:55.570794617Z" level=info msg="CreateContainer within sandbox \"9b71abdd068361f625dc8242b1c8e6e3662346847605badcb7de53f1a5e82b18\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f6c4f921ec953d4168a745c9e8ea033c6b3fe264b8564fa27e5b8d84b657e95a\"" Oct 9 00:59:55.571345 containerd[1486]: time="2024-10-09T00:59:55.571309080Z" level=info msg="StartContainer for \"f6c4f921ec953d4168a745c9e8ea033c6b3fe264b8564fa27e5b8d84b657e95a\"" Oct 9 00:59:55.602758 systemd[1]: Started cri-containerd-f6c4f921ec953d4168a745c9e8ea033c6b3fe264b8564fa27e5b8d84b657e95a.scope - libcontainer container f6c4f921ec953d4168a745c9e8ea033c6b3fe264b8564fa27e5b8d84b657e95a. Oct 9 00:59:55.633729 containerd[1486]: time="2024-10-09T00:59:55.633690696Z" level=info msg="StartContainer for \"f6c4f921ec953d4168a745c9e8ea033c6b3fe264b8564fa27e5b8d84b657e95a\" returns successfully" Oct 9 00:59:56.057546 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Oct 9 00:59:56.552313 kubelet[2657]: E1009 00:59:56.552188 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:57.863248 kubelet[2657]: E1009 00:59:57.863206 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:59.157903 systemd-networkd[1409]: lxc_health: Link UP Oct 9 00:59:59.162890 systemd-networkd[1409]: lxc_health: Gained carrier Oct 9 00:59:59.862962 kubelet[2657]: E1009 00:59:59.862878 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:59:59.888660 kubelet[2657]: I1009 00:59:59.888261 2657 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-g6l9j" podStartSLOduration=8.888220606 podStartE2EDuration="8.888220606s" podCreationTimestamp="2024-10-09 00:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:59:56.634908676 +0000 UTC m=+88.374158018" watchObservedRunningTime="2024-10-09 00:59:59.888220606 +0000 UTC m=+91.627469948" Oct 9 01:00:00.469749 systemd-networkd[1409]: lxc_health: Gained IPv6LL Oct 9 01:00:00.559105 kubelet[2657]: E1009 01:00:00.559068 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:00:01.560827 kubelet[2657]: E1009 01:00:01.560782 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:00:04.486243 sshd[4485]: pam_unix(sshd:session): session closed for user core Oct 9 01:00:04.490126 systemd[1]: sshd@26-10.0.0.51:22-10.0.0.1:56606.service: Deactivated successfully. Oct 9 01:00:04.492105 systemd[1]: session-27.scope: Deactivated successfully. Oct 9 01:00:04.492796 systemd-logind[1474]: Session 27 logged out. Waiting for processes to exit. Oct 9 01:00:04.493669 systemd-logind[1474]: Removed session 27.