Mar 25 01:26:25.996258 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025 Mar 25 01:26:25.996317 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:26:25.996329 kernel: BIOS-provided physical RAM map: Mar 25 01:26:25.996338 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Mar 25 01:26:25.996346 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Mar 25 01:26:25.996359 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Mar 25 01:26:25.996370 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Mar 25 01:26:25.996379 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Mar 25 01:26:25.996388 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Mar 25 01:26:25.996397 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Mar 25 01:26:25.996412 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Mar 25 01:26:25.996421 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Mar 25 01:26:25.996430 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Mar 25 01:26:25.996439 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Mar 25 01:26:25.996454 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Mar 25 01:26:25.996464 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Mar 25 01:26:25.996474 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Mar 
25 01:26:25.996483 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 25 01:26:25.996493 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 25 01:26:25.996506 kernel: NX (Execute Disable) protection: active Mar 25 01:26:25.996516 kernel: APIC: Static calls initialized Mar 25 01:26:25.996525 kernel: e820: update [mem 0x9a187018-0x9a190c57] usable ==> usable Mar 25 01:26:25.996536 kernel: e820: update [mem 0x9a187018-0x9a190c57] usable ==> usable Mar 25 01:26:25.996545 kernel: e820: update [mem 0x9a14a018-0x9a186e57] usable ==> usable Mar 25 01:26:25.996555 kernel: e820: update [mem 0x9a14a018-0x9a186e57] usable ==> usable Mar 25 01:26:25.996565 kernel: extended physical RAM map: Mar 25 01:26:25.996575 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Mar 25 01:26:25.996585 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Mar 25 01:26:25.996608 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Mar 25 01:26:25.996619 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Mar 25 01:26:25.996633 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a14a017] usable Mar 25 01:26:25.996643 kernel: reserve setup_data: [mem 0x000000009a14a018-0x000000009a186e57] usable Mar 25 01:26:25.996653 kernel: reserve setup_data: [mem 0x000000009a186e58-0x000000009a187017] usable Mar 25 01:26:25.996663 kernel: reserve setup_data: [mem 0x000000009a187018-0x000000009a190c57] usable Mar 25 01:26:25.996673 kernel: reserve setup_data: [mem 0x000000009a190c58-0x000000009b8ecfff] usable Mar 25 01:26:25.996683 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Mar 25 01:26:25.996692 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Mar 25 01:26:25.996702 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Mar 25 01:26:25.996712 
kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Mar 25 01:26:25.996722 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Mar 25 01:26:25.996740 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Mar 25 01:26:25.996750 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Mar 25 01:26:25.996760 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Mar 25 01:26:25.996771 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Mar 25 01:26:25.996781 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 25 01:26:25.996791 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 25 01:26:25.996805 kernel: efi: EFI v2.7 by EDK II Mar 25 01:26:25.996816 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1f7018 RNG=0x9bb73018 Mar 25 01:26:25.996826 kernel: random: crng init done Mar 25 01:26:25.996836 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Mar 25 01:26:25.996847 kernel: secureboot: Secure boot enabled Mar 25 01:26:25.996857 kernel: SMBIOS 2.8 present. 
Mar 25 01:26:25.996867 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Mar 25 01:26:25.996878 kernel: Hypervisor detected: KVM Mar 25 01:26:25.996887 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 25 01:26:25.996897 kernel: kvm-clock: using sched offset of 4551211121 cycles Mar 25 01:26:25.996908 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 25 01:26:25.996922 kernel: tsc: Detected 2794.748 MHz processor Mar 25 01:26:25.996932 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 25 01:26:25.996942 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 25 01:26:25.996951 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Mar 25 01:26:25.996959 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 25 01:26:25.996968 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 25 01:26:25.996993 kernel: Using GB pages for direct mapping Mar 25 01:26:25.997004 kernel: ACPI: Early table checksum verification disabled Mar 25 01:26:25.997013 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Mar 25 01:26:25.997024 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 25 01:26:25.997032 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:26:25.997039 kernel: ACPI: DSDT 0x000000009BB7A000 002225 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:26:25.997047 kernel: ACPI: FACS 0x000000009BBDD000 000040 Mar 25 01:26:25.997054 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:26:25.997062 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:26:25.997069 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:26:25.997078 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Mar 25 01:26:25.997088 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 25 01:26:25.997100 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Mar 25 01:26:25.997109 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c224] Mar 25 01:26:25.997119 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Mar 25 01:26:25.997129 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Mar 25 01:26:25.997141 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Mar 25 01:26:25.997153 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Mar 25 01:26:25.997166 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Mar 25 01:26:25.997179 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Mar 25 01:26:25.997192 kernel: No NUMA configuration found Mar 25 01:26:25.997210 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Mar 25 01:26:25.997223 kernel: NODE_DATA(0) allocated [mem 0x9bf59000-0x9bf5efff] Mar 25 01:26:25.997236 kernel: Zone ranges: Mar 25 01:26:25.997249 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 25 01:26:25.997262 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Mar 25 01:26:25.997274 kernel: Normal empty Mar 25 01:26:25.997287 kernel: Movable zone start for each node Mar 25 01:26:25.997300 kernel: Early memory node ranges Mar 25 01:26:25.997313 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Mar 25 01:26:25.997326 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Mar 25 01:26:25.997342 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Mar 25 01:26:25.997352 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Mar 25 01:26:25.997362 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Mar 25 01:26:25.997373 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Mar 25 
01:26:25.997384 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 25 01:26:25.997394 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Mar 25 01:26:25.997405 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 25 01:26:25.997415 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Mar 25 01:26:25.997426 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Mar 25 01:26:25.997440 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Mar 25 01:26:25.997450 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 25 01:26:25.997461 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 25 01:26:25.997471 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 25 01:26:25.997481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 25 01:26:25.997492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 25 01:26:25.997502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 25 01:26:25.997513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 25 01:26:25.997524 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 25 01:26:25.997538 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 25 01:26:25.997548 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 25 01:26:25.997559 kernel: TSC deadline timer available Mar 25 01:26:25.997569 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 25 01:26:25.997580 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 25 01:26:25.997590 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 25 01:26:25.997623 kernel: kvm-guest: setup PV sched yield Mar 25 01:26:25.997637 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Mar 25 01:26:25.997647 kernel: Booting paravirtualized kernel on KVM Mar 25 01:26:25.997658 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 
1910969940391419 ns Mar 25 01:26:25.997667 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 25 01:26:25.997675 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Mar 25 01:26:25.997685 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Mar 25 01:26:25.997693 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 25 01:26:25.997701 kernel: kvm-guest: PV spinlocks enabled Mar 25 01:26:25.997709 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 25 01:26:25.997718 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:26:25.997726 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 25 01:26:25.997734 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 25 01:26:25.997742 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 25 01:26:25.997753 kernel: Fallback order for Node 0: 0 Mar 25 01:26:25.997761 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 625927 Mar 25 01:26:25.997768 kernel: Policy zone: DMA32 Mar 25 01:26:25.997777 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 25 01:26:25.997785 kernel: Memory: 2368304K/2552216K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 183656K reserved, 0K cma-reserved) Mar 25 01:26:25.997795 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 25 01:26:25.997803 kernel: ftrace: allocating 37985 entries in 149 pages Mar 25 01:26:25.997811 kernel: ftrace: allocated 149 pages with 4 groups Mar 25 01:26:25.997819 kernel: Dynamic Preempt: voluntary Mar 25 01:26:25.997827 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 25 01:26:25.997835 kernel: rcu: RCU event tracing is enabled. Mar 25 01:26:25.997844 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 25 01:26:25.997852 kernel: Trampoline variant of Tasks RCU enabled. Mar 25 01:26:25.997860 kernel: Rude variant of Tasks RCU enabled. Mar 25 01:26:25.997870 kernel: Tracing variant of Tasks RCU enabled. Mar 25 01:26:25.997878 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 25 01:26:25.997886 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 25 01:26:25.997894 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 25 01:26:25.997902 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 25 01:26:25.997910 kernel: Console: colour dummy device 80x25 Mar 25 01:26:25.997917 kernel: printk: console [ttyS0] enabled Mar 25 01:26:25.997925 kernel: ACPI: Core revision 20230628 Mar 25 01:26:25.997934 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 25 01:26:25.997944 kernel: APIC: Switch to symmetric I/O mode setup Mar 25 01:26:25.997952 kernel: x2apic enabled Mar 25 01:26:25.997960 kernel: APIC: Switched APIC routing to: physical x2apic Mar 25 01:26:25.997970 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 25 01:26:25.998004 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 25 01:26:25.998012 kernel: kvm-guest: setup PV IPIs Mar 25 01:26:25.998020 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 25 01:26:25.998028 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 25 01:26:25.998036 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Mar 25 01:26:25.998048 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 25 01:26:25.998056 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 25 01:26:25.998064 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 25 01:26:25.998072 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 25 01:26:25.998079 kernel: Spectre V2 : Mitigation: Retpolines Mar 25 01:26:25.998088 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 25 01:26:25.998096 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 25 01:26:25.998104 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Mar 25 01:26:25.998112 kernel: RETBleed: Mitigation: untrained return thunk Mar 25 01:26:25.998122 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 25 01:26:25.998132 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 25 01:26:25.998140 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 25 01:26:25.998151 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 25 01:26:25.998161 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 25 01:26:25.998172 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 25 01:26:25.998183 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 25 01:26:25.998194 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 25 01:26:25.998205 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 25 01:26:25.998213 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Mar 25 01:26:25.998221 kernel: Freeing SMP alternatives memory: 32K Mar 25 01:26:25.998229 kernel: pid_max: default: 32768 minimum: 301 Mar 25 01:26:25.998237 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 25 01:26:25.998245 kernel: landlock: Up and running. Mar 25 01:26:25.998253 kernel: SELinux: Initializing. Mar 25 01:26:25.998260 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 25 01:26:25.998268 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 25 01:26:25.998279 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Mar 25 01:26:25.998287 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 25 01:26:25.998295 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 25 01:26:25.998303 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 25 01:26:25.998311 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 25 01:26:25.998319 kernel: ... version: 0 Mar 25 01:26:25.998326 kernel: ... bit width: 48 Mar 25 01:26:25.998334 kernel: ... generic registers: 6 Mar 25 01:26:25.998342 kernel: ... value mask: 0000ffffffffffff Mar 25 01:26:25.998352 kernel: ... max period: 00007fffffffffff Mar 25 01:26:25.998360 kernel: ... fixed-purpose events: 0 Mar 25 01:26:25.998368 kernel: ... event mask: 000000000000003f Mar 25 01:26:25.998375 kernel: signal: max sigframe size: 1776 Mar 25 01:26:25.998383 kernel: rcu: Hierarchical SRCU implementation. Mar 25 01:26:25.998391 kernel: rcu: Max phase no-delay instances is 400. Mar 25 01:26:25.998399 kernel: smp: Bringing up secondary CPUs ... Mar 25 01:26:25.998407 kernel: smpboot: x86: Booting SMP configuration: Mar 25 01:26:25.998415 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 25 01:26:25.998424 kernel: smp: Brought up 1 node, 4 CPUs Mar 25 01:26:25.998432 kernel: smpboot: Max logical packages: 1 Mar 25 01:26:25.998440 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Mar 25 01:26:25.998448 kernel: devtmpfs: initialized Mar 25 01:26:25.998455 kernel: x86/mm: Memory block size: 128MB Mar 25 01:26:25.998463 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Mar 25 01:26:25.998471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Mar 25 01:26:25.998479 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 25 01:26:25.998487 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 25 01:26:25.998497 kernel: pinctrl core: initialized pinctrl subsystem Mar 25 01:26:25.998505 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 25 01:26:25.998513 kernel: audit: initializing netlink subsys (disabled) Mar 25 01:26:25.998521 kernel: audit: type=2000 audit(1742865985.019:1): state=initialized audit_enabled=0 res=1 Mar 25 01:26:25.998529 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 25 01:26:25.998542 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 25 01:26:25.998550 kernel: cpuidle: using governor menu Mar 25 01:26:25.998558 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 25 01:26:25.998566 kernel: dca service started, version 1.12.1 Mar 25 01:26:25.998576 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Mar 25 01:26:25.998584 kernel: PCI: Using configuration type 1 for base access Mar 25 01:26:25.998592 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 25 01:26:25.998609 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 25 01:26:25.998617 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 25 01:26:25.998626 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 25 01:26:25.998633 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 25 01:26:25.998641 kernel: ACPI: Added _OSI(Module Device) Mar 25 01:26:25.998649 kernel: ACPI: Added _OSI(Processor Device) Mar 25 01:26:25.998659 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 25 01:26:25.998667 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 25 01:26:25.998675 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 25 01:26:25.998682 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 25 01:26:25.998690 kernel: ACPI: Interpreter enabled Mar 25 01:26:25.998698 kernel: ACPI: PM: (supports S0 S5) Mar 25 01:26:25.998705 kernel: ACPI: Using IOAPIC for interrupt routing Mar 25 01:26:25.998713 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 25 01:26:25.998721 kernel: PCI: Using E820 reservations for host bridge windows Mar 25 01:26:25.998731 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 25 01:26:25.998739 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 25 01:26:25.998926 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 25 01:26:25.999083 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 25 01:26:25.999210 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 25 01:26:25.999220 kernel: PCI host bridge to bus 0000:00 Mar 25 01:26:25.999382 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 25 01:26:25.999504 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 25 01:26:25.999630 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Mar 25 01:26:25.999745 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Mar 25 01:26:25.999857 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Mar 25 01:26:25.999971 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Mar 25 01:26:26.000113 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 25 01:26:26.000273 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 25 01:26:26.000425 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 25 01:26:26.000552 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 25 01:26:26.000700 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 25 01:26:26.000843 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 25 01:26:26.000997 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 25 01:26:26.001141 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 25 01:26:26.001295 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 25 01:26:26.001424 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 25 01:26:26.001547 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 25 01:26:26.001682 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Mar 25 01:26:26.001813 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 25 01:26:26.001939 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 25 01:26:26.002088 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 25 01:26:26.002221 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Mar 25 01:26:26.002353 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 25 01:26:26.002477 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 25 01:26:26.002609 kernel: pci 0000:00:04.0: reg 0x14: [mem 
0xc1041000-0xc1041fff] Mar 25 01:26:26.002735 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Mar 25 01:26:26.002859 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 25 01:26:26.003007 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 25 01:26:26.003145 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 25 01:26:26.003277 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 25 01:26:26.003401 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 25 01:26:26.003523 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 25 01:26:26.003664 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 25 01:26:26.003788 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 25 01:26:26.003802 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 25 01:26:26.003811 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 25 01:26:26.003819 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 25 01:26:26.003827 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 25 01:26:26.003835 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 25 01:26:26.003844 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 25 01:26:26.003854 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 25 01:26:26.003865 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 25 01:26:26.003876 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 25 01:26:26.003890 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 25 01:26:26.003904 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 25 01:26:26.003918 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 25 01:26:26.003928 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 25 01:26:26.003939 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 
Mar 25 01:26:26.003950 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 25 01:26:26.003961 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 25 01:26:26.003971 kernel: iommu: Default domain type: Translated Mar 25 01:26:26.003998 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 25 01:26:26.004014 kernel: efivars: Registered efivars operations Mar 25 01:26:26.004025 kernel: PCI: Using ACPI for IRQ routing Mar 25 01:26:26.004035 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 25 01:26:26.004046 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Mar 25 01:26:26.004056 kernel: e820: reserve RAM buffer [mem 0x9a14a018-0x9bffffff] Mar 25 01:26:26.004066 kernel: e820: reserve RAM buffer [mem 0x9a187018-0x9bffffff] Mar 25 01:26:26.004077 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Mar 25 01:26:26.004087 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Mar 25 01:26:26.004249 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 25 01:26:26.004380 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 25 01:26:26.004504 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 25 01:26:26.004514 kernel: vgaarb: loaded Mar 25 01:26:26.004522 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 25 01:26:26.004530 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 25 01:26:26.004539 kernel: clocksource: Switched to clocksource kvm-clock Mar 25 01:26:26.004547 kernel: VFS: Disk quotas dquot_6.6.0 Mar 25 01:26:26.004555 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 25 01:26:26.004566 kernel: pnp: PnP ACPI init Mar 25 01:26:26.004723 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Mar 25 01:26:26.004735 kernel: pnp: PnP ACPI: found 6 devices Mar 25 01:26:26.004744 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 25 
01:26:26.004752 kernel: NET: Registered PF_INET protocol family
Mar 25 01:26:26.004760 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 25 01:26:26.004768 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 25 01:26:26.004776 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 25 01:26:26.004784 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 25 01:26:26.004796 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 25 01:26:26.004804 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 25 01:26:26.004812 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:26:26.004820 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:26:26.004828 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 25 01:26:26.004835 kernel: NET: Registered PF_XDP protocol family
Mar 25 01:26:26.004962 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 25 01:26:26.005104 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 25 01:26:26.005228 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 25 01:26:26.005341 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 25 01:26:26.005453 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 25 01:26:26.005564 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 25 01:26:26.005687 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 25 01:26:26.005799 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 25 01:26:26.005810 kernel: PCI: CLS 0 bytes, default 64
Mar 25 01:26:26.005818 kernel: Initialise system trusted keyrings
Mar 25 01:26:26.005830 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 25 01:26:26.005838 kernel: Key type asymmetric registered
Mar 25 01:26:26.005846 kernel: Asymmetric key parser 'x509' registered
Mar 25 01:26:26.005854 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 25 01:26:26.005862 kernel: io scheduler mq-deadline registered
Mar 25 01:26:26.005870 kernel: io scheduler kyber registered
Mar 25 01:26:26.005878 kernel: io scheduler bfq registered
Mar 25 01:26:26.005886 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 25 01:26:26.005911 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 25 01:26:26.005925 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 25 01:26:26.005933 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 25 01:26:26.005942 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 25 01:26:26.005950 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 25 01:26:26.005958 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 25 01:26:26.005967 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 25 01:26:26.005975 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 25 01:26:26.006000 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 25 01:26:26.006164 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 25 01:26:26.006310 kernel: rtc_cmos 00:04: registered as rtc0
Mar 25 01:26:26.006451 kernel: rtc_cmos 00:04: setting system clock to 2025-03-25T01:26:25 UTC (1742865985)
Mar 25 01:26:26.006571 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 25 01:26:26.006583 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 25 01:26:26.006591 kernel: efifb: probing for efifb
Mar 25 01:26:26.006612 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 25 01:26:26.006620 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 25 01:26:26.006629 kernel: efifb: scrolling: redraw
Mar 25 01:26:26.006642 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 25 01:26:26.006650 kernel: Console: switching to colour frame buffer device 160x50
Mar 25 01:26:26.006659 kernel: fb0: EFI VGA frame buffer device
Mar 25 01:26:26.006667 kernel: pstore: Using crash dump compression: deflate
Mar 25 01:26:26.006675 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 25 01:26:26.006683 kernel: NET: Registered PF_INET6 protocol family
Mar 25 01:26:26.006691 kernel: Segment Routing with IPv6
Mar 25 01:26:26.006700 kernel: In-situ OAM (IOAM) with IPv6
Mar 25 01:26:26.006708 kernel: NET: Registered PF_PACKET protocol family
Mar 25 01:26:26.006719 kernel: Key type dns_resolver registered
Mar 25 01:26:26.006730 kernel: IPI shorthand broadcast: enabled
Mar 25 01:26:26.006739 kernel: sched_clock: Marking stable (672002432, 142497790)->(832643602, -18143380)
Mar 25 01:26:26.006747 kernel: registered taskstats version 1
Mar 25 01:26:26.006755 kernel: Loading compiled-in X.509 certificates
Mar 25 01:26:26.006764 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386'
Mar 25 01:26:26.006775 kernel: Key type .fscrypt registered
Mar 25 01:26:26.006783 kernel: Key type fscrypt-provisioning registered
Mar 25 01:26:26.006791 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 25 01:26:26.006800 kernel: ima: Allocated hash algorithm: sha1
Mar 25 01:26:26.006808 kernel: ima: No architecture policies found
Mar 25 01:26:26.006816 kernel: clk: Disabling unused clocks
Mar 25 01:26:26.006824 kernel: Freeing unused kernel image (initmem) memory: 43592K
Mar 25 01:26:26.006833 kernel: Write protecting the kernel read-only data: 40960k
Mar 25 01:26:26.006841 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 25 01:26:26.006852 kernel: Run /init as init process
Mar 25 01:26:26.006860 kernel: with arguments:
Mar 25 01:26:26.006868 kernel: /init
Mar 25 01:26:26.006876 kernel: with environment:
Mar 25 01:26:26.006885 kernel: HOME=/
Mar 25 01:26:26.006893 kernel: TERM=linux
Mar 25 01:26:26.006901 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 25 01:26:26.006910 systemd[1]: Successfully made /usr/ read-only.
Mar 25 01:26:26.006924 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:26:26.006934 systemd[1]: Detected virtualization kvm.
Mar 25 01:26:26.006943 systemd[1]: Detected architecture x86-64.
Mar 25 01:26:26.006951 systemd[1]: Running in initrd.
Mar 25 01:26:26.006960 systemd[1]: No hostname configured, using default hostname.
Mar 25 01:26:26.006969 systemd[1]: Hostname set to .
Mar 25 01:26:26.007001 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:26:26.007010 systemd[1]: Queued start job for default target initrd.target.
Mar 25 01:26:26.007023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:26:26.007032 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:26:26.007041 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 01:26:26.007050 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:26:26.007059 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 01:26:26.007069 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 01:26:26.007082 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 01:26:26.007091 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 01:26:26.007100 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:26:26.007110 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:26:26.007121 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:26:26.007130 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:26:26.007141 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:26:26.007150 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:26:26.007158 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:26:26.007170 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:26:26.007179 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 01:26:26.007188 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 01:26:26.007196 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:26:26.007205 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:26:26.007214 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:26:26.007223 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:26:26.007232 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 01:26:26.007243 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:26:26.007252 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 01:26:26.007261 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 01:26:26.007270 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:26:26.007279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:26:26.007288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:26:26.007296 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 01:26:26.007306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:26:26.007317 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 01:26:26.007326 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:26:26.007365 systemd-journald[191]: Collecting audit messages is disabled.
Mar 25 01:26:26.007389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:26:26.007399 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:26:26.007408 systemd-journald[191]: Journal started
Mar 25 01:26:26.007428 systemd-journald[191]: Runtime Journal (/run/log/journal/6d4de1926fa94656afa3c415b351d285) is 6M, max 47.9M, 41.9M free.
Mar 25 01:26:25.994281 systemd-modules-load[192]: Inserted module 'overlay'
Mar 25 01:26:26.010003 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:26:26.012299 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:26:26.018711 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:26:26.027999 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:26:26.035825 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 01:26:26.034659 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:26:26.038005 kernel: Bridge firewalling registered
Mar 25 01:26:26.038018 systemd-modules-load[192]: Inserted module 'br_netfilter'
Mar 25 01:26:26.039834 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:26:26.041533 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:26:26.044659 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:26:26.049958 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:26:26.051357 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 01:26:26.067132 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:26:26.070379 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:26:26.080488 dracut-cmdline[229]: dracut-dracut-053
Mar 25 01:26:26.087019 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:26:26.127781 systemd-resolved[232]: Positive Trust Anchors:
Mar 25 01:26:26.127797 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:26:26.127835 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:26:26.131050 systemd-resolved[232]: Defaulting to hostname 'linux'.
Mar 25 01:26:26.132344 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:26:26.138960 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:26:26.191022 kernel: SCSI subsystem initialized
Mar 25 01:26:26.201012 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 01:26:26.213014 kernel: iscsi: registered transport (tcp)
Mar 25 01:26:26.249037 kernel: iscsi: registered transport (qla4xxx)
Mar 25 01:26:26.249124 kernel: QLogic iSCSI HBA Driver
Mar 25 01:26:26.300865 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:26:26.303520 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 01:26:26.356034 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 01:26:26.356104 kernel: device-mapper: uevent: version 1.0.3
Mar 25 01:26:26.356116 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 01:26:26.402035 kernel: raid6: avx2x4 gen() 28715 MB/s
Mar 25 01:26:26.419015 kernel: raid6: avx2x2 gen() 29778 MB/s
Mar 25 01:26:26.436192 kernel: raid6: avx2x1 gen() 24694 MB/s
Mar 25 01:26:26.436224 kernel: raid6: using algorithm avx2x2 gen() 29778 MB/s
Mar 25 01:26:26.464108 kernel: raid6: .... xor() 18977 MB/s, rmw enabled
Mar 25 01:26:26.464157 kernel: raid6: using avx2x2 recovery algorithm
Mar 25 01:26:26.485031 kernel: xor: automatically using best checksumming function avx
Mar 25 01:26:26.641021 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 01:26:26.653239 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:26:26.656074 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:26:26.691287 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 25 01:26:26.700530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:26:26.704325 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 01:26:26.732904 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation
Mar 25 01:26:26.770072 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:26:26.773082 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:26:26.865923 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:26:26.871236 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 01:26:26.895803 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:26:26.899200 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:26:26.902442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:26:26.903913 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:26:26.908817 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 01:26:26.914104 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 25 01:26:26.951786 kernel: libata version 3.00 loaded.
Mar 25 01:26:26.951803 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 25 01:26:26.951952 kernel: cryptd: max_cpu_qlen set to 1000
Mar 25 01:26:26.951964 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 25 01:26:26.951976 kernel: GPT:9289727 != 19775487
Mar 25 01:26:26.952006 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 25 01:26:26.952017 kernel: GPT:9289727 != 19775487
Mar 25 01:26:26.952027 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 25 01:26:26.952037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 01:26:26.947592 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:26:26.955061 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 25 01:26:26.955433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:26:26.957620 kernel: ahci 0000:00:1f.2: version 3.0
Mar 25 01:26:26.983189 kernel: AES CTR mode by8 optimization enabled
Mar 25 01:26:26.983211 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 25 01:26:26.983226 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 25 01:26:26.983443 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 25 01:26:26.983650 kernel: scsi host0: ahci
Mar 25 01:26:26.983860 kernel: scsi host1: ahci
Mar 25 01:26:26.984129 kernel: scsi host2: ahci
Mar 25 01:26:26.984332 kernel: scsi host3: ahci
Mar 25 01:26:26.984526 kernel: scsi host4: ahci
Mar 25 01:26:26.984737 kernel: scsi host5: ahci
Mar 25 01:26:26.984939 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 25 01:26:26.984956 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 25 01:26:26.984971 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 25 01:26:26.985014 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (472)
Mar 25 01:26:26.985033 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 25 01:26:26.985051 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 25 01:26:26.985069 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (474)
Mar 25 01:26:26.985088 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 25 01:26:26.957253 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:26:26.961418 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:26:26.962765 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:26:26.962947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:26:26.964760 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:26:26.967755 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:26:26.984698 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:26:27.216927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 25 01:26:27.217642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:26:27.236450 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 25 01:26:27.314808 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 25 01:26:27.314833 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 25 01:26:27.314843 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 25 01:26:27.316478 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 25 01:26:27.316491 kernel: ata3.00: applying bridge limits
Mar 25 01:26:27.318061 kernel: ata3.00: configured for UDMA/100
Mar 25 01:26:27.319132 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 25 01:26:27.327118 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 25 01:26:27.336960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 25 01:26:27.382176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 25 01:26:27.386480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 25 01:26:27.387674 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:26:27.421407 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:26:27.521041 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 25 01:26:27.521119 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 25 01:26:27.522016 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 25 01:26:27.550014 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 25 01:26:27.563627 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 25 01:26:27.563640 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 25 01:26:27.876101 disk-uuid[557]: Primary Header is updated.
Mar 25 01:26:27.876101 disk-uuid[557]: Secondary Entries is updated.
Mar 25 01:26:27.876101 disk-uuid[557]: Secondary Header is updated.
Mar 25 01:26:27.890088 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 01:26:27.897038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 01:26:28.896872 disk-uuid[582]: The operation has completed successfully.
Mar 25 01:26:28.898728 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 01:26:28.932509 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 25 01:26:28.932686 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 25 01:26:28.974267 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 25 01:26:29.008525 sh[593]: Success
Mar 25 01:26:29.023044 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 25 01:26:29.059589 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 25 01:26:29.132263 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 25 01:26:29.146068 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 25 01:26:29.153710 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2
Mar 25 01:26:29.153744 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:26:29.153755 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 25 01:26:29.156180 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 25 01:26:29.156207 kernel: BTRFS info (device dm-0): using free space tree
Mar 25 01:26:29.161716 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 25 01:26:29.164534 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 25 01:26:29.165530 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 25 01:26:29.189726 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 25 01:26:29.209398 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:26:29.209453 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:26:29.209464 kernel: BTRFS info (device vda6): using free space tree
Mar 25 01:26:29.213024 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 25 01:26:29.218010 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:26:29.298082 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:26:29.315372 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:26:29.357401 systemd-networkd[769]: lo: Link UP
Mar 25 01:26:29.357413 systemd-networkd[769]: lo: Gained carrier
Mar 25 01:26:29.359114 systemd-networkd[769]: Enumeration completed
Mar 25 01:26:29.359440 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:26:29.359444 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:26:29.372907 systemd-networkd[769]: eth0: Link UP
Mar 25 01:26:29.372912 systemd-networkd[769]: eth0: Gained carrier
Mar 25 01:26:29.372920 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:26:29.373346 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:26:29.376141 systemd[1]: Reached target network.target - Network.
Mar 25 01:26:29.401024 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 25 01:26:29.682069 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 25 01:26:29.685845 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 25 01:26:29.871928 ignition[774]: Ignition 2.20.0
Mar 25 01:26:29.871943 ignition[774]: Stage: fetch-offline
Mar 25 01:26:29.872009 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:26:29.872026 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:26:29.872141 ignition[774]: parsed url from cmdline: ""
Mar 25 01:26:29.872146 ignition[774]: no config URL provided
Mar 25 01:26:29.872151 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:26:29.872161 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:26:29.872194 ignition[774]: op(1): [started] loading QEMU firmware config module
Mar 25 01:26:29.872200 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 25 01:26:29.880164 ignition[774]: op(1): [finished] loading QEMU firmware config module
Mar 25 01:26:29.922477 ignition[774]: parsing config with SHA512: 923169dd4ad89686402327a18d77ab5c4f214e7e8a3c9c9cbc9daffdb443c7023e717a8d1c8393a84c4ba139429c97038b256bce9af81aaae3a028a80761510f
Mar 25 01:26:29.955422 unknown[774]: fetched base config from "system"
Mar 25 01:26:29.955442 unknown[774]: fetched user config from "qemu"
Mar 25 01:26:29.964257 ignition[774]: fetch-offline: fetch-offline passed
Mar 25 01:26:29.964407 ignition[774]: Ignition finished successfully
Mar 25 01:26:29.967476 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:26:29.970330 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 25 01:26:29.973135 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 25 01:26:30.020411 ignition[785]: Ignition 2.20.0
Mar 25 01:26:30.020427 ignition[785]: Stage: kargs
Mar 25 01:26:30.020629 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:26:30.020642 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:26:30.025134 ignition[785]: kargs: kargs passed
Mar 25 01:26:30.025203 ignition[785]: Ignition finished successfully
Mar 25 01:26:30.029162 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 25 01:26:30.032257 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 25 01:26:30.062789 ignition[793]: Ignition 2.20.0
Mar 25 01:26:30.062804 ignition[793]: Stage: disks
Mar 25 01:26:30.063702 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:26:30.063717 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:26:30.064890 ignition[793]: disks: disks passed
Mar 25 01:26:30.064952 ignition[793]: Ignition finished successfully
Mar 25 01:26:30.070623 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 25 01:26:30.073216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 25 01:26:30.073662 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 25 01:26:30.076243 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:26:30.076650 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:26:30.077285 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:26:30.078801 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 25 01:26:30.110041 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 25 01:26:30.116580 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 25 01:26:30.120738 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 25 01:26:30.272013 kernel: EXT4-fs (vda9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none.
Mar 25 01:26:30.272456 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 25 01:26:30.274038 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:26:30.276584 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:26:30.278575 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 25 01:26:30.280004 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 25 01:26:30.280055 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 25 01:26:30.280083 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:26:30.292718 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (812)
Mar 25 01:26:30.292745 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:26:30.292760 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:26:30.286035 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 25 01:26:30.296303 kernel: BTRFS info (device vda6): using free space tree
Mar 25 01:26:30.296326 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 25 01:26:30.288735 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 25 01:26:30.297612 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:26:30.326715 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Mar 25 01:26:30.332526 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Mar 25 01:26:30.337176 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Mar 25 01:26:30.341408 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 25 01:26:30.426056 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 25 01:26:30.427830 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 25 01:26:30.430407 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 25 01:26:30.451459 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 25 01:26:30.453036 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:26:30.468186 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 25 01:26:30.492458 ignition[926]: INFO : Ignition 2.20.0
Mar 25 01:26:30.492458 ignition[926]: INFO : Stage: mount
Mar 25 01:26:30.494335 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:26:30.494335 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:26:30.494335 ignition[926]: INFO : mount: mount passed
Mar 25 01:26:30.494335 ignition[926]: INFO : Ignition finished successfully
Mar 25 01:26:30.500356 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 25 01:26:30.501669 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 25 01:26:31.274928 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:26:31.306025 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (940)
Mar 25 01:26:31.306088 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:26:31.307879 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:26:31.307903 kernel: BTRFS info (device vda6): using free space tree
Mar 25 01:26:31.312017 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 25 01:26:31.313212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:26:31.345797 ignition[957]: INFO : Ignition 2.20.0
Mar 25 01:26:31.347101 ignition[957]: INFO : Stage: files
Mar 25 01:26:31.347101 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:26:31.347101 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:26:31.350906 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 25 01:26:31.350906 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 25 01:26:31.350906 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 25 01:26:31.355224 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 25 01:26:31.355224 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 25 01:26:31.355224 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 25 01:26:31.355224 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 01:26:31.355224 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 25 01:26:31.352711 unknown[957]: wrote ssh authorized keys file for user: core
Mar 25 01:26:31.391140 systemd-networkd[769]: eth0: Gained IPv6LL
Mar 25 01:26:31.396970 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 25 01:26:31.496869 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 01:26:31.496869 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:26:31.501131 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 25 01:26:31.843522 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 25 01:26:32.037256 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 25 01:26:32.037256 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 25 01:26:32.041064 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 25 01:26:32.041064 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:26:32.044691 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:26:32.046463 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:26:32.048305 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:26:32.050102 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:26:32.051956 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:26:32.053952 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:26:32.055933 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:26:32.057771 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:26:32.060387 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:26:32.062935 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:26:32.065174 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 25 01:26:32.346252 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 25 01:26:32.830478 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:26:32.830478 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 25 01:26:33.545628 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:26:33.548486 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:26:33.548486 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 25 01:26:33.548486 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 25 01:26:33.548486 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 25 01:26:33.548486 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 25 01:26:33.548486 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 25 01:26:33.548486 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 25 01:26:33.572277 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 25 01:26:33.579179 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 25 01:26:33.581098 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 25 01:26:33.581098 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 25 01:26:33.581098 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 25 01:26:33.581098 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:26:33.581098 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:26:33.581098 ignition[957]: INFO : files: files passed
Mar 25 01:26:33.581098 ignition[957]: INFO : Ignition finished successfully
Mar 25
01:26:33.593939 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 25 01:26:33.597593 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 25 01:26:33.599007 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 25 01:26:33.615510 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 25 01:26:33.615651 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 25 01:26:33.619606 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Mar 25 01:26:33.621247 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:26:33.621247 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:26:33.624620 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:26:33.623614 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 01:26:33.626288 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 25 01:26:33.629867 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 25 01:26:33.700069 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 25 01:26:33.701463 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 25 01:26:33.705470 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 25 01:26:33.708213 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 25 01:26:33.710789 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 25 01:26:33.713664 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
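The Ignition `files` stage above logs every operation as `op(N)` (the ids are hex) with paired `[started]`/`[finished]` markers. A minimal sketch for sanity-checking such a capture — pairing the markers to find ops that started but never finished. This parser is illustrative only, not part of Ignition or Flatcar tooling:

```python
import re

# Matches e.g.: ignition[957]: INFO : files: ... op(3): [started] writing file "..."
OP_RE = re.compile(r"op\(([0-9a-f]+)\): \[(started|finished)\]")

def unfinished_ops(lines):
    """Return op ids that logged [started] but never [finished]."""
    open_ops = set()
    for line in lines:
        m = OP_RE.search(line)
        if not m:
            continue
        op_id, state = m.groups()
        if state == "started":
            open_ops.add(op_id)
        else:
            open_ops.discard(op_id)
    return open_ops

log = [
    'ignition[957]: INFO : files: createFiles: op(3): [started] writing file "/sysroot/opt/helm.tar.gz"',
    'ignition[957]: INFO : files: createFiles: op(3): GET result: OK',
    'ignition[957]: INFO : files: createFiles: op(3): [finished] writing file "/sysroot/opt/helm.tar.gz"',
    'ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"',
]
print(unfinished_ops(log))  # → {'c'}
```

In the boot above every op that starts also finishes, which is what the closing `files: files passed` / `Ignition finished successfully` lines confirm.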
Mar 25 01:26:33.756291 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:26:33.760865 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 25 01:26:33.792438 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:26:33.795318 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:26:33.798116 systemd[1]: Stopped target timers.target - Timer Units.
Mar 25 01:26:33.800151 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 25 01:26:33.801355 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:26:33.804400 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 25 01:26:33.806846 systemd[1]: Stopped target basic.target - Basic System.
Mar 25 01:26:33.809026 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 25 01:26:33.811598 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:26:33.814357 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 25 01:26:33.816992 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 25 01:26:33.819314 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:26:33.822252 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 25 01:26:33.824684 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 25 01:26:33.827124 systemd[1]: Stopped target swap.target - Swaps.
Mar 25 01:26:33.829071 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 25 01:26:33.830317 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:26:33.833036 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:26:33.835480 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:26:33.837893 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 25 01:26:33.838907 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:26:33.841630 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 25 01:26:33.842729 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:26:33.845230 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 25 01:26:33.846317 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:26:33.848778 systemd[1]: Stopped target paths.target - Path Units.
Mar 25 01:26:33.850592 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 25 01:26:33.854048 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:26:33.856921 systemd[1]: Stopped target slices.target - Slice Units.
Mar 25 01:26:33.858864 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 25 01:26:33.860892 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 25 01:26:33.861818 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:26:33.863925 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 25 01:26:33.864872 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:26:33.867051 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 25 01:26:33.868275 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:26:33.870874 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 25 01:26:33.871894 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 25 01:26:33.874905 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 25 01:26:33.877069 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 25 01:26:33.878176 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:26:33.887702 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 25 01:26:33.889512 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 25 01:26:33.890639 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:26:33.893274 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 25 01:26:33.894342 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:26:33.913425 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 25 01:26:33.913537 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 25 01:26:33.928816 ignition[1013]: INFO : Ignition 2.20.0
Mar 25 01:26:33.928816 ignition[1013]: INFO : Stage: umount
Mar 25 01:26:33.928816 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:26:33.928816 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:26:33.928816 ignition[1013]: INFO : umount: umount passed
Mar 25 01:26:33.928816 ignition[1013]: INFO : Ignition finished successfully
Mar 25 01:26:33.929015 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 25 01:26:33.930606 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 25 01:26:33.936287 systemd[1]: Stopped target network.target - Network.
Mar 25 01:26:33.937643 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 25 01:26:33.937735 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 25 01:26:33.939667 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 25 01:26:33.939720 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 25 01:26:33.941602 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 25 01:26:33.941675 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 25 01:26:33.944017 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 25 01:26:33.944093 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 25 01:26:33.948893 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 25 01:26:33.950929 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 25 01:26:33.954227 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 25 01:26:33.955007 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 25 01:26:33.955145 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 25 01:26:33.958232 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 25 01:26:33.958340 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 25 01:26:33.960766 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 25 01:26:33.960897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 25 01:26:33.964928 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 25 01:26:33.965230 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 25 01:26:33.965355 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 25 01:26:33.969295 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 25 01:26:33.971057 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 25 01:26:33.971135 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:26:33.974001 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 25 01:26:33.984019 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 25 01:26:33.984094 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:26:33.986637 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:26:33.986690 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:26:33.988640 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 25 01:26:33.988690 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:26:33.990755 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 25 01:26:33.990819 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:26:33.993153 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:26:33.998566 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:26:33.998640 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:26:34.018209 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 25 01:26:34.018350 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 25 01:26:34.020560 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 25 01:26:34.020727 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:26:34.023271 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 25 01:26:34.023338 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:26:34.024726 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 25 01:26:34.024765 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:26:34.027436 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 25 01:26:34.027490 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:26:34.029957 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 25 01:26:34.030017 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:26:34.032411 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:26:34.032464 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:26:34.035295 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 25 01:26:34.036498 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 25 01:26:34.036551 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:26:34.038780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:26:34.038827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:26:34.041904 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 25 01:26:34.041968 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:26:34.056082 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 25 01:26:34.056209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 25 01:26:34.058715 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 25 01:26:34.061656 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 25 01:26:34.086956 systemd[1]: Switching root.
Mar 25 01:26:34.120025 systemd-journald[191]: Journal stopped
Mar 25 01:26:36.821473 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
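Before `Switching root.` above, systemd tears down the initrd in dependency order, logging `Stopped <unit>`, `Stopped target <target>`, and `Closed <socket>` for each unit. A small sketch (not Flatcar tooling, just an illustrative parser) that recovers the teardown order from such a capture:

```python
import re

# systemd logs teardown as "Stopped <unit> - <description>." (and "Closed" for sockets).
TEARDOWN_RE = re.compile(r"systemd\[1\]: (?:Stopped(?: target)?|Closed) (\S+)")

def teardown_order(lines):
    """Unit/target names in the order systemd reports stopping them."""
    return [m.group(1) for line in lines if (m := TEARDOWN_RE.search(line))]

log = [
    "systemd[1]: Stopped target timers.target - Timer Units.",
    "systemd[1]: dracut-pre-pivot.service: Deactivated successfully.",
    "systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.",
    "systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.",
]
print(teardown_order(log))
# → ['timers.target', 'dracut-pre-pivot.service', 'iscsid.socket']
```

Note the `... Deactivated successfully.` lines are the state change itself and carry no unit description, so the parser keys off the `Stopped`/`Closed` result messages instead.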
Mar 25 01:26:36.821554 kernel: SELinux: policy capability network_peer_controls=1
Mar 25 01:26:36.821573 kernel: SELinux: policy capability open_perms=1
Mar 25 01:26:36.821590 kernel: SELinux: policy capability extended_socket_class=1
Mar 25 01:26:36.821605 kernel: SELinux: policy capability always_check_network=0
Mar 25 01:26:36.821621 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 25 01:26:36.821651 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 25 01:26:36.821673 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 25 01:26:36.821689 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 25 01:26:36.821704 kernel: audit: type=1403 audit(1742865995.906:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 25 01:26:36.821728 systemd[1]: Successfully loaded SELinux policy in 73.138ms.
Mar 25 01:26:36.821762 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.408ms.
Mar 25 01:26:36.821782 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:26:36.821799 systemd[1]: Detected virtualization kvm.
Mar 25 01:26:36.821816 systemd[1]: Detected architecture x86-64.
Mar 25 01:26:36.821837 systemd[1]: Detected first boot.
Mar 25 01:26:36.821854 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:26:36.821871 zram_generator::config[1061]: No configuration found.
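On policy load the kernel prints one `SELinux: policy capability <name>=<0|1>` line per capability, as seen above. A hedged sketch (an illustrative parser, not part of any SELinux tooling) that collects those lines into a name-to-bool map:

```python
import re

CAP_RE = re.compile(r"SELinux:\s+policy capability (\w+)=(\d)")

def policy_capabilities(lines):
    """Map capability name -> bool from 'SELinux: policy capability X=N' lines."""
    return {m.group(1): m.group(2) == "1"
            for line in lines if (m := CAP_RE.search(line))}

log = [
    "kernel: SELinux: policy capability network_peer_controls=1",
    "kernel: SELinux: policy capability open_perms=1",
    "kernel: SELinux: policy capability always_check_network=0",
]
caps = policy_capabilities(log)
print(caps["always_check_network"])  # → False
```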
Mar 25 01:26:36.821888 kernel: Guest personality initialized and is inactive
Mar 25 01:26:36.821904 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 25 01:26:36.821920 kernel: Initialized host personality
Mar 25 01:26:36.821936 kernel: NET: Registered PF_VSOCK protocol family
Mar 25 01:26:36.821953 systemd[1]: Populated /etc with preset unit settings.
Mar 25 01:26:36.821970 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 25 01:26:36.823610 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 25 01:26:36.823640 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 25 01:26:36.823657 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:26:36.823674 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 25 01:26:36.823690 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 25 01:26:36.823708 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 25 01:26:36.823723 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 25 01:26:36.823738 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 25 01:26:36.823767 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 25 01:26:36.823782 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 25 01:26:36.823797 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 25 01:26:36.823812 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:26:36.823828 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:26:36.823845 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 25 01:26:36.823861 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 25 01:26:36.823879 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 25 01:26:36.823896 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:26:36.823917 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 25 01:26:36.823933 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:26:36.823951 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 25 01:26:36.823967 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 25 01:26:36.824001 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:26:36.824020 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 25 01:26:36.824036 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:26:36.824052 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:26:36.824073 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:26:36.824092 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:26:36.824109 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 25 01:26:36.824126 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 25 01:26:36.824142 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 25 01:26:36.824159 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:26:36.824175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:26:36.824192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:26:36.824208 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 25 01:26:36.824232 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 25 01:26:36.824248 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 25 01:26:36.824265 systemd[1]: Mounting media.mount - External Media Directory...
Mar 25 01:26:36.824281 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:26:36.824298 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 25 01:26:36.824323 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 25 01:26:36.824340 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 25 01:26:36.824359 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 25 01:26:36.824380 systemd[1]: Reached target machines.target - Containers.
Mar 25 01:26:36.824396 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 25 01:26:36.824413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:26:36.824430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:26:36.824446 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 25 01:26:36.824466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:26:36.824482 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:26:36.824499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:26:36.824516 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 25 01:26:36.824537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:26:36.824555 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 25 01:26:36.824572 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 25 01:26:36.824590 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 25 01:26:36.824609 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 25 01:26:36.824628 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 25 01:26:36.824648 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:26:36.824665 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:26:36.824686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:26:36.824702 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 25 01:26:36.824719 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 25 01:26:36.824778 systemd-journald[1125]: Collecting audit messages is disabled.
Mar 25 01:26:36.824810 kernel: loop: module loaded
Mar 25 01:26:36.824827 kernel: fuse: init (API version 7.39)
Mar 25 01:26:36.824843 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 25 01:26:36.824860 systemd-journald[1125]: Journal started
Mar 25 01:26:36.824895 systemd-journald[1125]: Runtime Journal (/run/log/journal/6d4de1926fa94656afa3c415b351d285) is 6M, max 47.9M, 41.9M free.
Mar 25 01:26:36.565976 systemd[1]: Queued start job for default target multi-user.target.
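At `Journal started` above, journald reports runtime journal usage as `Runtime Journal (<path>) is 6M, max 47.9M, 41.9M free.` (and the same shape for the persistent `System Journal` after the flush). A minimal, purely illustrative parser for pulling those figures out of a capture like this one:

```python
import re

SIZE_RE = re.compile(
    r"(Runtime|System) Journal \((?P<path>[^)]+)\) is (?P<used>[\d.]+)M, "
    r"max (?P<max>[\d.]+)M, (?P<free>[\d.]+)M free"
)

def journal_usage(line):
    """Extract the usage figures journald prints at 'Journal started'."""
    m = SIZE_RE.search(line)
    if not m:
        return None
    return {k: float(m.group(k)) for k in ("used", "max", "free")}

line = ("systemd-journald[1125]: Runtime Journal "
        "(/run/log/journal/6d4de1926fa94656afa3c415b351d285) "
        "is 6M, max 47.9M, 41.9M free.")
print(journal_usage(line))  # → {'used': 6.0, 'max': 47.9, 'free': 41.9}
```

The `Queued start job` entry that follows carries an earlier timestamp than its neighbors: journald replays messages buffered before it started, so timestamps in this stretch of the log are not monotonic.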
Mar 25 01:26:36.578341 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 25 01:26:36.578871 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 25 01:26:36.579353 systemd[1]: systemd-journald.service: Consumed 1.068s CPU time. Mar 25 01:26:36.832301 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 01:26:36.832371 systemd[1]: verity-setup.service: Deactivated successfully. Mar 25 01:26:36.832393 systemd[1]: Stopped verity-setup.service. Mar 25 01:26:36.837013 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:26:36.845211 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 01:26:36.846700 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 25 01:26:36.869998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 25 01:26:36.871462 systemd[1]: Mounted media.mount - External Media Directory. Mar 25 01:26:36.872777 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 25 01:26:36.874207 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 25 01:26:36.875632 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 25 01:26:36.877179 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:26:36.879168 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 25 01:26:36.879397 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 25 01:26:36.881229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:26:36.881550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:26:36.885479 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 25 01:26:36.886088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:26:36.921970 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 25 01:26:36.922628 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 25 01:26:36.923958 kernel: ACPI: bus type drm_connector registered Mar 25 01:26:36.924453 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:26:36.924744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:26:36.926759 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:26:36.927072 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:26:36.928737 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 01:26:36.930509 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 25 01:26:36.932398 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 25 01:26:36.934297 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 25 01:26:36.948915 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 25 01:26:36.973875 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 25 01:26:36.976758 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 25 01:26:36.978069 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 25 01:26:36.978105 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 01:26:36.980144 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 25 01:26:36.982484 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Mar 25 01:26:36.998521 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 25 01:26:37.000091 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:26:37.002298 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 25 01:26:37.004708 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 25 01:26:37.006608 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:26:37.008900 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 25 01:26:37.011323 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:26:37.015774 systemd-journald[1125]: Time spent on flushing to /var/log/journal/6d4de1926fa94656afa3c415b351d285 is 20.832ms for 1026 entries. Mar 25 01:26:37.015774 systemd-journald[1125]: System Journal (/var/log/journal/6d4de1926fa94656afa3c415b351d285) is 8M, max 195.6M, 187.6M free. Mar 25 01:26:37.583675 systemd-journald[1125]: Received client request to flush runtime journal. Mar 25 01:26:37.583748 kernel: loop0: detected capacity change from 0 to 151640 Mar 25 01:26:37.583786 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 25 01:26:37.583812 kernel: loop1: detected capacity change from 0 to 210664 Mar 25 01:26:37.583836 kernel: loop2: detected capacity change from 0 to 109808 Mar 25 01:26:37.583865 kernel: loop3: detected capacity change from 0 to 151640 Mar 25 01:26:37.583886 kernel: loop4: detected capacity change from 0 to 210664 Mar 25 01:26:37.583911 kernel: loop5: detected capacity change from 0 to 109808 Mar 25 01:26:37.012676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 25 01:26:37.015846 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 25 01:26:37.021323 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:26:37.023558 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 25 01:26:37.025466 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 25 01:26:37.029148 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 25 01:26:37.080106 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 25 01:26:37.081903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:26:37.107659 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 25 01:26:37.432578 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 25 01:26:37.434216 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 25 01:26:37.436792 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 25 01:26:37.516886 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 25 01:26:37.517647 (sd-merge)[1194]: Merged extensions into '/usr'.
Mar 25 01:26:37.522056 systemd[1]: Reload requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 25 01:26:37.522072 systemd[1]: Reloading...
Mar 25 01:26:37.602075 zram_generator::config[1223]: No configuration found.
Mar 25 01:26:37.739781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:26:37.777230 ldconfig[1161]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 25 01:26:37.812358 systemd[1]: Reloading finished in 289 ms.
Mar 25 01:26:37.834655 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 25 01:26:37.836381 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 25 01:26:37.837893 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 25 01:26:37.839601 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 25 01:26:37.859616 systemd[1]: Starting ensure-sysext.service...
Mar 25 01:26:37.862001 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 25 01:26:37.898612 systemd[1]: Reload requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
Mar 25 01:26:37.898631 systemd[1]: Reloading...
Mar 25 01:26:37.978047 zram_generator::config[1297]: No configuration found.
Mar 25 01:26:38.122217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:26:38.197927 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 25 01:26:38.198331 systemd[1]: Reloading finished in 299 ms.
Mar 25 01:26:38.214134 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 25 01:26:38.216114 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 25 01:26:38.234697 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 25 01:26:38.245229 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:26:38.248020 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:26:38.251454 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:26:38.251678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:26:38.260261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:26:38.264139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:26:38.267903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:26:38.269235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:26:38.269404 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:26:38.269553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:26:38.276303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:26:38.276687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:26:38.278118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:26:38.278483 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:26:38.280433 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Mar 25 01:26:38.280454 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Mar 25 01:26:38.286421 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 01:26:38.286714 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 25 01:26:38.287101 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:26:38.287390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:26:38.289203 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 25 01:26:38.289502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:26:38.289556 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Mar 25 01:26:38.291017 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Mar 25 01:26:38.296077 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:26:38.296093 systemd-tmpfiles[1337]: Skipping /boot
Mar 25 01:26:38.296796 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:26:38.297113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:26:38.298791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:26:38.301292 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:26:38.308756 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:26:38.313106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:26:38.313925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:26:38.314206 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:26:38.316877 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:26:38.316895 systemd-tmpfiles[1337]: Skipping /boot
Mar 25 01:26:38.318064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:26:38.319638 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:26:38.321824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:26:38.322137 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:26:38.324016 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:26:38.324332 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:26:38.326066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:26:38.326335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:26:38.328544 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:26:38.328807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:26:38.334063 systemd[1]: Finished ensure-sysext.service.
Mar 25 01:26:38.342392 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:26:38.342462 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:26:38.349259 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:26:38.352421 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:26:38.355312 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 25 01:26:38.371019 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 25 01:26:38.373005 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Mar 25 01:26:38.375712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:26:38.381293 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 25 01:26:38.386506 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 25 01:26:38.399121 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 25 01:26:38.410705 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 25 01:26:38.415511 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:26:38.418301 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 25 01:26:38.424874 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:26:38.429613 augenrules[1396]: No rules
Mar 25 01:26:38.430233 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 25 01:26:38.433673 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:26:38.434038 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:26:38.463971 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 25 01:26:38.471672 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 25 01:26:38.473396 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 01:26:38.479865 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 25 01:26:38.495043 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1386)
Mar 25 01:26:38.564600 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 25 01:26:38.601682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 25 01:26:38.604786 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 25 01:26:38.634809 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 25 01:26:38.635449 systemd[1]: Reached target time-set.target - System Time Set.
Mar 25 01:26:38.639635 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 25 01:26:38.652001 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 25 01:26:38.657053 kernel: ACPI: button: Power Button [PWRF]
Mar 25 01:26:38.657956 systemd-networkd[1394]: lo: Link UP
Mar 25 01:26:38.657970 systemd-networkd[1394]: lo: Gained carrier
Mar 25 01:26:38.659741 systemd-networkd[1394]: Enumeration completed
Mar 25 01:26:38.659851 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:26:38.660884 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:26:38.660897 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:26:38.661289 systemd-resolved[1361]: Positive Trust Anchors:
Mar 25 01:26:38.661303 systemd-resolved[1361]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:26:38.661334 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:26:38.662385 systemd-networkd[1394]: eth0: Link UP
Mar 25 01:26:38.662589 systemd-networkd[1394]: eth0: Gained carrier
Mar 25 01:26:38.662708 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:26:38.662779 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 25 01:26:38.666183 systemd-resolved[1361]: Defaulting to hostname 'linux'.
Mar 25 01:26:38.668101 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 25 01:26:38.669404 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:26:38.674864 systemd[1]: Reached target network.target - Network.
Mar 25 01:26:38.676548 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:26:38.687101 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 25 01:26:38.694393 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 25 01:26:38.694685 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 25 01:26:38.694923 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 25 01:26:38.701573 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 25 01:26:38.701780 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 25 01:26:38.695139 systemd-timesyncd[1363]: Network configuration changed, trying to establish connection.
Mar 25 01:26:38.698386 systemd-timesyncd[1363]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 25 01:26:38.698473 systemd-timesyncd[1363]: Initial clock synchronization to Tue 2025-03-25 01:26:38.904379 UTC.
Mar 25 01:26:38.715200 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 25 01:26:38.749428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:26:38.796020 kernel: mousedev: PS/2 mouse device common for all mice
Mar 25 01:26:38.800290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:26:38.800615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:26:38.803706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:26:38.808426 kernel: kvm_amd: TSC scaling supported
Mar 25 01:26:38.808464 kernel: kvm_amd: Nested Virtualization enabled
Mar 25 01:26:38.808478 kernel: kvm_amd: Nested Paging enabled
Mar 25 01:26:38.809448 kernel: kvm_amd: LBR virtualization supported
Mar 25 01:26:38.809470 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 25 01:26:38.809487 kernel: kvm_amd: Virtual GIF supported
Mar 25 01:26:38.830026 kernel: EDAC MC: Ver: 3.0.0
Mar 25 01:26:38.865870 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 25 01:26:38.869381 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 25 01:26:38.882522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:26:38.890378 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:26:38.931764 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 25 01:26:38.933757 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:26:38.935167 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:26:38.936668 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 25 01:26:38.938277 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 25 01:26:38.940196 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 25 01:26:38.941716 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 25 01:26:38.943260 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 25 01:26:38.945125 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 25 01:26:38.945164 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:26:38.946336 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:26:38.948559 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 25 01:26:38.952245 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 25 01:26:38.957125 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 25 01:26:38.958901 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 25 01:26:38.960536 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 25 01:26:38.965417 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 25 01:26:38.967711 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 25 01:26:38.971082 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 25 01:26:38.973150 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 25 01:26:38.974627 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:26:38.975878 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:26:38.977129 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:26:38.977169 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:26:38.978680 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 25 01:26:38.980400 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:26:38.981378 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 25 01:26:38.983777 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 25 01:26:38.994433 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 25 01:26:38.995820 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 25 01:26:38.997257 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 25 01:26:38.999795 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 25 01:26:39.001145 jq[1458]: false
Mar 25 01:26:39.002673 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 25 01:26:39.008318 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 25 01:26:39.018220 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 25 01:26:39.020800 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 25 01:26:39.021566 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 25 01:26:39.022455 systemd[1]: Starting update-engine.service - Update Engine...
Mar 25 01:26:39.025991 extend-filesystems[1459]: Found loop3
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found loop4
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found loop5
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found sr0
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found vda
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found vda1
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found vda2
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found vda3
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found usr
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found vda4
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found vda6
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found vda7
Mar 25 01:26:39.031050 extend-filesystems[1459]: Found vda9
Mar 25 01:26:39.031050 extend-filesystems[1459]: Checking size of /dev/vda9
Mar 25 01:26:39.029236 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 25 01:26:39.035637 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 25 01:26:39.044197 extend-filesystems[1459]: Resized partition /dev/vda9
Mar 25 01:26:39.048717 jq[1473]: true
Mar 25 01:26:39.045235 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 25 01:26:39.045092 dbus-daemon[1457]: [system] SELinux support is enabled
Mar 25 01:26:39.049727 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 25 01:26:39.051462 extend-filesystems[1479]: resize2fs 1.47.2 (1-Jan-2025)
Mar 25 01:26:39.060274 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 25 01:26:39.050027 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 25 01:26:39.050392 systemd[1]: motdgen.service: Deactivated successfully.
Mar 25 01:26:39.050638 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 25 01:26:39.057463 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 25 01:26:39.057801 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 25 01:26:39.061484 update_engine[1471]: I20250325 01:26:39.061394 1471 main.cc:92] Flatcar Update Engine starting
Mar 25 01:26:39.068884 update_engine[1471]: I20250325 01:26:39.062908 1471 update_check_scheduler.cc:74] Next update check in 2m49s
Mar 25 01:26:39.087046 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1406)
Mar 25 01:26:39.088428 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 25 01:26:39.095545 jq[1483]: true
Mar 25 01:26:39.096043 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 25 01:26:39.116057 tar[1482]: linux-amd64/helm
Mar 25 01:26:39.118132 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 25 01:26:39.118186 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 25 01:26:39.120035 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 25 01:26:39.120054 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 25 01:26:39.133863 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 25 01:26:39.133863 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 25 01:26:39.133863 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 25 01:26:39.142113 systemd[1]: Started update-engine.service - Update Engine.
Mar 25 01:26:39.145199 extend-filesystems[1459]: Resized filesystem in /dev/vda9
Mar 25 01:26:39.146186 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 25 01:26:39.147803 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 25 01:26:39.148218 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 25 01:26:39.162201 systemd-logind[1470]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 25 01:26:39.162239 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 25 01:26:39.163111 systemd-logind[1470]: New seat seat0.
Mar 25 01:26:39.169224 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 25 01:26:39.218509 bash[1513]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 01:26:39.221099 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 25 01:26:39.229593 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 25 01:26:39.244435 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 25 01:26:39.251252 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 25 01:26:39.306160 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 25 01:26:39.313416 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 25 01:26:39.330337 systemd[1]: issuegen.service: Deactivated successfully.
Mar 25 01:26:39.330700 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 25 01:26:39.335322 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 25 01:26:39.412064 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 25 01:26:39.415926 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 25 01:26:39.420145 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 25 01:26:39.422873 systemd[1]: Reached target getty.target - Login Prompts.
Mar 25 01:26:39.560038 containerd[1485]: time="2025-03-25T01:26:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 25 01:26:39.562252 containerd[1485]: time="2025-03-25T01:26:39.562198836Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 25 01:26:39.652519 containerd[1485]: time="2025-03-25T01:26:39.652425200Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.093µs"
Mar 25 01:26:39.652519 containerd[1485]: time="2025-03-25T01:26:39.652491476Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 25 01:26:39.652519 containerd[1485]: time="2025-03-25T01:26:39.652519789Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 25 01:26:39.652871 containerd[1485]: time="2025-03-25T01:26:39.652764061Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 25 01:26:39.652871 containerd[1485]: time="2025-03-25T01:26:39.652792702Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 25 01:26:39.652871 containerd[1485]: time="2025-03-25T01:26:39.652834026Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:26:39.652953 containerd[1485]: time="2025-03-25T01:26:39.652930083Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:26:39.652953 containerd[1485]: time="2025-03-25T01:26:39.652947482Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:26:39.653317 containerd[1485]: time="2025-03-25T01:26:39.653280977Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:26:39.653317 containerd[1485]: time="2025-03-25T01:26:39.653300811Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:26:39.653317 containerd[1485]: time="2025-03-25T01:26:39.653311674Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:26:39.653396 containerd[1485]: time="2025-03-25T01:26:39.653320081Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 25 01:26:39.653522 containerd[1485]: time="2025-03-25T01:26:39.653455047Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 25 01:26:39.653781 containerd[1485]: time="2025-03-25T01:26:39.653748422Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:26:39.653807 containerd[1485]: time="2025-03-25T01:26:39.653789621Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:26:39.653807 containerd[1485]: time="2025-03-25T01:26:39.653800895Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 25 01:26:39.653860 containerd[1485]: time="2025-03-25T01:26:39.653841530Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 25 01:26:39.654119 containerd[1485]: time="2025-03-25T01:26:39.654095965Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 25 01:26:39.654198 containerd[1485]: time="2025-03-25T01:26:39.654175662Z" level=info msg="metadata content store policy set" policy=shared
Mar 25 01:26:39.660178 containerd[1485]: time="2025-03-25T01:26:39.660014074Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 25 01:26:39.660178 containerd[1485]: time="2025-03-25T01:26:39.660100389Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 25 01:26:39.660178 containerd[1485]: time="2025-03-25T01:26:39.660126349Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 25 01:26:39.660178 containerd[1485]: time="2025-03-25T01:26:39.660147725Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 25 01:26:39.660178 containerd[1485]: time="2025-03-25T01:26:39.660165329Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 25 01:26:39.660178 containerd[1485]: time="2025-03-25T01:26:39.660181002Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 25 01:26:39.660410 containerd[1485]: time="2025-03-25T01:26:39.660200836Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 25 01:26:39.660410 containerd[1485]: time="2025-03-25T01:26:39.660240371Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 25 01:26:39.660410 containerd[1485]: time="2025-03-25T01:26:39.660259116Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 25 01:26:39.660410 containerd[1485]: time="2025-03-25T01:26:39.660274983Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 25 01:26:39.660410 containerd[1485]: time="2025-03-25T01:26:39.660289669Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 25 01:26:39.660410 containerd[1485]: time="2025-03-25T01:26:39.660312936Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 25 01:26:39.660524 containerd[1485]: time="2025-03-25T01:26:39.660510519Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 25 01:26:39.660677 containerd[1485]: time="2025-03-25T01:26:39.660544844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 25 01:26:39.660677 containerd[1485]: time="2025-03-25T01:26:39.660571780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 25 01:26:39.660677 containerd[1485]: time="2025-03-25T01:26:39.660589508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 25 01:26:39.660677 containerd[1485]: time="2025-03-25T01:26:39.660603936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 25 01:26:39.660677 containerd[1485]: time="2025-03-25T01:26:39.660617625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 25 01:26:39.660677 containerd[1485]: time="2025-03-25T01:26:39.660635404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 25 01:26:39.660677 containerd[1485]: time="2025-03-25T01:26:39.660653081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 25 01:26:39.660677 containerd[1485]: time="2025-03-25T01:26:39.660667139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 25 01:26:39.660677 containerd[1485]: time="2025-03-25T01:26:39.660678906Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 25 01:26:39.661068 containerd[1485]: time="2025-03-25T01:26:39.660691033Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 25 01:26:39.661068 containerd[1485]: time="2025-03-25T01:26:39.660780966Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 25 01:26:39.661068 containerd[1485]: time="2025-03-25T01:26:39.660803473Z" level=info msg="Start snapshots syncer"
Mar 25 01:26:39.661068 containerd[1485]: time="2025-03-25T01:26:39.660841455Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 25 01:26:39.661258 containerd[1485]: time="2025-03-25T01:26:39.661204712Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 01:26:39.661438 containerd[1485]: time="2025-03-25T01:26:39.661272211Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 01:26:39.661438 containerd[1485]: time="2025-03-25T01:26:39.661346328Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 01:26:39.661487 containerd[1485]: time="2025-03-25T01:26:39.661472683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 01:26:39.661698 containerd[1485]: time="2025-03-25T01:26:39.661502136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 01:26:39.661698 containerd[1485]: time="2025-03-25T01:26:39.661548012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 01:26:39.661698 containerd[1485]: time="2025-03-25T01:26:39.661566037Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 01:26:39.661698 containerd[1485]: time="2025-03-25T01:26:39.661585502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 01:26:39.661698 containerd[1485]: time="2025-03-25T01:26:39.661598163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 01:26:39.661698 containerd[1485]: time="2025-03-25T01:26:39.661610403Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 01:26:39.661698 containerd[1485]: time="2025-03-25T01:26:39.661642487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 01:26:39.661698 containerd[1485]: time="2025-03-25T01:26:39.661682722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 01:26:39.661698 containerd[1485]: time="2025-03-25T01:26:39.661701682Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 01:26:39.663322 containerd[1485]: time="2025-03-25T01:26:39.663248929Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:26:39.663322 containerd[1485]: time="2025-03-25T01:26:39.663279678Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:26:39.663322 containerd[1485]: time="2025-03-25T01:26:39.663292431Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:26:39.663322 containerd[1485]: time="2025-03-25T01:26:39.663303767Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:26:39.663322 containerd[1485]: time="2025-03-25T01:26:39.663312275Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 01:26:39.663322 containerd[1485]: time="2025-03-25T01:26:39.663322830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:26:39.663484 containerd[1485]: time="2025-03-25T01:26:39.663335779Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:26:39.663484 containerd[1485]: time="2025-03-25T01:26:39.663357011Z" level=info msg="runtime interface created" Mar 25 01:26:39.663484 containerd[1485]: time="2025-03-25T01:26:39.663362889Z" level=info msg="created NRI interface" Mar 25 01:26:39.663484 containerd[1485]: time="2025-03-25T01:26:39.663383577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:26:39.663484 containerd[1485]: time="2025-03-25T01:26:39.663401171Z" level=info msg="Connect containerd service" Mar 25 01:26:39.663484 containerd[1485]: time="2025-03-25T01:26:39.663438188Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:26:39.666973 
containerd[1485]: time="2025-03-25T01:26:39.666535415Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:26:39.785196 tar[1482]: linux-amd64/LICENSE Mar 25 01:26:39.785332 tar[1482]: linux-amd64/README.md Mar 25 01:26:39.807338 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:26:39.847456 containerd[1485]: time="2025-03-25T01:26:39.847395510Z" level=info msg="Start subscribing containerd event" Mar 25 01:26:39.847456 containerd[1485]: time="2025-03-25T01:26:39.847467160Z" level=info msg="Start recovering state" Mar 25 01:26:39.847652 containerd[1485]: time="2025-03-25T01:26:39.847616360Z" level=info msg="Start event monitor" Mar 25 01:26:39.847652 containerd[1485]: time="2025-03-25T01:26:39.847636298Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:26:39.847652 containerd[1485]: time="2025-03-25T01:26:39.847648435Z" level=info msg="Start streaming server" Mar 25 01:26:39.847709 containerd[1485]: time="2025-03-25T01:26:39.847659605Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:26:39.847709 containerd[1485]: time="2025-03-25T01:26:39.847672811Z" level=info msg="runtime interface starting up..." Mar 25 01:26:39.847709 containerd[1485]: time="2025-03-25T01:26:39.847682750Z" level=info msg="starting plugins..." Mar 25 01:26:39.847764 containerd[1485]: time="2025-03-25T01:26:39.847675124Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 01:26:39.847792 containerd[1485]: time="2025-03-25T01:26:39.847702748Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:26:39.847924 containerd[1485]: time="2025-03-25T01:26:39.847764862Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 25 01:26:39.848099 containerd[1485]: time="2025-03-25T01:26:39.848057711Z" level=info msg="containerd successfully booted in 0.288654s" Mar 25 01:26:39.848153 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 01:26:39.901231 systemd-networkd[1394]: eth0: Gained IPv6LL Mar 25 01:26:39.904327 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 01:26:39.906522 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 01:26:39.909536 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 25 01:26:39.912507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:26:39.923172 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 01:26:39.946577 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 25 01:26:39.946897 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 25 01:26:39.948694 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 25 01:26:39.962746 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 01:26:40.436253 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 01:26:40.439748 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:44744.service - OpenSSH per-connection server daemon (10.0.0.1:44744). Mar 25 01:26:40.537842 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 44744 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:26:40.540260 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:26:40.547511 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:26:40.550266 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 25 01:26:40.558827 systemd-logind[1470]: New session 1 of user core. Mar 25 01:26:40.588502 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:26:40.594348 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 25 01:26:40.613091 (systemd)[1582]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:26:40.616962 systemd-logind[1470]: New session c1 of user core. Mar 25 01:26:40.822918 systemd[1582]: Queued start job for default target default.target. Mar 25 01:26:40.833808 systemd[1582]: Created slice app.slice - User Application Slice. Mar 25 01:26:40.833845 systemd[1582]: Reached target paths.target - Paths. Mar 25 01:26:40.833903 systemd[1582]: Reached target timers.target - Timers. Mar 25 01:26:40.835946 systemd[1582]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:26:40.848755 systemd[1582]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:26:40.848945 systemd[1582]: Reached target sockets.target - Sockets. Mar 25 01:26:40.848997 systemd[1582]: Reached target basic.target - Basic System. Mar 25 01:26:40.849092 systemd[1582]: Reached target default.target - Main User Target. Mar 25 01:26:40.849138 systemd[1582]: Startup finished in 223ms. Mar 25 01:26:40.851490 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:26:40.872147 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:26:40.951333 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:44754.service - OpenSSH per-connection server daemon (10.0.0.1:44754). Mar 25 01:26:41.031596 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 44754 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:26:41.033209 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:26:41.039028 systemd-logind[1470]: New session 2 of user core. 
Mar 25 01:26:41.053157 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 25 01:26:41.056073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:26:41.059502 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:26:41.061304 systemd[1]: Startup finished in 815ms (kernel) + 10.158s (initrd) + 5.225s (userspace) = 16.199s. Mar 25 01:26:41.074571 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:26:41.132352 sshd[1601]: Connection closed by 10.0.0.1 port 44754 Mar 25 01:26:41.132688 sshd-session[1593]: pam_unix(sshd:session): session closed for user core Mar 25 01:26:41.146275 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:44754.service: Deactivated successfully. Mar 25 01:26:41.148214 systemd[1]: session-2.scope: Deactivated successfully. Mar 25 01:26:41.148948 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit. Mar 25 01:26:41.151096 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:44770.service - OpenSSH per-connection server daemon (10.0.0.1:44770). Mar 25 01:26:41.152183 systemd-logind[1470]: Removed session 2. Mar 25 01:26:41.208788 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 44770 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:26:41.210951 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:26:41.216589 systemd-logind[1470]: New session 3 of user core. Mar 25 01:26:41.223178 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:26:41.275423 sshd[1617]: Connection closed by 10.0.0.1 port 44770 Mar 25 01:26:41.275819 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Mar 25 01:26:41.288226 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:44770.service: Deactivated successfully. 
Mar 25 01:26:41.290262 systemd[1]: session-3.scope: Deactivated successfully. Mar 25 01:26:41.292242 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit. Mar 25 01:26:41.293561 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:44780.service - OpenSSH per-connection server daemon (10.0.0.1:44780). Mar 25 01:26:41.294711 systemd-logind[1470]: Removed session 3. Mar 25 01:26:41.341016 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 44780 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:26:41.342593 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:26:41.347746 systemd-logind[1470]: New session 4 of user core. Mar 25 01:26:41.354158 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:26:41.410476 sshd[1626]: Connection closed by 10.0.0.1 port 44780 Mar 25 01:26:41.412310 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Mar 25 01:26:41.422711 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:44780.service: Deactivated successfully. Mar 25 01:26:41.424878 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:26:41.425659 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit. Mar 25 01:26:41.427703 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:44782.service - OpenSSH per-connection server daemon (10.0.0.1:44782). Mar 25 01:26:41.428908 systemd-logind[1470]: Removed session 4. Mar 25 01:26:41.478975 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 44782 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:26:41.481017 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:26:41.486070 systemd-logind[1470]: New session 5 of user core. Mar 25 01:26:41.494174 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 25 01:26:41.564957 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:26:41.565361 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:26:41.583391 sudo[1637]: pam_unix(sudo:session): session closed for user root Mar 25 01:26:41.585392 sshd[1636]: Connection closed by 10.0.0.1 port 44782 Mar 25 01:26:41.585902 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Mar 25 01:26:41.596768 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:44782.service: Deactivated successfully. Mar 25 01:26:41.598600 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:26:41.600438 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit. Mar 25 01:26:41.602076 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:44788.service - OpenSSH per-connection server daemon (10.0.0.1:44788). Mar 25 01:26:41.602879 systemd-logind[1470]: Removed session 5. Mar 25 01:26:41.678314 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 44788 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:26:41.680546 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:26:41.686345 systemd-logind[1470]: New session 6 of user core. Mar 25 01:26:41.695152 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 25 01:26:41.754125 kubelet[1599]: E0325 01:26:41.754066 1599 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:26:41.754264 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:26:41.754597 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:26:41.758265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:26:41.758350 sudo[1647]: pam_unix(sudo:session): session closed for user root Mar 25 01:26:41.758470 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:26:41.758861 systemd[1]: kubelet.service: Consumed 1.623s CPU time, 248.2M memory peak. Mar 25 01:26:41.764460 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:26:41.764777 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:26:41.774862 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:26:41.821124 augenrules[1670]: No rules Mar 25 01:26:41.822900 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:26:41.823233 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:26:41.824412 sudo[1646]: pam_unix(sudo:session): session closed for user root Mar 25 01:26:41.825925 sshd[1645]: Connection closed by 10.0.0.1 port 44788 Mar 25 01:26:41.826249 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Mar 25 01:26:41.839321 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:44788.service: Deactivated successfully. 
Mar 25 01:26:41.841364 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:26:41.843462 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit. Mar 25 01:26:41.844924 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:44800.service - OpenSSH per-connection server daemon (10.0.0.1:44800). Mar 25 01:26:41.846083 systemd-logind[1470]: Removed session 6. Mar 25 01:26:41.900747 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 44800 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:26:41.902677 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:26:41.908058 systemd-logind[1470]: New session 7 of user core. Mar 25 01:26:41.922342 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 01:26:41.978418 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:26:41.978834 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:26:42.639912 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 01:26:42.654381 (dockerd)[1702]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:26:43.194368 dockerd[1702]: time="2025-03-25T01:26:43.194287770Z" level=info msg="Starting up" Mar 25 01:26:43.197645 dockerd[1702]: time="2025-03-25T01:26:43.197603044Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:26:44.296907 dockerd[1702]: time="2025-03-25T01:26:44.296824025Z" level=info msg="Loading containers: start." Mar 25 01:26:44.652049 kernel: Initializing XFRM netlink socket Mar 25 01:26:44.738036 systemd-networkd[1394]: docker0: Link UP Mar 25 01:26:44.990949 dockerd[1702]: time="2025-03-25T01:26:44.990810223Z" level=info msg="Loading containers: done." 
Mar 25 01:26:45.094087 dockerd[1702]: time="2025-03-25T01:26:45.094017458Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:26:45.094290 dockerd[1702]: time="2025-03-25T01:26:45.094160346Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:26:45.094347 dockerd[1702]: time="2025-03-25T01:26:45.094308585Z" level=info msg="Daemon has completed initialization" Mar 25 01:26:45.260791 dockerd[1702]: time="2025-03-25T01:26:45.260634939Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:26:45.260837 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:26:46.170842 containerd[1485]: time="2025-03-25T01:26:46.170785023Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 25 01:26:46.833308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532047576.mount: Deactivated successfully. 
Mar 25 01:26:48.015776 containerd[1485]: time="2025-03-25T01:26:48.015726146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:26:48.016936 containerd[1485]: time="2025-03-25T01:26:48.016809499Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 25 01:26:48.018265 containerd[1485]: time="2025-03-25T01:26:48.018211140Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:26:48.021040 containerd[1485]: time="2025-03-25T01:26:48.021005579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:26:48.021977 containerd[1485]: time="2025-03-25T01:26:48.021932952Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 1.85109308s" Mar 25 01:26:48.022047 containerd[1485]: time="2025-03-25T01:26:48.021979284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 25 01:26:48.047138 containerd[1485]: time="2025-03-25T01:26:48.047089905Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 25 01:26:50.429759 containerd[1485]: time="2025-03-25T01:26:50.429651541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:26:50.433885 containerd[1485]: time="2025-03-25T01:26:50.433786295Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 25 01:26:50.435772 containerd[1485]: time="2025-03-25T01:26:50.435693715Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:26:50.438741 containerd[1485]: time="2025-03-25T01:26:50.438685259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:26:50.439575 containerd[1485]: time="2025-03-25T01:26:50.439538006Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 2.392405033s" Mar 25 01:26:50.439666 containerd[1485]: time="2025-03-25T01:26:50.439577221Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 25 01:26:50.466711 containerd[1485]: time="2025-03-25T01:26:50.466660951Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 25 01:26:51.618531 containerd[1485]: time="2025-03-25T01:26:51.618450379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:26:51.619494 containerd[1485]: 
time="2025-03-25T01:26:51.619441494Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 25 01:26:51.620855 containerd[1485]: time="2025-03-25T01:26:51.620823326Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:26:51.624037 containerd[1485]: time="2025-03-25T01:26:51.623977765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:26:51.624842 containerd[1485]: time="2025-03-25T01:26:51.624790457Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.158082048s" Mar 25 01:26:51.624906 containerd[1485]: time="2025-03-25T01:26:51.624844255Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 25 01:26:51.646171 containerd[1485]: time="2025-03-25T01:26:51.646120785Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 25 01:26:52.008981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 01:26:52.011326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:26:52.262260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:26:52.280447 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 01:26:52.659326 kubelet[2016]: E0325 01:26:52.659163 2016 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 01:26:52.666631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 01:26:52.666852 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 01:26:52.667275 systemd[1]: kubelet.service: Consumed 300ms CPU time, 98.1M memory peak.
Mar 25 01:26:53.968298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount261541528.mount: Deactivated successfully.
Mar 25 01:26:55.677210 containerd[1485]: time="2025-03-25T01:26:55.677127925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:26:55.680783 containerd[1485]: time="2025-03-25T01:26:55.680714428Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372"
Mar 25 01:26:55.684085 containerd[1485]: time="2025-03-25T01:26:55.683956522Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:26:55.687088 containerd[1485]: time="2025-03-25T01:26:55.686998906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:26:55.687535 containerd[1485]: time="2025-03-25T01:26:55.687486376Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 4.041324211s"
Mar 25 01:26:55.687535 containerd[1485]: time="2025-03-25T01:26:55.687521980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 25 01:26:55.710756 containerd[1485]: time="2025-03-25T01:26:55.710682973Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 25 01:26:56.822888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1283309242.mount: Deactivated successfully.
Mar 25 01:26:57.858819 containerd[1485]: time="2025-03-25T01:26:57.858737219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:26:57.914997 containerd[1485]: time="2025-03-25T01:26:57.914876791Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Mar 25 01:26:57.954641 containerd[1485]: time="2025-03-25T01:26:57.954573192Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:26:58.001355 containerd[1485]: time="2025-03-25T01:26:58.001284690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:26:58.002330 containerd[1485]: time="2025-03-25T01:26:58.002253905Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.291507969s"
Mar 25 01:26:58.002330 containerd[1485]: time="2025-03-25T01:26:58.002322141Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 25 01:26:58.023154 containerd[1485]: time="2025-03-25T01:26:58.023100447Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 25 01:26:58.685934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1289189145.mount: Deactivated successfully.
Mar 25 01:26:58.693130 containerd[1485]: time="2025-03-25T01:26:58.693044330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:26:58.694033 containerd[1485]: time="2025-03-25T01:26:58.693942919Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Mar 25 01:26:58.695687 containerd[1485]: time="2025-03-25T01:26:58.695643451Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:26:58.700737 containerd[1485]: time="2025-03-25T01:26:58.700637172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:26:58.701609 containerd[1485]: time="2025-03-25T01:26:58.701558109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 678.410436ms"
Mar 25 01:26:58.701609 containerd[1485]: time="2025-03-25T01:26:58.701597423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 25 01:26:58.725210 containerd[1485]: time="2025-03-25T01:26:58.725161930Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 25 01:26:59.295236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157926779.mount: Deactivated successfully.
Mar 25 01:27:02.917632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 25 01:27:02.919654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:27:03.126200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:27:03.147504 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 01:27:03.388051 kubelet[2165]: E0325 01:27:03.387838 2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 01:27:03.391951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 01:27:03.392175 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 01:27:03.392522 systemd[1]: kubelet.service: Consumed 275ms CPU time, 96.6M memory peak.
Mar 25 01:27:03.980212 containerd[1485]: time="2025-03-25T01:27:03.980137630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:27:04.014275 containerd[1485]: time="2025-03-25T01:27:04.014154926Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Mar 25 01:27:04.026623 containerd[1485]: time="2025-03-25T01:27:04.026548625Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:27:04.037132 containerd[1485]: time="2025-03-25T01:27:04.037040171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:27:04.038346 containerd[1485]: time="2025-03-25T01:27:04.038304185Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.313092299s"
Mar 25 01:27:04.038400 containerd[1485]: time="2025-03-25T01:27:04.038344838Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 25 01:27:06.793607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:27:06.793831 systemd[1]: kubelet.service: Consumed 275ms CPU time, 96.6M memory peak.
Mar 25 01:27:06.796282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:27:06.826080 systemd[1]: Reload requested from client PID 2268 ('systemctl') (unit session-7.scope)...
Mar 25 01:27:06.826104 systemd[1]: Reloading...
Mar 25 01:27:06.934130 zram_generator::config[2314]: No configuration found.
Mar 25 01:27:07.250275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:27:07.372561 systemd[1]: Reloading finished in 545 ms.
Mar 25 01:27:07.435483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:27:07.438911 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:27:07.440617 systemd[1]: kubelet.service: Deactivated successfully.
Mar 25 01:27:07.440956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:27:07.441023 systemd[1]: kubelet.service: Consumed 157ms CPU time, 83.6M memory peak.
Mar 25 01:27:07.443151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:27:07.625041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:27:07.639426 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 25 01:27:07.686081 kubelet[2361]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:27:07.686081 kubelet[2361]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 25 01:27:07.686081 kubelet[2361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:27:07.686529 kubelet[2361]: I0325 01:27:07.686125 2361 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 01:27:08.859524 kubelet[2361]: I0325 01:27:08.859456 2361 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 25 01:27:08.859524 kubelet[2361]: I0325 01:27:08.859499 2361 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 01:27:08.860121 kubelet[2361]: I0325 01:27:08.859729 2361 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 25 01:27:08.879792 kubelet[2361]: I0325 01:27:08.879730 2361 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 01:27:08.881041 kubelet[2361]: E0325 01:27:08.880748 2361 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:08.902055 kubelet[2361]: I0325 01:27:08.901964 2361 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 01:27:08.904289 kubelet[2361]: I0325 01:27:08.904041 2361 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 01:27:08.904377 kubelet[2361]: I0325 01:27:08.904123 2361 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 25 01:27:08.904493 kubelet[2361]: I0325 01:27:08.904407 2361 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 01:27:08.904493 kubelet[2361]: I0325 01:27:08.904422 2361 container_manager_linux.go:301] "Creating device plugin manager"
Mar 25 01:27:08.904858 kubelet[2361]: I0325 01:27:08.904624 2361 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:27:08.905878 kubelet[2361]: I0325 01:27:08.905755 2361 kubelet.go:400] "Attempting to sync node with API server"
Mar 25 01:27:08.905878 kubelet[2361]: I0325 01:27:08.905797 2361 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 01:27:08.905878 kubelet[2361]: I0325 01:27:08.905831 2361 kubelet.go:312] "Adding apiserver pod source"
Mar 25 01:27:08.905878 kubelet[2361]: I0325 01:27:08.905861 2361 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 01:27:08.906941 kubelet[2361]: W0325 01:27:08.906787 2361 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:08.906941 kubelet[2361]: E0325 01:27:08.906868 2361 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:08.906941 kubelet[2361]: W0325 01:27:08.906787 2361 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:08.906941 kubelet[2361]: E0325 01:27:08.906909 2361 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:08.912230 kubelet[2361]: I0325 01:27:08.912182 2361 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 01:27:08.914395 kubelet[2361]: I0325 01:27:08.914345 2361 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 01:27:08.914544 kubelet[2361]: W0325 01:27:08.914424 2361 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 25 01:27:08.915516 kubelet[2361]: I0325 01:27:08.915332 2361 server.go:1264] "Started kubelet"
Mar 25 01:27:08.916471 kubelet[2361]: I0325 01:27:08.916036 2361 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 01:27:08.916566 kubelet[2361]: I0325 01:27:08.916540 2361 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 01:27:08.916642 kubelet[2361]: I0325 01:27:08.916597 2361 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 01:27:08.916963 kubelet[2361]: I0325 01:27:08.916928 2361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 01:27:08.917747 kubelet[2361]: I0325 01:27:08.917718 2361 server.go:455] "Adding debug handlers to kubelet server"
Mar 25 01:27:08.922777 kubelet[2361]: E0325 01:27:08.921593 2361 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:27:08.922777 kubelet[2361]: I0325 01:27:08.921654 2361 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 25 01:27:08.922777 kubelet[2361]: I0325 01:27:08.921775 2361 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 25 01:27:08.922777 kubelet[2361]: I0325 01:27:08.921855 2361 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 01:27:08.922777 kubelet[2361]: W0325 01:27:08.922311 2361 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:08.922777 kubelet[2361]: E0325 01:27:08.922366 2361 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:08.922777 kubelet[2361]: E0325 01:27:08.922457 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms"
Mar 25 01:27:08.927110 kubelet[2361]: I0325 01:27:08.926179 2361 factory.go:221] Registration of the systemd container factory successfully
Mar 25 01:27:08.927110 kubelet[2361]: I0325 01:27:08.926460 2361 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 01:27:08.927110 kubelet[2361]: E0325 01:27:08.926562 2361 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 01:27:08.928480 kubelet[2361]: I0325 01:27:08.927906 2361 factory.go:221] Registration of the containerd container factory successfully
Mar 25 01:27:08.929245 kubelet[2361]: E0325 01:27:08.928868 2361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182fe76b50977f5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-25 01:27:08.915294042 +0000 UTC m=+1.271127653,LastTimestamp:2025-03-25 01:27:08.915294042 +0000 UTC m=+1.271127653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 25 01:27:08.946691 kubelet[2361]: I0325 01:27:08.946649 2361 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 25 01:27:08.946691 kubelet[2361]: I0325 01:27:08.946677 2361 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 25 01:27:08.946867 kubelet[2361]: I0325 01:27:08.946704 2361 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:27:08.947152 kubelet[2361]: I0325 01:27:08.947116 2361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 01:27:08.950621 kubelet[2361]: I0325 01:27:08.950588 2361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 01:27:08.951514 kubelet[2361]: I0325 01:27:08.950897 2361 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 25 01:27:08.951514 kubelet[2361]: I0325 01:27:08.950930 2361 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 25 01:27:08.951514 kubelet[2361]: E0325 01:27:08.951002 2361 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 01:27:08.952727 kubelet[2361]: W0325 01:27:08.952694 2361 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:08.952777 kubelet[2361]: E0325 01:27:08.952734 2361 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:08.954383 kubelet[2361]: I0325 01:27:08.954357 2361 policy_none.go:49] "None policy: Start"
Mar 25 01:27:08.956556 kubelet[2361]: I0325 01:27:08.956519 2361 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 25 01:27:08.956630 kubelet[2361]: I0325 01:27:08.956573 2361 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 01:27:08.973745 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 25 01:27:08.993227 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 25 01:27:08.997049 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 25 01:27:09.016249 kubelet[2361]: I0325 01:27:09.016219 2361 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 25 01:27:09.016525 kubelet[2361]: I0325 01:27:09.016477 2361 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 25 01:27:09.016743 kubelet[2361]: I0325 01:27:09.016619 2361 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 25 01:27:09.017604 kubelet[2361]: E0325 01:27:09.017571 2361 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 25 01:27:09.023413 kubelet[2361]: I0325 01:27:09.023367 2361 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 25 01:27:09.023791 kubelet[2361]: E0325 01:27:09.023765 2361 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Mar 25 01:27:09.052263 kubelet[2361]: I0325 01:27:09.052137 2361 topology_manager.go:215] "Topology Admit Handler" podUID="9267c99915bed4bd52787fd4e23fb225" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 25 01:27:09.053495 kubelet[2361]: I0325 01:27:09.053461 2361 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 25 01:27:09.054687 kubelet[2361]: I0325 01:27:09.054209 2361 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 25 01:27:09.061252 systemd[1]: Created slice kubepods-burstable-pod9267c99915bed4bd52787fd4e23fb225.slice - libcontainer container kubepods-burstable-pod9267c99915bed4bd52787fd4e23fb225.slice.
Mar 25 01:27:09.089484 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice.
Mar 25 01:27:09.094464 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice.
Mar 25 01:27:09.123643 kubelet[2361]: E0325 01:27:09.123515 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms"
Mar 25 01:27:09.223080 kubelet[2361]: I0325 01:27:09.223018 2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9267c99915bed4bd52787fd4e23fb225-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9267c99915bed4bd52787fd4e23fb225\") " pod="kube-system/kube-apiserver-localhost"
Mar 25 01:27:09.223080 kubelet[2361]: I0325 01:27:09.223083 2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9267c99915bed4bd52787fd4e23fb225-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9267c99915bed4bd52787fd4e23fb225\") " pod="kube-system/kube-apiserver-localhost"
Mar 25 01:27:09.223305 kubelet[2361]: I0325 01:27:09.223110 2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:09.223305 kubelet[2361]: I0325 01:27:09.223129 2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:09.223305 kubelet[2361]: I0325 01:27:09.223161 2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9267c99915bed4bd52787fd4e23fb225-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9267c99915bed4bd52787fd4e23fb225\") " pod="kube-system/kube-apiserver-localhost"
Mar 25 01:27:09.223305 kubelet[2361]: I0325 01:27:09.223184 2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:09.223305 kubelet[2361]: I0325 01:27:09.223238 2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:09.223467 kubelet[2361]: I0325 01:27:09.223259 2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:09.223467 kubelet[2361]: I0325 01:27:09.223295 2361 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 25 01:27:09.225152 kubelet[2361]: I0325 01:27:09.225118 2361 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 25 01:27:09.225584 kubelet[2361]: E0325 01:27:09.225536 2361 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Mar 25 01:27:09.387865 kubelet[2361]: E0325 01:27:09.387711 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:09.388622 containerd[1485]: time="2025-03-25T01:27:09.388558002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9267c99915bed4bd52787fd4e23fb225,Namespace:kube-system,Attempt:0,}"
Mar 25 01:27:09.392148 kubelet[2361]: E0325 01:27:09.392111 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:09.392622 containerd[1485]: time="2025-03-25T01:27:09.392588988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}"
Mar 25 01:27:09.396954 kubelet[2361]: E0325 01:27:09.396914 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:09.397203 containerd[1485]: time="2025-03-25T01:27:09.397182042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}"
Mar 25 01:27:09.525087 kubelet[2361]: E0325 01:27:09.525026 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms"
Mar 25 01:27:09.628013 kubelet[2361]: I0325 01:27:09.627955 2361 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 25 01:27:09.628534 kubelet[2361]: E0325 01:27:09.628482 2361 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Mar 25 01:27:10.208436 kubelet[2361]: W0325 01:27:10.208359 2361 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:10.208436 kubelet[2361]: E0325 01:27:10.208445 2361 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:10.325666 kubelet[2361]: E0325 01:27:10.325568 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s"
Mar 25 01:27:10.420087 kubelet[2361]: W0325 01:27:10.419922 2361 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:10.420087 kubelet[2361]: E0325 01:27:10.420076 2361 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:10.430475 kubelet[2361]: I0325 01:27:10.430444 2361 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 25 01:27:10.430971 kubelet[2361]: E0325 01:27:10.430920 2361 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Mar 25 01:27:10.451039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102849008.mount: Deactivated successfully.
Mar 25 01:27:10.452512 kubelet[2361]: W0325 01:27:10.452435 2361 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:10.452512 kubelet[2361]: E0325 01:27:10.452505 2361 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:10.452640 kubelet[2361]: W0325 01:27:10.452557 2361 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:10.452684 kubelet[2361]: E0325 01:27:10.452638 2361 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Mar 25 01:27:10.460418 containerd[1485]: time="2025-03-25T01:27:10.460282440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:27:10.466771 containerd[1485]: time="2025-03-25T01:27:10.466680630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 25 01:27:10.467952 containerd[1485]: time="2025-03-25T01:27:10.467914810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:27:10.470367 containerd[1485]: time="2025-03-25T01:27:10.470332305Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:27:10.471383 containerd[1485]: time="2025-03-25T01:27:10.471313478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 25 01:27:10.472821 containerd[1485]: time="2025-03-25T01:27:10.472752686Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:27:10.473915 containerd[1485]: time="2025-03-25T01:27:10.473880864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:27:10.474660 containerd[1485]: time="2025-03-25T01:27:10.474605470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 25 01:27:10.474726 containerd[1485]: time="2025-03-25T01:27:10.474687958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 945.980568ms"
Mar 25 01:27:10.477977 containerd[1485]: time="2025-03-25T01:27:10.477938517Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 909.053905ms"
Mar 25 01:27:10.478556 containerd[1485]: time="2025-03-25T01:27:10.478527172Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 849.911031ms"
Mar 25 01:27:10.517772 containerd[1485]: time="2025-03-25T01:27:10.517709751Z" level=info msg="connecting to shim 0242b2d7d143e405222de26bd396b4b09f738c7a3b917bc1e536455822e6c186" address="unix:///run/containerd/s/ce22581e62ced9781677c06e4c7de95fcfceb040d7c9115d7cdca7e8d23f1c94" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:27:10.524891 containerd[1485]: time="2025-03-25T01:27:10.523335016Z" level=info msg="connecting to shim 2c4ad4b2abcc3da2251d19485f1b42d0915b66ce7c2024a864bb774890cb0a02"
address="unix:///run/containerd/s/1741e6c87dec6dbfc4695a4b2fe89419dbe074913ccb950c9e2212350b4e6cea" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:27:10.532714 containerd[1485]: time="2025-03-25T01:27:10.532667111Z" level=info msg="connecting to shim ffba212d244984b94956dd989cb1bc771089c8e2ada110529e6e1cde9c113fdf" address="unix:///run/containerd/s/f5004a35eb0ed60d84955b45498519f9c168b6c8a795f0bd88cfc21fa62588da" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:27:10.570239 systemd[1]: Started cri-containerd-0242b2d7d143e405222de26bd396b4b09f738c7a3b917bc1e536455822e6c186.scope - libcontainer container 0242b2d7d143e405222de26bd396b4b09f738c7a3b917bc1e536455822e6c186. Mar 25 01:27:10.572235 systemd[1]: Started cri-containerd-2c4ad4b2abcc3da2251d19485f1b42d0915b66ce7c2024a864bb774890cb0a02.scope - libcontainer container 2c4ad4b2abcc3da2251d19485f1b42d0915b66ce7c2024a864bb774890cb0a02. Mar 25 01:27:10.580365 systemd[1]: Started cri-containerd-ffba212d244984b94956dd989cb1bc771089c8e2ada110529e6e1cde9c113fdf.scope - libcontainer container ffba212d244984b94956dd989cb1bc771089c8e2ada110529e6e1cde9c113fdf. 
Mar 25 01:27:10.664467 containerd[1485]: time="2025-03-25T01:27:10.664405638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9267c99915bed4bd52787fd4e23fb225,Namespace:kube-system,Attempt:0,} returns sandbox id \"0242b2d7d143e405222de26bd396b4b09f738c7a3b917bc1e536455822e6c186\"" Mar 25 01:27:10.665755 kubelet[2361]: E0325 01:27:10.665727 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:10.668898 containerd[1485]: time="2025-03-25T01:27:10.668857360Z" level=info msg="CreateContainer within sandbox \"0242b2d7d143e405222de26bd396b4b09f738c7a3b917bc1e536455822e6c186\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 01:27:10.677197 containerd[1485]: time="2025-03-25T01:27:10.677085814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c4ad4b2abcc3da2251d19485f1b42d0915b66ce7c2024a864bb774890cb0a02\"" Mar 25 01:27:10.678270 kubelet[2361]: E0325 01:27:10.678180 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:10.680471 containerd[1485]: time="2025-03-25T01:27:10.680410732Z" level=info msg="CreateContainer within sandbox \"2c4ad4b2abcc3da2251d19485f1b42d0915b66ce7c2024a864bb774890cb0a02\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 01:27:10.686164 containerd[1485]: time="2025-03-25T01:27:10.686108132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffba212d244984b94956dd989cb1bc771089c8e2ada110529e6e1cde9c113fdf\"" Mar 25 01:27:10.687311 
kubelet[2361]: E0325 01:27:10.687268 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:10.689465 containerd[1485]: time="2025-03-25T01:27:10.689422897Z" level=info msg="CreateContainer within sandbox \"ffba212d244984b94956dd989cb1bc771089c8e2ada110529e6e1cde9c113fdf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 01:27:10.709277 containerd[1485]: time="2025-03-25T01:27:10.709214664Z" level=info msg="Container 28de243cb875783602ac6aed359d336854a8ee7bef850739e87d58b5d2f4c72a: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:27:10.715009 containerd[1485]: time="2025-03-25T01:27:10.714858771Z" level=info msg="Container 941f0c388923b048c1af0f96c761f115ce4378f455773ef73093f0940f072913: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:27:10.720914 containerd[1485]: time="2025-03-25T01:27:10.720868035Z" level=info msg="CreateContainer within sandbox \"0242b2d7d143e405222de26bd396b4b09f738c7a3b917bc1e536455822e6c186\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"28de243cb875783602ac6aed359d336854a8ee7bef850739e87d58b5d2f4c72a\"" Mar 25 01:27:10.721620 containerd[1485]: time="2025-03-25T01:27:10.721583409Z" level=info msg="StartContainer for \"28de243cb875783602ac6aed359d336854a8ee7bef850739e87d58b5d2f4c72a\"" Mar 25 01:27:10.722722 containerd[1485]: time="2025-03-25T01:27:10.722687632Z" level=info msg="connecting to shim 28de243cb875783602ac6aed359d336854a8ee7bef850739e87d58b5d2f4c72a" address="unix:///run/containerd/s/ce22581e62ced9781677c06e4c7de95fcfceb040d7c9115d7cdca7e8d23f1c94" protocol=ttrpc version=3 Mar 25 01:27:10.723794 containerd[1485]: time="2025-03-25T01:27:10.723765535Z" level=info msg="Container e559b26f710e3650a9d48fa5f1fa0f8d90a25e94b4021e9a09eadc3781c200af: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:27:10.733156 containerd[1485]: 
time="2025-03-25T01:27:10.733108075Z" level=info msg="CreateContainer within sandbox \"2c4ad4b2abcc3da2251d19485f1b42d0915b66ce7c2024a864bb774890cb0a02\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"941f0c388923b048c1af0f96c761f115ce4378f455773ef73093f0940f072913\"" Mar 25 01:27:10.734119 containerd[1485]: time="2025-03-25T01:27:10.734004654Z" level=info msg="StartContainer for \"941f0c388923b048c1af0f96c761f115ce4378f455773ef73093f0940f072913\"" Mar 25 01:27:10.735434 containerd[1485]: time="2025-03-25T01:27:10.735402909Z" level=info msg="connecting to shim 941f0c388923b048c1af0f96c761f115ce4378f455773ef73093f0940f072913" address="unix:///run/containerd/s/1741e6c87dec6dbfc4695a4b2fe89419dbe074913ccb950c9e2212350b4e6cea" protocol=ttrpc version=3 Mar 25 01:27:10.736566 containerd[1485]: time="2025-03-25T01:27:10.736463492Z" level=info msg="CreateContainer within sandbox \"ffba212d244984b94956dd989cb1bc771089c8e2ada110529e6e1cde9c113fdf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e559b26f710e3650a9d48fa5f1fa0f8d90a25e94b4021e9a09eadc3781c200af\"" Mar 25 01:27:10.736868 containerd[1485]: time="2025-03-25T01:27:10.736829368Z" level=info msg="StartContainer for \"e559b26f710e3650a9d48fa5f1fa0f8d90a25e94b4021e9a09eadc3781c200af\"" Mar 25 01:27:10.738019 containerd[1485]: time="2025-03-25T01:27:10.737947754Z" level=info msg="connecting to shim e559b26f710e3650a9d48fa5f1fa0f8d90a25e94b4021e9a09eadc3781c200af" address="unix:///run/containerd/s/f5004a35eb0ed60d84955b45498519f9c168b6c8a795f0bd88cfc21fa62588da" protocol=ttrpc version=3 Mar 25 01:27:10.743230 systemd[1]: Started cri-containerd-28de243cb875783602ac6aed359d336854a8ee7bef850739e87d58b5d2f4c72a.scope - libcontainer container 28de243cb875783602ac6aed359d336854a8ee7bef850739e87d58b5d2f4c72a. 
Mar 25 01:27:10.755121 systemd[1]: Started cri-containerd-941f0c388923b048c1af0f96c761f115ce4378f455773ef73093f0940f072913.scope - libcontainer container 941f0c388923b048c1af0f96c761f115ce4378f455773ef73093f0940f072913. Mar 25 01:27:10.759115 systemd[1]: Started cri-containerd-e559b26f710e3650a9d48fa5f1fa0f8d90a25e94b4021e9a09eadc3781c200af.scope - libcontainer container e559b26f710e3650a9d48fa5f1fa0f8d90a25e94b4021e9a09eadc3781c200af. Mar 25 01:27:10.952931 containerd[1485]: time="2025-03-25T01:27:10.952217582Z" level=info msg="StartContainer for \"941f0c388923b048c1af0f96c761f115ce4378f455773ef73093f0940f072913\" returns successfully" Mar 25 01:27:10.952931 containerd[1485]: time="2025-03-25T01:27:10.952395989Z" level=info msg="StartContainer for \"28de243cb875783602ac6aed359d336854a8ee7bef850739e87d58b5d2f4c72a\" returns successfully" Mar 25 01:27:10.953620 containerd[1485]: time="2025-03-25T01:27:10.953335744Z" level=info msg="StartContainer for \"e559b26f710e3650a9d48fa5f1fa0f8d90a25e94b4021e9a09eadc3781c200af\" returns successfully" Mar 25 01:27:10.969222 kubelet[2361]: E0325 01:27:10.965504 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:10.973408 kubelet[2361]: E0325 01:27:10.973383 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:10.983058 kubelet[2361]: E0325 01:27:10.983031 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:11.957701 kubelet[2361]: E0325 01:27:11.957656 2361 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 25 01:27:11.979932 
kubelet[2361]: E0325 01:27:11.979877 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:12.033521 kubelet[2361]: I0325 01:27:12.033478 2361 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 25 01:27:12.043219 kubelet[2361]: I0325 01:27:12.043162 2361 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 25 01:27:12.051172 kubelet[2361]: E0325 01:27:12.051132 2361 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:27:12.152112 kubelet[2361]: E0325 01:27:12.152052 2361 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:27:12.252918 kubelet[2361]: E0325 01:27:12.252772 2361 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:27:12.353305 kubelet[2361]: E0325 01:27:12.353257 2361 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:27:12.383672 kubelet[2361]: E0325 01:27:12.383631 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:12.454354 kubelet[2361]: E0325 01:27:12.454290 2361 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:27:12.908944 kubelet[2361]: I0325 01:27:12.908892 2361 apiserver.go:52] "Watching apiserver" Mar 25 01:27:12.922153 kubelet[2361]: I0325 01:27:12.922127 2361 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:27:14.817371 kubelet[2361]: E0325 01:27:14.817311 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:14.983737 kubelet[2361]: E0325 01:27:14.983692 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:16.833422 kubelet[2361]: E0325 01:27:16.833364 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:16.986522 kubelet[2361]: E0325 01:27:16.986464 2361 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:27:17.569839 systemd[1]: Reload requested from client PID 2640 ('systemctl') (unit session-7.scope)... Mar 25 01:27:17.569860 systemd[1]: Reloading... Mar 25 01:27:17.689038 zram_generator::config[2690]: No configuration found. Mar 25 01:27:17.801219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:27:17.920715 systemd[1]: Reloading finished in 350 ms. Mar 25 01:27:17.948812 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:27:17.964920 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:27:17.965341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:27:17.965411 systemd[1]: kubelet.service: Consumed 1.273s CPU time, 116.8M memory peak. Mar 25 01:27:17.968057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:27:18.168292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:27:18.181393 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:27:18.246194 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:27:18.246194 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:27:18.246194 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:27:18.246627 kubelet[2729]: I0325 01:27:18.246221 2729 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:27:18.250945 kubelet[2729]: I0325 01:27:18.250900 2729 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 25 01:27:18.250945 kubelet[2729]: I0325 01:27:18.250928 2729 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:27:18.251218 kubelet[2729]: I0325 01:27:18.251195 2729 server.go:927] "Client rotation is on, will bootstrap in background" Mar 25 01:27:18.252557 kubelet[2729]: I0325 01:27:18.252525 2729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 25 01:27:18.253783 kubelet[2729]: I0325 01:27:18.253752 2729 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:27:18.274723 kubelet[2729]: I0325 01:27:18.274644 2729 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:27:18.275013 kubelet[2729]: I0325 01:27:18.274944 2729 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:27:18.275201 kubelet[2729]: I0325 01:27:18.275008 2729 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 25 01:27:18.275280 kubelet[2729]: I0325 01:27:18.275213 2729 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 
01:27:18.275280 kubelet[2729]: I0325 01:27:18.275224 2729 container_manager_linux.go:301] "Creating device plugin manager" Mar 25 01:27:18.294085 kubelet[2729]: I0325 01:27:18.294058 2729 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:27:18.294205 kubelet[2729]: I0325 01:27:18.294190 2729 kubelet.go:400] "Attempting to sync node with API server" Mar 25 01:27:18.294250 kubelet[2729]: I0325 01:27:18.294206 2729 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:27:18.294250 kubelet[2729]: I0325 01:27:18.294229 2729 kubelet.go:312] "Adding apiserver pod source" Mar 25 01:27:18.294250 kubelet[2729]: I0325 01:27:18.294248 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:27:18.295599 kubelet[2729]: I0325 01:27:18.295559 2729 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:27:18.297999 kubelet[2729]: I0325 01:27:18.295787 2729 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:27:18.297999 kubelet[2729]: I0325 01:27:18.296282 2729 server.go:1264] "Started kubelet" Mar 25 01:27:18.297999 kubelet[2729]: I0325 01:27:18.297010 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:27:18.297999 kubelet[2729]: I0325 01:27:18.297301 2729 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:27:18.297999 kubelet[2729]: I0325 01:27:18.297332 2729 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:27:18.297999 kubelet[2729]: I0325 01:27:18.297463 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:27:18.298330 kubelet[2729]: I0325 01:27:18.298302 2729 server.go:455] "Adding debug handlers to kubelet server" Mar 25 01:27:18.299321 kubelet[2729]: E0325 01:27:18.299284 2729 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:27:18.299486 kubelet[2729]: I0325 01:27:18.299474 2729 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 25 01:27:18.299785 kubelet[2729]: I0325 01:27:18.299748 2729 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:27:18.300087 kubelet[2729]: I0325 01:27:18.300054 2729 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:27:18.303213 kubelet[2729]: I0325 01:27:18.303178 2729 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:27:18.303512 kubelet[2729]: I0325 01:27:18.303295 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:27:18.308706 kubelet[2729]: E0325 01:27:18.307959 2729 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:27:18.308706 kubelet[2729]: I0325 01:27:18.308108 2729 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:27:18.313367 kubelet[2729]: I0325 01:27:18.310971 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:27:18.313367 kubelet[2729]: I0325 01:27:18.312234 2729 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 01:27:18.313367 kubelet[2729]: I0325 01:27:18.312262 2729 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:27:18.313367 kubelet[2729]: I0325 01:27:18.312278 2729 kubelet.go:2337] "Starting kubelet main sync loop" Mar 25 01:27:18.313367 kubelet[2729]: E0325 01:27:18.312327 2729 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:27:18.353744 kubelet[2729]: I0325 01:27:18.353712 2729 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:27:18.353744 kubelet[2729]: I0325 01:27:18.353730 2729 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:27:18.353744 kubelet[2729]: I0325 01:27:18.353752 2729 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:27:18.353959 kubelet[2729]: I0325 01:27:18.353933 2729 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 25 01:27:18.353959 kubelet[2729]: I0325 01:27:18.353945 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 25 01:27:18.354033 kubelet[2729]: I0325 01:27:18.353968 2729 policy_none.go:49] "None policy: Start" Mar 25 01:27:18.354685 kubelet[2729]: I0325 01:27:18.354664 2729 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:27:18.354723 kubelet[2729]: I0325 01:27:18.354698 2729 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:27:18.354883 kubelet[2729]: I0325 01:27:18.354864 2729 state_mem.go:75] "Updated machine memory state" Mar 25 01:27:18.359341 kubelet[2729]: I0325 01:27:18.359200 2729 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:27:18.359455 kubelet[2729]: I0325 01:27:18.359407 2729 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:27:18.359549 kubelet[2729]: I0325 01:27:18.359517 2729 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:27:18.404665 kubelet[2729]: I0325 01:27:18.404627 2729 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 25 01:27:18.411306 kubelet[2729]: I0325 01:27:18.411267 2729 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 25 01:27:18.411460 kubelet[2729]: I0325 01:27:18.411359 2729 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 25 01:27:18.412542 kubelet[2729]: I0325 01:27:18.412455 2729 topology_manager.go:215] "Topology Admit Handler" podUID="9267c99915bed4bd52787fd4e23fb225" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 25 01:27:18.412685 kubelet[2729]: I0325 01:27:18.412615 2729 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 25 01:27:18.412685 kubelet[2729]: I0325 01:27:18.412685 2729 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 25 01:27:18.418832 kubelet[2729]: E0325 01:27:18.418755 2729 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 25 01:27:18.420224 kubelet[2729]: E0325 01:27:18.420166 2729 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 25 01:27:18.500678 kubelet[2729]: I0325 01:27:18.500526 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9267c99915bed4bd52787fd4e23fb225-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9267c99915bed4bd52787fd4e23fb225\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:27:18.573195 
sudo[2763]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 25 01:27:18.573554 sudo[2763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 25 01:27:18.600887 kubelet[2729]: I0325 01:27:18.600831 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:18.600960 kubelet[2729]: I0325 01:27:18.600918 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:18.600960 kubelet[2729]: I0325 01:27:18.600951 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 25 01:27:18.601044 kubelet[2729]: I0325 01:27:18.600974 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9267c99915bed4bd52787fd4e23fb225-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9267c99915bed4bd52787fd4e23fb225\") " pod="kube-system/kube-apiserver-localhost"
Mar 25 01:27:18.601044 kubelet[2729]: I0325 01:27:18.601016 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:18.601044 kubelet[2729]: I0325 01:27:18.601036 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:18.601116 kubelet[2729]: I0325 01:27:18.601057 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:18.601154 kubelet[2729]: I0325 01:27:18.601119 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9267c99915bed4bd52787fd4e23fb225-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9267c99915bed4bd52787fd4e23fb225\") " pod="kube-system/kube-apiserver-localhost"
Mar 25 01:27:18.720421 kubelet[2729]: E0325 01:27:18.720080 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:18.720421 kubelet[2729]: E0325 01:27:18.720331 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:18.720961 kubelet[2729]: E0325 01:27:18.720801 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:19.049053 sudo[2763]: pam_unix(sudo:session): session closed for user root
Mar 25 01:27:19.295439 kubelet[2729]: I0325 01:27:19.295408 2729 apiserver.go:52] "Watching apiserver"
Mar 25 01:27:19.300056 kubelet[2729]: I0325 01:27:19.299951 2729 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 25 01:27:19.333254 kubelet[2729]: E0325 01:27:19.333227 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:19.338589 kubelet[2729]: E0325 01:27:19.338549 2729 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:27:19.338948 kubelet[2729]: E0325 01:27:19.338925 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:19.339033 kubelet[2729]: E0325 01:27:19.339015 2729 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 25 01:27:19.339419 kubelet[2729]: E0325 01:27:19.339406 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:19.356609 kubelet[2729]: I0325 01:27:19.356438 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.356417738 podStartE2EDuration="3.356417738s" podCreationTimestamp="2025-03-25 01:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:27:19.349863927 +0000 UTC m=+1.164448383" watchObservedRunningTime="2025-03-25 01:27:19.356417738 +0000 UTC m=+1.171002184"
Mar 25 01:27:19.356609 kubelet[2729]: I0325 01:27:19.356536 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.356532828 podStartE2EDuration="1.356532828s" podCreationTimestamp="2025-03-25 01:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:27:19.35610913 +0000 UTC m=+1.170693576" watchObservedRunningTime="2025-03-25 01:27:19.356532828 +0000 UTC m=+1.171117274"
Mar 25 01:27:19.364222 kubelet[2729]: I0325 01:27:19.364152 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.364132192 podStartE2EDuration="5.364132192s" podCreationTimestamp="2025-03-25 01:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:27:19.363897223 +0000 UTC m=+1.178481679" watchObservedRunningTime="2025-03-25 01:27:19.364132192 +0000 UTC m=+1.178716638"
Mar 25 01:27:20.334880 kubelet[2729]: E0325 01:27:20.334841 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:20.334880 kubelet[2729]: E0325 01:27:20.334864 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:20.470790 sudo[1682]: pam_unix(sudo:session): session closed for user root
Mar 25 01:27:20.472191 sshd[1681]: Connection closed by 10.0.0.1 port 44800
Mar 25 01:27:20.473052 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
Mar 25 01:27:20.477226 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:44800.service: Deactivated successfully.
Mar 25 01:27:20.479669 systemd[1]: session-7.scope: Deactivated successfully.
Mar 25 01:27:20.479880 systemd[1]: session-7.scope: Consumed 5.336s CPU time, 272.6M memory peak.
Mar 25 01:27:20.481280 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit.
Mar 25 01:27:20.482027 systemd-logind[1470]: Removed session 7.
Mar 25 01:27:23.122318 kubelet[2729]: E0325 01:27:23.122262 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:23.339875 kubelet[2729]: E0325 01:27:23.339760 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:23.864606 update_engine[1471]: I20250325 01:27:23.864487 1471 update_attempter.cc:509] Updating boot flags...
Mar 25 01:27:24.085021 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2812)
Mar 25 01:27:24.120114 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2810)
Mar 25 01:27:24.167017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2810)
Mar 25 01:27:25.585617 kubelet[2729]: E0325 01:27:25.585563 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:26.345398 kubelet[2729]: E0325 01:27:26.345350 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:29.238286 kubelet[2729]: E0325 01:27:29.238228 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:31.237414 kubelet[2729]: I0325 01:27:31.237377 2729 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 25 01:27:31.237903 containerd[1485]: time="2025-03-25T01:27:31.237860022Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 25 01:27:31.238188 kubelet[2729]: I0325 01:27:31.238158 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 25 01:27:32.638928 kubelet[2729]: I0325 01:27:32.638813 2729 topology_manager.go:215] "Topology Admit Handler" podUID="abbb6e40-51fd-4065-86b8-d369dc02fafd" podNamespace="kube-system" podName="kube-proxy-b4lrs"
Mar 25 01:27:32.644738 kubelet[2729]: I0325 01:27:32.643864 2729 topology_manager.go:215] "Topology Admit Handler" podUID="1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" podNamespace="kube-system" podName="cilium-j2kv9"
Mar 25 01:27:32.655288 kubelet[2729]: I0325 01:27:32.655227 2729 topology_manager.go:215] "Topology Admit Handler" podUID="537e8a1c-01a9-422a-ae6d-79803a377e10" podNamespace="kube-system" podName="cilium-operator-599987898-8rxm9"
Mar 25 01:27:32.657390 systemd[1]: Created slice kubepods-besteffort-podabbb6e40_51fd_4065_86b8_d369dc02fafd.slice - libcontainer container kubepods-besteffort-podabbb6e40_51fd_4065_86b8_d369dc02fafd.slice.
Mar 25 01:27:32.684223 kubelet[2729]: I0325 01:27:32.684148 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-run\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684223 kubelet[2729]: I0325 01:27:32.684210 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-config-path\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684223 kubelet[2729]: I0325 01:27:32.684231 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67565\" (UniqueName: \"kubernetes.io/projected/abbb6e40-51fd-4065-86b8-d369dc02fafd-kube-api-access-67565\") pod \"kube-proxy-b4lrs\" (UID: \"abbb6e40-51fd-4065-86b8-d369dc02fafd\") " pod="kube-system/kube-proxy-b4lrs"
Mar 25 01:27:32.684421 kubelet[2729]: I0325 01:27:32.684250 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-cgroup\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684421 kubelet[2729]: I0325 01:27:32.684267 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7lp\" (UniqueName: \"kubernetes.io/projected/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-kube-api-access-bk7lp\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684421 kubelet[2729]: I0325 01:27:32.684281 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-host-proc-sys-kernel\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684421 kubelet[2729]: I0325 01:27:32.684296 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mg4s\" (UniqueName: \"kubernetes.io/projected/537e8a1c-01a9-422a-ae6d-79803a377e10-kube-api-access-8mg4s\") pod \"cilium-operator-599987898-8rxm9\" (UID: \"537e8a1c-01a9-422a-ae6d-79803a377e10\") " pod="kube-system/cilium-operator-599987898-8rxm9"
Mar 25 01:27:32.684421 kubelet[2729]: I0325 01:27:32.684321 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/abbb6e40-51fd-4065-86b8-d369dc02fafd-kube-proxy\") pod \"kube-proxy-b4lrs\" (UID: \"abbb6e40-51fd-4065-86b8-d369dc02fafd\") " pod="kube-system/kube-proxy-b4lrs"
Mar 25 01:27:32.684546 kubelet[2729]: I0325 01:27:32.684337 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-etc-cni-netd\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684546 kubelet[2729]: I0325 01:27:32.684353 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-clustermesh-secrets\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684546 kubelet[2729]: I0325 01:27:32.684368 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abbb6e40-51fd-4065-86b8-d369dc02fafd-lib-modules\") pod \"kube-proxy-b4lrs\" (UID: \"abbb6e40-51fd-4065-86b8-d369dc02fafd\") " pod="kube-system/kube-proxy-b4lrs"
Mar 25 01:27:32.684546 kubelet[2729]: I0325 01:27:32.684406 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cni-path\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684546 kubelet[2729]: I0325 01:27:32.684424 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-lib-modules\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684662 kubelet[2729]: I0325 01:27:32.684463 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/537e8a1c-01a9-422a-ae6d-79803a377e10-cilium-config-path\") pod \"cilium-operator-599987898-8rxm9\" (UID: \"537e8a1c-01a9-422a-ae6d-79803a377e10\") " pod="kube-system/cilium-operator-599987898-8rxm9"
Mar 25 01:27:32.684662 kubelet[2729]: I0325 01:27:32.684479 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-hostproc\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684662 kubelet[2729]: I0325 01:27:32.684513 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-xtables-lock\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684662 kubelet[2729]: I0325 01:27:32.684533 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abbb6e40-51fd-4065-86b8-d369dc02fafd-xtables-lock\") pod \"kube-proxy-b4lrs\" (UID: \"abbb6e40-51fd-4065-86b8-d369dc02fafd\") " pod="kube-system/kube-proxy-b4lrs"
Mar 25 01:27:32.684662 kubelet[2729]: I0325 01:27:32.684550 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-bpf-maps\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684779 kubelet[2729]: I0325 01:27:32.684566 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-host-proc-sys-net\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.684779 kubelet[2729]: I0325 01:27:32.684582 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-hubble-tls\") pod \"cilium-j2kv9\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") " pod="kube-system/cilium-j2kv9"
Mar 25 01:27:32.687515 systemd[1]: Created slice kubepods-burstable-pod1a901c70_63cd_4b3b_84d7_d7c5fb2b17ae.slice - libcontainer container kubepods-burstable-pod1a901c70_63cd_4b3b_84d7_d7c5fb2b17ae.slice.
Mar 25 01:27:32.691680 systemd[1]: Created slice kubepods-besteffort-pod537e8a1c_01a9_422a_ae6d_79803a377e10.slice - libcontainer container kubepods-besteffort-pod537e8a1c_01a9_422a_ae6d_79803a377e10.slice.
Mar 25 01:27:32.987561 kubelet[2729]: E0325 01:27:32.987441 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:32.987955 containerd[1485]: time="2025-03-25T01:27:32.987914059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b4lrs,Uid:abbb6e40-51fd-4065-86b8-d369dc02fafd,Namespace:kube-system,Attempt:0,}"
Mar 25 01:27:32.990967 kubelet[2729]: E0325 01:27:32.990938 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:32.991633 containerd[1485]: time="2025-03-25T01:27:32.991589387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j2kv9,Uid:1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae,Namespace:kube-system,Attempt:0,}"
Mar 25 01:27:32.994904 kubelet[2729]: E0325 01:27:32.994881 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:32.995284 containerd[1485]: time="2025-03-25T01:27:32.995237951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8rxm9,Uid:537e8a1c-01a9-422a-ae6d-79803a377e10,Namespace:kube-system,Attempt:0,}"
Mar 25 01:27:33.686728 containerd[1485]: time="2025-03-25T01:27:33.686666023Z" level=info msg="connecting to shim 8cb1af615dc9d27fdf45fa575abba437b48074ff67e92d76d8f1f15b2584150d" address="unix:///run/containerd/s/9ff87453939f059b306fb7d0c3e34be7d1af976d22d69e638a0eb11408e5a9b8" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:27:33.688571 containerd[1485]: time="2025-03-25T01:27:33.688543011Z" level=info msg="connecting to shim 07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8" address="unix:///run/containerd/s/bfe8127aabf823fa6d326ee352fac4c3484903747f7229783268ca4bdfac2ae8" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:27:33.693092 containerd[1485]: time="2025-03-25T01:27:33.693031930Z" level=info msg="connecting to shim df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80" address="unix:///run/containerd/s/c0a62cbd72788fa0a7fd5a951755909ec94a2a788653ba57d1678d2e6afdc8d9" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:27:33.748329 systemd[1]: Started cri-containerd-07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8.scope - libcontainer container 07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8.
Mar 25 01:27:33.753771 systemd[1]: Started cri-containerd-8cb1af615dc9d27fdf45fa575abba437b48074ff67e92d76d8f1f15b2584150d.scope - libcontainer container 8cb1af615dc9d27fdf45fa575abba437b48074ff67e92d76d8f1f15b2584150d.
Mar 25 01:27:33.755561 systemd[1]: Started cri-containerd-df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80.scope - libcontainer container df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80.
Mar 25 01:27:33.830746 containerd[1485]: time="2025-03-25T01:27:33.830659354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j2kv9,Uid:1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\""
Mar 25 01:27:33.831906 kubelet[2729]: E0325 01:27:33.831881 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:33.833494 containerd[1485]: time="2025-03-25T01:27:33.833421582Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 25 01:27:33.838380 containerd[1485]: time="2025-03-25T01:27:33.838334831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b4lrs,Uid:abbb6e40-51fd-4065-86b8-d369dc02fafd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cb1af615dc9d27fdf45fa575abba437b48074ff67e92d76d8f1f15b2584150d\""
Mar 25 01:27:33.839512 kubelet[2729]: E0325 01:27:33.839474 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:33.840341 containerd[1485]: time="2025-03-25T01:27:33.840294565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8rxm9,Uid:537e8a1c-01a9-422a-ae6d-79803a377e10,Namespace:kube-system,Attempt:0,} returns sandbox id \"07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8\""
Mar 25 01:27:33.841087 kubelet[2729]: E0325 01:27:33.840906 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:33.842559 containerd[1485]: time="2025-03-25T01:27:33.842478884Z" level=info msg="CreateContainer within sandbox \"8cb1af615dc9d27fdf45fa575abba437b48074ff67e92d76d8f1f15b2584150d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 25 01:27:33.860671 containerd[1485]: time="2025-03-25T01:27:33.860605923Z" level=info msg="Container 11d4f091b40081a7aeedd2e12cac695c5e4b0af1c064e2ebd449f9005f80bf8b: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:27:33.864892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount377398908.mount: Deactivated successfully.
Mar 25 01:27:33.877184 containerd[1485]: time="2025-03-25T01:27:33.877108686Z" level=info msg="CreateContainer within sandbox \"8cb1af615dc9d27fdf45fa575abba437b48074ff67e92d76d8f1f15b2584150d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"11d4f091b40081a7aeedd2e12cac695c5e4b0af1c064e2ebd449f9005f80bf8b\""
Mar 25 01:27:33.877884 containerd[1485]: time="2025-03-25T01:27:33.877842219Z" level=info msg="StartContainer for \"11d4f091b40081a7aeedd2e12cac695c5e4b0af1c064e2ebd449f9005f80bf8b\""
Mar 25 01:27:33.879267 containerd[1485]: time="2025-03-25T01:27:33.879240207Z" level=info msg="connecting to shim 11d4f091b40081a7aeedd2e12cac695c5e4b0af1c064e2ebd449f9005f80bf8b" address="unix:///run/containerd/s/9ff87453939f059b306fb7d0c3e34be7d1af976d22d69e638a0eb11408e5a9b8" protocol=ttrpc version=3
Mar 25 01:27:33.908289 systemd[1]: Started cri-containerd-11d4f091b40081a7aeedd2e12cac695c5e4b0af1c064e2ebd449f9005f80bf8b.scope - libcontainer container 11d4f091b40081a7aeedd2e12cac695c5e4b0af1c064e2ebd449f9005f80bf8b.
Mar 25 01:27:33.962962 containerd[1485]: time="2025-03-25T01:27:33.962786161Z" level=info msg="StartContainer for \"11d4f091b40081a7aeedd2e12cac695c5e4b0af1c064e2ebd449f9005f80bf8b\" returns successfully"
Mar 25 01:27:34.359727 kubelet[2729]: E0325 01:27:34.359693 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:27:44.215189 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:55322.service - OpenSSH per-connection server daemon (10.0.0.1:55322).
Mar 25 01:27:44.321368 sshd[3113]: Accepted publickey for core from 10.0.0.1 port 55322 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:27:44.323108 sshd-session[3113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:27:44.327860 systemd-logind[1470]: New session 8 of user core.
Mar 25 01:27:44.336144 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 25 01:27:44.488357 sshd[3115]: Connection closed by 10.0.0.1 port 55322
Mar 25 01:27:44.488740 sshd-session[3113]: pam_unix(sshd:session): session closed for user core
Mar 25 01:27:44.493062 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:55322.service: Deactivated successfully.
Mar 25 01:27:44.495674 systemd[1]: session-8.scope: Deactivated successfully.
Mar 25 01:27:44.496679 systemd-logind[1470]: Session 8 logged out. Waiting for processes to exit.
Mar 25 01:27:44.497679 systemd-logind[1470]: Removed session 8.
Mar 25 01:27:49.501969 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:44354.service - OpenSSH per-connection server daemon (10.0.0.1:44354).
Mar 25 01:27:49.557153 sshd[3130]: Accepted publickey for core from 10.0.0.1 port 44354 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:27:49.558524 sshd-session[3130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:27:49.562635 systemd-logind[1470]: New session 9 of user core.
Mar 25 01:27:49.573125 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 25 01:27:49.694336 sshd[3132]: Connection closed by 10.0.0.1 port 44354
Mar 25 01:27:49.694692 sshd-session[3130]: pam_unix(sshd:session): session closed for user core
Mar 25 01:27:49.699833 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:44354.service: Deactivated successfully.
Mar 25 01:27:49.702735 systemd[1]: session-9.scope: Deactivated successfully.
Mar 25 01:27:49.703571 systemd-logind[1470]: Session 9 logged out. Waiting for processes to exit.
Mar 25 01:27:49.704669 systemd-logind[1470]: Removed session 9.
Mar 25 01:27:54.708687 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:44366.service - OpenSSH per-connection server daemon (10.0.0.1:44366).
Mar 25 01:27:55.092526 sshd[3146]: Accepted publickey for core from 10.0.0.1 port 44366 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:27:55.094043 sshd-session[3146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:27:55.098134 systemd-logind[1470]: New session 10 of user core.
Mar 25 01:27:55.107107 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 25 01:27:55.268739 sshd[3148]: Connection closed by 10.0.0.1 port 44366
Mar 25 01:27:55.269070 sshd-session[3146]: pam_unix(sshd:session): session closed for user core
Mar 25 01:27:55.272802 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:44366.service: Deactivated successfully.
Mar 25 01:27:55.274974 systemd[1]: session-10.scope: Deactivated successfully.
Mar 25 01:27:55.275753 systemd-logind[1470]: Session 10 logged out. Waiting for processes to exit.
Mar 25 01:27:55.276624 systemd-logind[1470]: Removed session 10.
Mar 25 01:28:00.116394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892782025.mount: Deactivated successfully.
Mar 25 01:28:00.282434 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:50838.service - OpenSSH per-connection server daemon (10.0.0.1:50838).
Mar 25 01:28:00.492968 sshd[3166]: Accepted publickey for core from 10.0.0.1 port 50838 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:00.494792 sshd-session[3166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:00.499806 systemd-logind[1470]: New session 11 of user core.
Mar 25 01:28:00.507452 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 25 01:28:00.635384 sshd[3176]: Connection closed by 10.0.0.1 port 50838
Mar 25 01:28:00.635782 sshd-session[3166]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:00.639494 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:50838.service: Deactivated successfully.
Mar 25 01:28:00.642213 systemd[1]: session-11.scope: Deactivated successfully.
Mar 25 01:28:00.644464 systemd-logind[1470]: Session 11 logged out. Waiting for processes to exit.
Mar 25 01:28:00.646514 systemd-logind[1470]: Removed session 11.
Mar 25 01:28:04.012592 containerd[1485]: time="2025-03-25T01:28:04.012519276Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:28:04.013805 containerd[1485]: time="2025-03-25T01:28:04.013729513Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 25 01:28:04.015469 containerd[1485]: time="2025-03-25T01:28:04.015413908Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:28:04.017004 containerd[1485]: time="2025-03-25T01:28:04.016939977Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 30.183449778s"
Mar 25 01:28:04.017124 containerd[1485]: time="2025-03-25T01:28:04.017011707Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 25 01:28:04.018608 containerd[1485]: time="2025-03-25T01:28:04.018570609Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 25 01:28:04.019666 containerd[1485]: time="2025-03-25T01:28:04.019628811Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 25 01:28:04.030781 containerd[1485]: time="2025-03-25T01:28:04.030708474Z" level=info msg="Container 62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:28:04.039195 containerd[1485]: time="2025-03-25T01:28:04.039135484Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\""
Mar 25 01:28:04.040005 containerd[1485]: time="2025-03-25T01:28:04.039956546Z" level=info msg="StartContainer for \"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\""
Mar 25 01:28:04.041001 containerd[1485]: time="2025-03-25T01:28:04.040954472Z" level=info msg="connecting to shim 62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6" address="unix:///run/containerd/s/c0a62cbd72788fa0a7fd5a951755909ec94a2a788653ba57d1678d2e6afdc8d9" protocol=ttrpc version=3
Mar 25 01:28:04.070336 systemd[1]: Started cri-containerd-62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6.scope - libcontainer container 62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6.
Mar 25 01:28:04.103191 containerd[1485]: time="2025-03-25T01:28:04.103125510Z" level=info msg="StartContainer for \"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\" returns successfully"
Mar 25 01:28:04.116561 systemd[1]: cri-containerd-62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6.scope: Deactivated successfully.
Mar 25 01:28:04.120604 containerd[1485]: time="2025-03-25T01:28:04.120527433Z" level=info msg="received exit event container_id:\"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\" id:\"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\" pid:3222 exited_at:{seconds:1742866084 nanos:120089243}"
Mar 25 01:28:04.120820 containerd[1485]: time="2025-03-25T01:28:04.120604282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\" id:\"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\" pid:3222 exited_at:{seconds:1742866084 nanos:120089243}"
Mar 25 01:28:04.144735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6-rootfs.mount: Deactivated successfully.
Mar 25 01:28:04.415728 kubelet[2729]: E0325 01:28:04.415524 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:04.418372 containerd[1485]: time="2025-03-25T01:28:04.417882414Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 25 01:28:04.429645 containerd[1485]: time="2025-03-25T01:28:04.429581798Z" level=info msg="Container 0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:28:04.435189 kubelet[2729]: I0325 01:28:04.434841 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b4lrs" podStartSLOduration=32.434806369 podStartE2EDuration="32.434806369s" podCreationTimestamp="2025-03-25 01:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:27:34.407303922 +0000 UTC m=+16.221888369" watchObservedRunningTime="2025-03-25 01:28:04.434806369 +0000 UTC m=+46.249390825"
Mar 25 01:28:04.438596 containerd[1485]: time="2025-03-25T01:28:04.438540569Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\""
Mar 25 01:28:04.439221 containerd[1485]: time="2025-03-25T01:28:04.439186902Z" level=info msg="StartContainer for \"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\""
Mar 25 01:28:04.440094 containerd[1485]: time="2025-03-25T01:28:04.440067360Z" level=info msg="connecting to shim 0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec" address="unix:///run/containerd/s/c0a62cbd72788fa0a7fd5a951755909ec94a2a788653ba57d1678d2e6afdc8d9" protocol=ttrpc version=3
Mar 25 01:28:04.475292 systemd[1]: Started cri-containerd-0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec.scope - libcontainer container 0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec.
Mar 25 01:28:04.510721 containerd[1485]: time="2025-03-25T01:28:04.510679226Z" level=info msg="StartContainer for \"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\" returns successfully"
Mar 25 01:28:04.524716 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:28:04.525383 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:28:04.525585 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:28:04.527391 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:28:04.527887 systemd[1]: cri-containerd-0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec.scope: Deactivated successfully.
Mar 25 01:28:04.528891 containerd[1485]: time="2025-03-25T01:28:04.528521051Z" level=info msg="received exit event container_id:\"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\" id:\"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\" pid:3265 exited_at:{seconds:1742866084 nanos:528087842}" Mar 25 01:28:04.529948 containerd[1485]: time="2025-03-25T01:28:04.529777247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\" id:\"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\" pid:3265 exited_at:{seconds:1742866084 nanos:528087842}" Mar 25 01:28:04.562127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:28:05.419698 kubelet[2729]: E0325 01:28:05.419660 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:05.422221 containerd[1485]: time="2025-03-25T01:28:05.422183835Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:28:05.649645 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:60694.service - OpenSSH per-connection server daemon (10.0.0.1:60694). Mar 25 01:28:05.807753 sshd[3302]: Accepted publickey for core from 10.0.0.1 port 60694 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:28:05.809724 sshd-session[3302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:28:05.814098 systemd-logind[1470]: New session 12 of user core. Mar 25 01:28:05.821109 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 25 01:28:05.946485 sshd[3304]: Connection closed by 10.0.0.1 port 60694 Mar 25 01:28:05.946846 sshd-session[3302]: pam_unix(sshd:session): session closed for user core Mar 25 01:28:05.951846 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:60694.service: Deactivated successfully. Mar 25 01:28:05.954772 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:28:05.955702 systemd-logind[1470]: Session 12 logged out. Waiting for processes to exit. Mar 25 01:28:05.956855 systemd-logind[1470]: Removed session 12. Mar 25 01:28:06.130301 containerd[1485]: time="2025-03-25T01:28:06.130169578Z" level=info msg="Container 08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:28:06.135223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780838848.mount: Deactivated successfully. Mar 25 01:28:06.312286 containerd[1485]: time="2025-03-25T01:28:06.312234363Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\"" Mar 25 01:28:06.313705 containerd[1485]: time="2025-03-25T01:28:06.313159054Z" level=info msg="StartContainer for \"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\"" Mar 25 01:28:06.314699 containerd[1485]: time="2025-03-25T01:28:06.314644492Z" level=info msg="connecting to shim 08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227" address="unix:///run/containerd/s/c0a62cbd72788fa0a7fd5a951755909ec94a2a788653ba57d1678d2e6afdc8d9" protocol=ttrpc version=3 Mar 25 01:28:06.346219 systemd[1]: Started cri-containerd-08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227.scope - libcontainer container 08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227. 
Mar 25 01:28:06.390447 systemd[1]: cri-containerd-08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227.scope: Deactivated successfully. Mar 25 01:28:06.391700 containerd[1485]: time="2025-03-25T01:28:06.391663238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\" id:\"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\" pid:3329 exited_at:{seconds:1742866086 nanos:391224969}" Mar 25 01:28:06.391834 containerd[1485]: time="2025-03-25T01:28:06.391759104Z" level=info msg="received exit event container_id:\"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\" id:\"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\" pid:3329 exited_at:{seconds:1742866086 nanos:391224969}" Mar 25 01:28:06.392658 containerd[1485]: time="2025-03-25T01:28:06.392612117Z" level=info msg="StartContainer for \"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\" returns successfully" Mar 25 01:28:06.415290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227-rootfs.mount: Deactivated successfully. Mar 25 01:28:06.428426 kubelet[2729]: E0325 01:28:06.428386 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:07.181292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363139749.mount: Deactivated successfully. 
Mar 25 01:28:07.434888 kubelet[2729]: E0325 01:28:07.434748 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:07.439154 containerd[1485]: time="2025-03-25T01:28:07.439104635Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:28:07.474861 containerd[1485]: time="2025-03-25T01:28:07.474739930Z" level=info msg="Container 806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:28:07.491519 containerd[1485]: time="2025-03-25T01:28:07.491429001Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\"" Mar 25 01:28:07.492186 containerd[1485]: time="2025-03-25T01:28:07.492110411Z" level=info msg="StartContainer for \"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\"" Mar 25 01:28:07.493358 containerd[1485]: time="2025-03-25T01:28:07.493327918Z" level=info msg="connecting to shim 806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e" address="unix:///run/containerd/s/c0a62cbd72788fa0a7fd5a951755909ec94a2a788653ba57d1678d2e6afdc8d9" protocol=ttrpc version=3 Mar 25 01:28:07.525278 systemd[1]: Started cri-containerd-806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e.scope - libcontainer container 806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e. Mar 25 01:28:07.553369 systemd[1]: cri-containerd-806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e.scope: Deactivated successfully. 
Mar 25 01:28:07.554018 containerd[1485]: time="2025-03-25T01:28:07.553909787Z" level=info msg="TaskExit event in podsandbox handler container_id:\"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\" id:\"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\" pid:3375 exited_at:{seconds:1742866087 nanos:553631688}" Mar 25 01:28:07.556437 containerd[1485]: time="2025-03-25T01:28:07.556399398Z" level=info msg="received exit event container_id:\"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\" id:\"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\" pid:3375 exited_at:{seconds:1742866087 nanos:553631688}" Mar 25 01:28:07.565902 containerd[1485]: time="2025-03-25T01:28:07.565855552Z" level=info msg="StartContainer for \"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\" returns successfully" Mar 25 01:28:08.178143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e-rootfs.mount: Deactivated successfully. Mar 25 01:28:08.442686 kubelet[2729]: E0325 01:28:08.442571 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:08.446951 containerd[1485]: time="2025-03-25T01:28:08.446352018Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:28:08.459498 containerd[1485]: time="2025-03-25T01:28:08.459434506Z" level=info msg="Container e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:28:08.464717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431131916.mount: Deactivated successfully. 
Mar 25 01:28:08.475416 containerd[1485]: time="2025-03-25T01:28:08.475367702Z" level=info msg="CreateContainer within sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\"" Mar 25 01:28:08.475963 containerd[1485]: time="2025-03-25T01:28:08.475932716Z" level=info msg="StartContainer for \"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\"" Mar 25 01:28:08.476926 containerd[1485]: time="2025-03-25T01:28:08.476886974Z" level=info msg="connecting to shim e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637" address="unix:///run/containerd/s/c0a62cbd72788fa0a7fd5a951755909ec94a2a788653ba57d1678d2e6afdc8d9" protocol=ttrpc version=3 Mar 25 01:28:08.506197 systemd[1]: Started cri-containerd-e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637.scope - libcontainer container e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637. 
Mar 25 01:28:08.545376 containerd[1485]: time="2025-03-25T01:28:08.545258391Z" level=info msg="StartContainer for \"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" returns successfully" Mar 25 01:28:08.571554 containerd[1485]: time="2025-03-25T01:28:08.571494124Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:28:08.573704 containerd[1485]: time="2025-03-25T01:28:08.573641322Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 25 01:28:08.575077 containerd[1485]: time="2025-03-25T01:28:08.574999781Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:28:08.577082 containerd[1485]: time="2025-03-25T01:28:08.577040692Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.558433283s" Mar 25 01:28:08.577216 containerd[1485]: time="2025-03-25T01:28:08.577171596Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 25 01:28:08.582692 containerd[1485]: time="2025-03-25T01:28:08.581437215Z" level=info msg="CreateContainer within sandbox 
\"07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 25 01:28:08.594571 containerd[1485]: time="2025-03-25T01:28:08.593822983Z" level=info msg="Container e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:28:08.606259 containerd[1485]: time="2025-03-25T01:28:08.606218792Z" level=info msg="CreateContainer within sandbox \"07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\"" Mar 25 01:28:08.606954 containerd[1485]: time="2025-03-25T01:28:08.606935900Z" level=info msg="StartContainer for \"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\"" Mar 25 01:28:08.608007 containerd[1485]: time="2025-03-25T01:28:08.607973459Z" level=info msg="connecting to shim e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23" address="unix:///run/containerd/s/bfe8127aabf823fa6d326ee352fac4c3484903747f7229783268ca4bdfac2ae8" protocol=ttrpc version=3 Mar 25 01:28:08.629879 systemd[1]: Started cri-containerd-e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23.scope - libcontainer container e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23. 
Mar 25 01:28:08.631810 containerd[1485]: time="2025-03-25T01:28:08.631767013Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" id:\"eb523b5159453d317c4d9fa27e12ca84f0ed10efc4e20deb948383c7153414e2\" pid:3454 exited_at:{seconds:1742866088 nanos:631429580}" Mar 25 01:28:08.633885 kubelet[2729]: I0325 01:28:08.633854 2729 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 25 01:28:08.657364 kubelet[2729]: I0325 01:28:08.657313 2729 topology_manager.go:215] "Topology Admit Handler" podUID="952471d3-1e3e-44aa-9785-9519a9ad3dea" podNamespace="kube-system" podName="coredns-7db6d8ff4d-67l65" Mar 25 01:28:08.657749 kubelet[2729]: I0325 01:28:08.657708 2729 topology_manager.go:215] "Topology Admit Handler" podUID="a0004f58-92e0-436a-b21e-cc2f7acff782" podNamespace="kube-system" podName="coredns-7db6d8ff4d-w5wzj" Mar 25 01:28:08.680028 systemd[1]: Created slice kubepods-burstable-poda0004f58_92e0_436a_b21e_cc2f7acff782.slice - libcontainer container kubepods-burstable-poda0004f58_92e0_436a_b21e_cc2f7acff782.slice. Mar 25 01:28:08.693103 systemd[1]: Created slice kubepods-burstable-pod952471d3_1e3e_44aa_9785_9519a9ad3dea.slice - libcontainer container kubepods-burstable-pod952471d3_1e3e_44aa_9785_9519a9ad3dea.slice. 
Mar 25 01:28:08.719028 kubelet[2729]: I0325 01:28:08.718950 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0004f58-92e0-436a-b21e-cc2f7acff782-config-volume\") pod \"coredns-7db6d8ff4d-w5wzj\" (UID: \"a0004f58-92e0-436a-b21e-cc2f7acff782\") " pod="kube-system/coredns-7db6d8ff4d-w5wzj" Mar 25 01:28:08.719207 kubelet[2729]: I0325 01:28:08.719097 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw6dh\" (UniqueName: \"kubernetes.io/projected/a0004f58-92e0-436a-b21e-cc2f7acff782-kube-api-access-hw6dh\") pod \"coredns-7db6d8ff4d-w5wzj\" (UID: \"a0004f58-92e0-436a-b21e-cc2f7acff782\") " pod="kube-system/coredns-7db6d8ff4d-w5wzj" Mar 25 01:28:08.719207 kubelet[2729]: I0325 01:28:08.719129 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7pvr\" (UniqueName: \"kubernetes.io/projected/952471d3-1e3e-44aa-9785-9519a9ad3dea-kube-api-access-h7pvr\") pod \"coredns-7db6d8ff4d-67l65\" (UID: \"952471d3-1e3e-44aa-9785-9519a9ad3dea\") " pod="kube-system/coredns-7db6d8ff4d-67l65" Mar 25 01:28:08.719276 kubelet[2729]: I0325 01:28:08.719197 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/952471d3-1e3e-44aa-9785-9519a9ad3dea-config-volume\") pod \"coredns-7db6d8ff4d-67l65\" (UID: \"952471d3-1e3e-44aa-9785-9519a9ad3dea\") " pod="kube-system/coredns-7db6d8ff4d-67l65" Mar 25 01:28:08.885097 containerd[1485]: time="2025-03-25T01:28:08.885039612Z" level=info msg="StartContainer for \"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" returns successfully" Mar 25 01:28:08.987037 kubelet[2729]: E0325 01:28:08.986910 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:08.988122 containerd[1485]: time="2025-03-25T01:28:08.987934668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w5wzj,Uid:a0004f58-92e0-436a-b21e-cc2f7acff782,Namespace:kube-system,Attempt:0,}" Mar 25 01:28:08.998337 kubelet[2729]: E0325 01:28:08.998210 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:08.999079 containerd[1485]: time="2025-03-25T01:28:08.999036371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-67l65,Uid:952471d3-1e3e-44aa-9785-9519a9ad3dea,Namespace:kube-system,Attempt:0,}" Mar 25 01:28:09.447326 kubelet[2729]: E0325 01:28:09.447274 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:09.458355 kubelet[2729]: E0325 01:28:09.457527 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:09.492696 kubelet[2729]: I0325 01:28:09.492625 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-8rxm9" podStartSLOduration=2.754634559 podStartE2EDuration="37.49259224s" podCreationTimestamp="2025-03-25 01:27:32 +0000 UTC" firstStartedPulling="2025-03-25 01:27:33.841475454 +0000 UTC m=+15.656059890" lastFinishedPulling="2025-03-25 01:28:08.579433125 +0000 UTC m=+50.394017571" observedRunningTime="2025-03-25 01:28:09.470197918 +0000 UTC m=+51.284782374" watchObservedRunningTime="2025-03-25 01:28:09.49259224 +0000 UTC m=+51.307176686" Mar 25 01:28:10.459722 kubelet[2729]: E0325 01:28:10.459670 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:10.460247 kubelet[2729]: E0325 01:28:10.459862 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:10.961173 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:60708.service - OpenSSH per-connection server daemon (10.0.0.1:60708). Mar 25 01:28:11.020209 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 60708 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:28:11.022049 sshd-session[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:28:11.030085 systemd-logind[1470]: New session 13 of user core. Mar 25 01:28:11.036592 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 01:28:11.173568 sshd[3590]: Connection closed by 10.0.0.1 port 60708 Mar 25 01:28:11.174159 sshd-session[3588]: pam_unix(sshd:session): session closed for user core Mar 25 01:28:11.187363 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:60708.service: Deactivated successfully. Mar 25 01:28:11.190125 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 01:28:11.192119 systemd-logind[1470]: Session 13 logged out. Waiting for processes to exit. Mar 25 01:28:11.193789 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:60714.service - OpenSSH per-connection server daemon (10.0.0.1:60714). Mar 25 01:28:11.195082 systemd-logind[1470]: Removed session 13. Mar 25 01:28:11.262449 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 60714 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:28:11.264297 sshd-session[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:28:11.270357 systemd-logind[1470]: New session 14 of user core. 
Mar 25 01:28:11.280174 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 01:28:11.441997 sshd[3606]: Connection closed by 10.0.0.1 port 60714 Mar 25 01:28:11.443403 sshd-session[3603]: pam_unix(sshd:session): session closed for user core Mar 25 01:28:11.454010 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:60714.service: Deactivated successfully. Mar 25 01:28:11.458237 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 01:28:11.460530 systemd-logind[1470]: Session 14 logged out. Waiting for processes to exit. Mar 25 01:28:11.464745 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:60726.service - OpenSSH per-connection server daemon (10.0.0.1:60726). Mar 25 01:28:11.465006 kubelet[2729]: E0325 01:28:11.464894 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:11.466129 systemd-logind[1470]: Removed session 14. Mar 25 01:28:11.525408 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 60726 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:28:11.527159 sshd-session[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:28:11.532027 systemd-logind[1470]: New session 15 of user core. Mar 25 01:28:11.543134 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 01:28:11.689053 sshd[3620]: Connection closed by 10.0.0.1 port 60726 Mar 25 01:28:11.690479 systemd-networkd[1394]: cilium_host: Link UP Mar 25 01:28:11.692602 sshd-session[3617]: pam_unix(sshd:session): session closed for user core Mar 25 01:28:11.690693 systemd-networkd[1394]: cilium_net: Link UP Mar 25 01:28:11.690955 systemd-networkd[1394]: cilium_net: Gained carrier Mar 25 01:28:11.694371 systemd-networkd[1394]: cilium_host: Gained carrier Mar 25 01:28:11.699251 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:60726.service: Deactivated successfully. 
Mar 25 01:28:11.704384 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 01:28:11.708051 systemd-logind[1470]: Session 15 logged out. Waiting for processes to exit. Mar 25 01:28:11.710559 systemd-logind[1470]: Removed session 15. Mar 25 01:28:11.819475 systemd-networkd[1394]: cilium_vxlan: Link UP Mar 25 01:28:11.819492 systemd-networkd[1394]: cilium_vxlan: Gained carrier Mar 25 01:28:12.071011 kernel: NET: Registered PF_ALG protocol family Mar 25 01:28:12.252155 systemd-networkd[1394]: cilium_host: Gained IPv6LL Mar 25 01:28:12.637101 systemd-networkd[1394]: cilium_net: Gained IPv6LL Mar 25 01:28:12.756452 systemd-networkd[1394]: lxc_health: Link UP Mar 25 01:28:12.758087 systemd-networkd[1394]: lxc_health: Gained carrier Mar 25 01:28:13.001633 kubelet[2729]: E0325 01:28:13.001397 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:13.023028 systemd-networkd[1394]: lxc09a8ad883140: Link UP Mar 25 01:28:13.038014 kernel: eth0: renamed from tmp4d002 Mar 25 01:28:13.046305 systemd-networkd[1394]: lxc09a8ad883140: Gained carrier Mar 25 01:28:13.066931 systemd-networkd[1394]: lxc33435cae394e: Link UP Mar 25 01:28:13.069012 kernel: eth0: renamed from tmp5d426 Mar 25 01:28:13.074113 systemd-networkd[1394]: lxc33435cae394e: Gained carrier Mar 25 01:28:13.089170 kubelet[2729]: I0325 01:28:13.088547 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j2kv9" podStartSLOduration=10.90309925 podStartE2EDuration="41.088528733s" podCreationTimestamp="2025-03-25 01:27:32 +0000 UTC" firstStartedPulling="2025-03-25 01:27:33.832920979 +0000 UTC m=+15.647505426" lastFinishedPulling="2025-03-25 01:28:04.018350463 +0000 UTC m=+45.832934909" observedRunningTime="2025-03-25 01:28:09.493167533 +0000 UTC m=+51.307751979" watchObservedRunningTime="2025-03-25 01:28:13.088528733 +0000 UTC 
m=+54.903113179" Mar 25 01:28:13.149232 systemd-networkd[1394]: cilium_vxlan: Gained IPv6LL Mar 25 01:28:13.469122 kubelet[2729]: E0325 01:28:13.469078 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:14.364326 systemd-networkd[1394]: lxc_health: Gained IPv6LL Mar 25 01:28:14.876244 systemd-networkd[1394]: lxc33435cae394e: Gained IPv6LL Mar 25 01:28:15.132250 systemd-networkd[1394]: lxc09a8ad883140: Gained IPv6LL Mar 25 01:28:16.705843 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:35352.service - OpenSSH per-connection server daemon (10.0.0.1:35352). Mar 25 01:28:16.766119 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 35352 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0 Mar 25 01:28:16.768700 sshd-session[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:28:16.774692 systemd-logind[1470]: New session 16 of user core. Mar 25 01:28:16.780282 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 01:28:16.917153 sshd[4015]: Connection closed by 10.0.0.1 port 35352 Mar 25 01:28:16.917493 sshd-session[4013]: pam_unix(sshd:session): session closed for user core Mar 25 01:28:16.923131 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:35352.service: Deactivated successfully. Mar 25 01:28:16.926596 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 01:28:16.927509 systemd-logind[1470]: Session 16 logged out. Waiting for processes to exit. Mar 25 01:28:16.928628 systemd-logind[1470]: Removed session 16. 
Mar 25 01:28:17.258499 containerd[1485]: time="2025-03-25T01:28:17.258439190Z" level=info msg="connecting to shim 4d002dfb6ef2f44ddefbc9f28d287ce007c221252309cd803d2f4dc5ec218163" address="unix:///run/containerd/s/37a0ed9153d9b026476d074446dfddfed6f4210ffc92cac1cb39b87bd869e76b" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:28:17.287808 containerd[1485]: time="2025-03-25T01:28:17.287742479Z" level=info msg="connecting to shim 5d426fa41c87b24b8e3b8c9c0c3d6d06eae2249198913a98ff45ba7d9481899a" address="unix:///run/containerd/s/157b8a0d49eb39e162ef1cc7a2999275b49d7f8804c46462be57221606a6a0b0" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:28:17.293291 systemd[1]: Started cri-containerd-4d002dfb6ef2f44ddefbc9f28d287ce007c221252309cd803d2f4dc5ec218163.scope - libcontainer container 4d002dfb6ef2f44ddefbc9f28d287ce007c221252309cd803d2f4dc5ec218163. Mar 25 01:28:17.311496 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 25 01:28:17.318127 systemd[1]: Started cri-containerd-5d426fa41c87b24b8e3b8c9c0c3d6d06eae2249198913a98ff45ba7d9481899a.scope - libcontainer container 5d426fa41c87b24b8e3b8c9c0c3d6d06eae2249198913a98ff45ba7d9481899a. 
Mar 25 01:28:17.335275 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 25 01:28:17.905346 containerd[1485]: time="2025-03-25T01:28:17.905257975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w5wzj,Uid:a0004f58-92e0-436a-b21e-cc2f7acff782,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d002dfb6ef2f44ddefbc9f28d287ce007c221252309cd803d2f4dc5ec218163\"" Mar 25 01:28:17.906029 kubelet[2729]: E0325 01:28:17.906000 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:17.908444 containerd[1485]: time="2025-03-25T01:28:17.908418918Z" level=info msg="CreateContainer within sandbox \"4d002dfb6ef2f44ddefbc9f28d287ce007c221252309cd803d2f4dc5ec218163\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:28:18.041752 containerd[1485]: time="2025-03-25T01:28:18.041711258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-67l65,Uid:952471d3-1e3e-44aa-9785-9519a9ad3dea,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d426fa41c87b24b8e3b8c9c0c3d6d06eae2249198913a98ff45ba7d9481899a\"" Mar 25 01:28:18.042512 kubelet[2729]: E0325 01:28:18.042482 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:18.044330 containerd[1485]: time="2025-03-25T01:28:18.044301916Z" level=info msg="CreateContainer within sandbox \"5d426fa41c87b24b8e3b8c9c0c3d6d06eae2249198913a98ff45ba7d9481899a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:28:18.923714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628764943.mount: Deactivated successfully. 
Mar 25 01:28:18.951037 containerd[1485]: time="2025-03-25T01:28:18.950966991Z" level=info msg="Container 4202f17f45967e56d5c01f77ce2fb37ca01e7c3127f5c322e0ff0955a832d500: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:28:18.952973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174709162.mount: Deactivated successfully. Mar 25 01:28:19.003140 containerd[1485]: time="2025-03-25T01:28:19.003091761Z" level=info msg="Container 8a47dd0af667cc0292d8f7d2cf3c8b4b142bfdb44a3d3fe0bf2a4619ecabe2e6: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:28:19.383277 containerd[1485]: time="2025-03-25T01:28:19.383215319Z" level=info msg="CreateContainer within sandbox \"5d426fa41c87b24b8e3b8c9c0c3d6d06eae2249198913a98ff45ba7d9481899a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4202f17f45967e56d5c01f77ce2fb37ca01e7c3127f5c322e0ff0955a832d500\"" Mar 25 01:28:19.383795 containerd[1485]: time="2025-03-25T01:28:19.383762005Z" level=info msg="StartContainer for \"4202f17f45967e56d5c01f77ce2fb37ca01e7c3127f5c322e0ff0955a832d500\"" Mar 25 01:28:19.384693 containerd[1485]: time="2025-03-25T01:28:19.384651805Z" level=info msg="connecting to shim 4202f17f45967e56d5c01f77ce2fb37ca01e7c3127f5c322e0ff0955a832d500" address="unix:///run/containerd/s/157b8a0d49eb39e162ef1cc7a2999275b49d7f8804c46462be57221606a6a0b0" protocol=ttrpc version=3 Mar 25 01:28:19.406123 systemd[1]: Started cri-containerd-4202f17f45967e56d5c01f77ce2fb37ca01e7c3127f5c322e0ff0955a832d500.scope - libcontainer container 4202f17f45967e56d5c01f77ce2fb37ca01e7c3127f5c322e0ff0955a832d500. 
Mar 25 01:28:19.571475 containerd[1485]: time="2025-03-25T01:28:19.571415058Z" level=info msg="CreateContainer within sandbox \"4d002dfb6ef2f44ddefbc9f28d287ce007c221252309cd803d2f4dc5ec218163\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a47dd0af667cc0292d8f7d2cf3c8b4b142bfdb44a3d3fe0bf2a4619ecabe2e6\""
Mar 25 01:28:19.572065 containerd[1485]: time="2025-03-25T01:28:19.572034862Z" level=info msg="StartContainer for \"8a47dd0af667cc0292d8f7d2cf3c8b4b142bfdb44a3d3fe0bf2a4619ecabe2e6\""
Mar 25 01:28:19.573469 containerd[1485]: time="2025-03-25T01:28:19.573375989Z" level=info msg="StartContainer for \"4202f17f45967e56d5c01f77ce2fb37ca01e7c3127f5c322e0ff0955a832d500\" returns successfully"
Mar 25 01:28:19.573469 containerd[1485]: time="2025-03-25T01:28:19.573426984Z" level=info msg="connecting to shim 8a47dd0af667cc0292d8f7d2cf3c8b4b142bfdb44a3d3fe0bf2a4619ecabe2e6" address="unix:///run/containerd/s/37a0ed9153d9b026476d074446dfddfed6f4210ffc92cac1cb39b87bd869e76b" protocol=ttrpc version=3
Mar 25 01:28:19.600269 systemd[1]: Started cri-containerd-8a47dd0af667cc0292d8f7d2cf3c8b4b142bfdb44a3d3fe0bf2a4619ecabe2e6.scope - libcontainer container 8a47dd0af667cc0292d8f7d2cf3c8b4b142bfdb44a3d3fe0bf2a4619ecabe2e6.
Mar 25 01:28:19.682462 containerd[1485]: time="2025-03-25T01:28:19.682317030Z" level=info msg="StartContainer for \"8a47dd0af667cc0292d8f7d2cf3c8b4b142bfdb44a3d3fe0bf2a4619ecabe2e6\" returns successfully"
Mar 25 01:28:20.578423 kubelet[2729]: E0325 01:28:20.577781 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:20.578423 kubelet[2729]: E0325 01:28:20.577932 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:20.602813 kubelet[2729]: I0325 01:28:20.602729 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-67l65" podStartSLOduration=48.602708457 podStartE2EDuration="48.602708457s" podCreationTimestamp="2025-03-25 01:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:28:20.601225372 +0000 UTC m=+62.415809828" watchObservedRunningTime="2025-03-25 01:28:20.602708457 +0000 UTC m=+62.417292913"
Mar 25 01:28:20.603132 kubelet[2729]: I0325 01:28:20.602838 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-w5wzj" podStartSLOduration=48.602834263 podStartE2EDuration="48.602834263s" podCreationTimestamp="2025-03-25 01:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:28:20.588949588 +0000 UTC m=+62.403534044" watchObservedRunningTime="2025-03-25 01:28:20.602834263 +0000 UTC m=+62.417418709"
Mar 25 01:28:21.579817 kubelet[2729]: E0325 01:28:21.579771 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:21.579817 kubelet[2729]: E0325 01:28:21.579810 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:21.933899 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:35366.service - OpenSSH per-connection server daemon (10.0.0.1:35366).
Mar 25 01:28:21.996749 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 35366 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:21.999370 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:22.005265 systemd-logind[1470]: New session 17 of user core.
Mar 25 01:28:22.011213 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 25 01:28:22.133772 sshd[4211]: Connection closed by 10.0.0.1 port 35366
Mar 25 01:28:22.134117 sshd-session[4209]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:22.138167 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:35366.service: Deactivated successfully.
Mar 25 01:28:22.140425 systemd[1]: session-17.scope: Deactivated successfully.
Mar 25 01:28:22.141223 systemd-logind[1470]: Session 17 logged out. Waiting for processes to exit.
Mar 25 01:28:22.142171 systemd-logind[1470]: Removed session 17.
Mar 25 01:28:22.582301 kubelet[2729]: E0325 01:28:22.582268 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:22.582724 kubelet[2729]: E0325 01:28:22.582477 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:27.148014 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:53332.service - OpenSSH per-connection server daemon (10.0.0.1:53332).
Mar 25 01:28:27.202911 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 53332 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:27.204362 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:27.208837 systemd-logind[1470]: New session 18 of user core.
Mar 25 01:28:27.220161 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 25 01:28:27.350818 sshd[4226]: Connection closed by 10.0.0.1 port 53332
Mar 25 01:28:27.351362 sshd-session[4224]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:27.366305 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:53332.service: Deactivated successfully.
Mar 25 01:28:27.368894 systemd[1]: session-18.scope: Deactivated successfully.
Mar 25 01:28:27.371608 systemd-logind[1470]: Session 18 logged out. Waiting for processes to exit.
Mar 25 01:28:27.373582 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:53334.service - OpenSSH per-connection server daemon (10.0.0.1:53334).
Mar 25 01:28:27.374781 systemd-logind[1470]: Removed session 18.
Mar 25 01:28:27.423119 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 53334 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:27.424950 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:27.430336 systemd-logind[1470]: New session 19 of user core.
Mar 25 01:28:27.444301 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 25 01:28:28.048121 sshd[4242]: Connection closed by 10.0.0.1 port 53334
Mar 25 01:28:28.048529 sshd-session[4239]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:28.060502 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:53334.service: Deactivated successfully.
Mar 25 01:28:28.063556 systemd[1]: session-19.scope: Deactivated successfully.
Mar 25 01:28:28.066325 systemd-logind[1470]: Session 19 logged out. Waiting for processes to exit.
Mar 25 01:28:28.068080 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:53346.service - OpenSSH per-connection server daemon (10.0.0.1:53346).
Mar 25 01:28:28.069170 systemd-logind[1470]: Removed session 19.
Mar 25 01:28:28.126703 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 53346 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:28.128412 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:28.133841 systemd-logind[1470]: New session 20 of user core.
Mar 25 01:28:28.143184 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 25 01:28:29.676805 sshd[4256]: Connection closed by 10.0.0.1 port 53346
Mar 25 01:28:29.677296 sshd-session[4253]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:29.695311 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:53346.service: Deactivated successfully.
Mar 25 01:28:29.697973 systemd[1]: session-20.scope: Deactivated successfully.
Mar 25 01:28:29.700186 systemd-logind[1470]: Session 20 logged out. Waiting for processes to exit.
Mar 25 01:28:29.701685 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:53348.service - OpenSSH per-connection server daemon (10.0.0.1:53348).
Mar 25 01:28:29.703256 systemd-logind[1470]: Removed session 20.
Mar 25 01:28:29.754793 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 53348 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:29.756691 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:29.762019 systemd-logind[1470]: New session 21 of user core.
Mar 25 01:28:29.776232 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 25 01:28:30.058790 sshd[4276]: Connection closed by 10.0.0.1 port 53348
Mar 25 01:28:30.059345 sshd-session[4273]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:30.074325 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:53348.service: Deactivated successfully.
Mar 25 01:28:30.076864 systemd[1]: session-21.scope: Deactivated successfully.
Mar 25 01:28:30.079941 systemd-logind[1470]: Session 21 logged out. Waiting for processes to exit.
Mar 25 01:28:30.081467 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362).
Mar 25 01:28:30.082527 systemd-logind[1470]: Removed session 21.
Mar 25 01:28:30.139658 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:30.142271 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:30.147893 systemd-logind[1470]: New session 22 of user core.
Mar 25 01:28:30.153180 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 25 01:28:30.276692 sshd[4290]: Connection closed by 10.0.0.1 port 53362
Mar 25 01:28:30.277162 sshd-session[4287]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:30.282663 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:53362.service: Deactivated successfully.
Mar 25 01:28:30.285778 systemd[1]: session-22.scope: Deactivated successfully.
Mar 25 01:28:30.286709 systemd-logind[1470]: Session 22 logged out. Waiting for processes to exit.
Mar 25 01:28:30.287814 systemd-logind[1470]: Removed session 22.
Mar 25 01:28:33.313160 kubelet[2729]: E0325 01:28:33.313111 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:35.293637 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:58432.service - OpenSSH per-connection server daemon (10.0.0.1:58432).
Mar 25 01:28:35.344441 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 58432 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:35.345700 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:35.350261 systemd-logind[1470]: New session 23 of user core.
Mar 25 01:28:35.360089 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 25 01:28:35.474874 sshd[4308]: Connection closed by 10.0.0.1 port 58432
Mar 25 01:28:35.475174 sshd-session[4306]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:35.479017 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:58432.service: Deactivated successfully.
Mar 25 01:28:35.481255 systemd[1]: session-23.scope: Deactivated successfully.
Mar 25 01:28:35.482051 systemd-logind[1470]: Session 23 logged out. Waiting for processes to exit.
Mar 25 01:28:35.482929 systemd-logind[1470]: Removed session 23.
Mar 25 01:28:40.489628 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:58434.service - OpenSSH per-connection server daemon (10.0.0.1:58434).
Mar 25 01:28:40.539300 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 58434 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:40.541107 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:40.545947 systemd-logind[1470]: New session 24 of user core.
Mar 25 01:28:40.555133 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 25 01:28:40.669088 sshd[4326]: Connection closed by 10.0.0.1 port 58434
Mar 25 01:28:40.669451 sshd-session[4324]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:40.674313 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:58434.service: Deactivated successfully.
Mar 25 01:28:40.676390 systemd[1]: session-24.scope: Deactivated successfully.
Mar 25 01:28:40.677146 systemd-logind[1470]: Session 24 logged out. Waiting for processes to exit.
Mar 25 01:28:40.678057 systemd-logind[1470]: Removed session 24.
Mar 25 01:28:45.682078 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:36416.service - OpenSSH per-connection server daemon (10.0.0.1:36416).
Mar 25 01:28:45.731799 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 36416 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:45.733406 sshd-session[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:45.738778 systemd-logind[1470]: New session 25 of user core.
Mar 25 01:28:45.744241 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 25 01:28:45.853757 sshd[4341]: Connection closed by 10.0.0.1 port 36416
Mar 25 01:28:45.854121 sshd-session[4339]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:45.858865 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:36416.service: Deactivated successfully.
Mar 25 01:28:45.861798 systemd[1]: session-25.scope: Deactivated successfully.
Mar 25 01:28:45.862764 systemd-logind[1470]: Session 25 logged out. Waiting for processes to exit.
Mar 25 01:28:45.863722 systemd-logind[1470]: Removed session 25.
Mar 25 01:28:50.870466 systemd[1]: Started sshd@25-10.0.0.48:22-10.0.0.1:36424.service - OpenSSH per-connection server daemon (10.0.0.1:36424).
Mar 25 01:28:50.911381 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 36424 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:50.913148 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:50.917463 systemd-logind[1470]: New session 26 of user core.
Mar 25 01:28:50.926140 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 25 01:28:51.030794 sshd[4356]: Connection closed by 10.0.0.1 port 36424
Mar 25 01:28:51.031162 sshd-session[4354]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:51.041892 systemd[1]: sshd@25-10.0.0.48:22-10.0.0.1:36424.service: Deactivated successfully.
Mar 25 01:28:51.043721 systemd[1]: session-26.scope: Deactivated successfully.
Mar 25 01:28:51.045338 systemd-logind[1470]: Session 26 logged out. Waiting for processes to exit.
Mar 25 01:28:51.046808 systemd[1]: Started sshd@26-10.0.0.48:22-10.0.0.1:36432.service - OpenSSH per-connection server daemon (10.0.0.1:36432).
Mar 25 01:28:51.047826 systemd-logind[1470]: Removed session 26.
Mar 25 01:28:51.093328 sshd[4369]: Accepted publickey for core from 10.0.0.1 port 36432 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:51.095121 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:51.100102 systemd-logind[1470]: New session 27 of user core.
Mar 25 01:28:51.110143 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 25 01:28:51.313708 kubelet[2729]: E0325 01:28:51.313671 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:52.471367 containerd[1485]: time="2025-03-25T01:28:52.471308731Z" level=info msg="StopContainer for \"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" with timeout 30 (s)"
Mar 25 01:28:52.472578 containerd[1485]: time="2025-03-25T01:28:52.472539696Z" level=info msg="Stop container \"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" with signal terminated"
Mar 25 01:28:52.490839 systemd[1]: cri-containerd-e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23.scope: Deactivated successfully.
Mar 25 01:28:52.498915 containerd[1485]: time="2025-03-25T01:28:52.493202821Z" level=info msg="received exit event container_id:\"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" id:\"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" pid:3497 exited_at:{seconds:1742866132 nanos:492270666}"
Mar 25 01:28:52.498915 containerd[1485]: time="2025-03-25T01:28:52.493544171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" id:\"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" pid:3497 exited_at:{seconds:1742866132 nanos:492270666}"
Mar 25 01:28:52.515446 containerd[1485]: time="2025-03-25T01:28:52.514945263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" id:\"d04e52925d47fe9b3cf64fa53577dbdceb485a9dc6266fcfc6cfb0e667174adb\" pid:4399 exited_at:{seconds:1742866132 nanos:514267833}"
Mar 25 01:28:52.517413 containerd[1485]: time="2025-03-25T01:28:52.517383908Z" level=info msg="StopContainer for \"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" with timeout 2 (s)"
Mar 25 01:28:52.517749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23-rootfs.mount: Deactivated successfully.
Mar 25 01:28:52.518318 containerd[1485]: time="2025-03-25T01:28:52.518169184Z" level=info msg="Stop container \"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" with signal terminated"
Mar 25 01:28:52.521683 containerd[1485]: time="2025-03-25T01:28:52.521635836Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 25 01:28:52.525523 systemd-networkd[1394]: lxc_health: Link DOWN
Mar 25 01:28:52.525530 systemd-networkd[1394]: lxc_health: Lost carrier
Mar 25 01:28:52.535198 containerd[1485]: time="2025-03-25T01:28:52.535163241Z" level=info msg="StopContainer for \"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" returns successfully"
Mar 25 01:28:52.535788 containerd[1485]: time="2025-03-25T01:28:52.535755989Z" level=info msg="StopPodSandbox for \"07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8\""
Mar 25 01:28:52.543761 systemd[1]: cri-containerd-e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637.scope: Deactivated successfully.
Mar 25 01:28:52.544482 containerd[1485]: time="2025-03-25T01:28:52.544378893Z" level=info msg="received exit event container_id:\"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" id:\"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" pid:3419 exited_at:{seconds:1742866132 nanos:544184643}"
Mar 25 01:28:52.544435 systemd[1]: cri-containerd-e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637.scope: Consumed 7.642s CPU time, 123.9M memory peak, 220K read from disk, 13.3M written to disk.
Mar 25 01:28:52.544861 containerd[1485]: time="2025-03-25T01:28:52.544829321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" id:\"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" pid:3419 exited_at:{seconds:1742866132 nanos:544184643}"
Mar 25 01:28:52.547669 containerd[1485]: time="2025-03-25T01:28:52.547591161Z" level=info msg="Container to stop \"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:28:52.555835 systemd[1]: cri-containerd-07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8.scope: Deactivated successfully.
Mar 25 01:28:52.561774 containerd[1485]: time="2025-03-25T01:28:52.561736824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8\" id:\"07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8\" pid:2921 exit_status:137 exited_at:{seconds:1742866132 nanos:561299771}"
Mar 25 01:28:52.567618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637-rootfs.mount: Deactivated successfully.
Mar 25 01:28:52.589211 containerd[1485]: time="2025-03-25T01:28:52.589169101Z" level=info msg="StopContainer for \"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" returns successfully"
Mar 25 01:28:52.590306 containerd[1485]: time="2025-03-25T01:28:52.590280890Z" level=info msg="StopPodSandbox for \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\""
Mar 25 01:28:52.590574 containerd[1485]: time="2025-03-25T01:28:52.590537328Z" level=info msg="Container to stop \"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:28:52.590574 containerd[1485]: time="2025-03-25T01:28:52.590558058Z" level=info msg="Container to stop \"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:28:52.590732 containerd[1485]: time="2025-03-25T01:28:52.590567074Z" level=info msg="Container to stop \"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:28:52.590732 containerd[1485]: time="2025-03-25T01:28:52.590607441Z" level=info msg="Container to stop \"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:28:52.590732 containerd[1485]: time="2025-03-25T01:28:52.590616358Z" level=info msg="Container to stop \"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 25 01:28:52.590859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8-rootfs.mount: Deactivated successfully.
Mar 25 01:28:52.592914 containerd[1485]: time="2025-03-25T01:28:52.592449369Z" level=info msg="shim disconnected" id=07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8 namespace=k8s.io
Mar 25 01:28:52.592914 containerd[1485]: time="2025-03-25T01:28:52.592482272Z" level=warning msg="cleaning up after shim disconnected" id=07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8 namespace=k8s.io
Mar 25 01:28:52.592914 containerd[1485]: time="2025-03-25T01:28:52.592490538Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:28:52.597612 systemd[1]: cri-containerd-df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80.scope: Deactivated successfully.
Mar 25 01:28:52.618759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80-rootfs.mount: Deactivated successfully.
Mar 25 01:28:52.621432 containerd[1485]: time="2025-03-25T01:28:52.621384971Z" level=info msg="shim disconnected" id=df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80 namespace=k8s.io
Mar 25 01:28:52.621432 containerd[1485]: time="2025-03-25T01:28:52.621426039Z" level=warning msg="cleaning up after shim disconnected" id=df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80 namespace=k8s.io
Mar 25 01:28:52.621610 containerd[1485]: time="2025-03-25T01:28:52.621558271Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:28:52.629221 containerd[1485]: time="2025-03-25T01:28:52.629161052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" id:\"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" pid:2929 exit_status:137 exited_at:{seconds:1742866132 nanos:597866066}"
Mar 25 01:28:52.631641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8-shm.mount: Deactivated successfully.
Mar 25 01:28:52.631791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80-shm.mount: Deactivated successfully.
Mar 25 01:28:52.641873 containerd[1485]: time="2025-03-25T01:28:52.641821696Z" level=info msg="TearDown network for sandbox \"07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8\" successfully"
Mar 25 01:28:52.641873 containerd[1485]: time="2025-03-25T01:28:52.641867643Z" level=info msg="StopPodSandbox for \"07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8\" returns successfully"
Mar 25 01:28:52.643723 containerd[1485]: time="2025-03-25T01:28:52.643191434Z" level=info msg="TearDown network for sandbox \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" successfully"
Mar 25 01:28:52.643723 containerd[1485]: time="2025-03-25T01:28:52.643239716Z" level=info msg="StopPodSandbox for \"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" returns successfully"
Mar 25 01:28:52.652061 containerd[1485]: time="2025-03-25T01:28:52.651051915Z" level=info msg="received exit event sandbox_id:\"df5a202108c1e94062c8d09822ad8d49c22b6664143a527cfc5ee6fbe2b9db80\" exit_status:137 exited_at:{seconds:1742866132 nanos:597866066}"
Mar 25 01:28:52.652061 containerd[1485]: time="2025-03-25T01:28:52.651147488Z" level=info msg="received exit event sandbox_id:\"07f38c045d8ab4ab7ffebfe4758dc21109d454bc8b98d23271f30245468f1ea8\" exit_status:137 exited_at:{seconds:1742866132 nanos:561299771}"
Mar 25 01:28:52.775028 kubelet[2729]: I0325 01:28:52.774971 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-host-proc-sys-net\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775562 kubelet[2729]: I0325 01:28:52.775040 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-hubble-tls\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775562 kubelet[2729]: I0325 01:28:52.775055 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-xtables-lock\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775562 kubelet[2729]: I0325 01:28:52.775074 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/537e8a1c-01a9-422a-ae6d-79803a377e10-cilium-config-path\") pod \"537e8a1c-01a9-422a-ae6d-79803a377e10\" (UID: \"537e8a1c-01a9-422a-ae6d-79803a377e10\") "
Mar 25 01:28:52.775562 kubelet[2729]: I0325 01:28:52.775090 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-hostproc\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775562 kubelet[2729]: I0325 01:28:52.775108 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-run\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775562 kubelet[2729]: I0325 01:28:52.775123 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cni-path\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775771 kubelet[2729]: I0325 01:28:52.775136 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-bpf-maps\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775771 kubelet[2729]: I0325 01:28:52.775155 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-clustermesh-secrets\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775771 kubelet[2729]: I0325 01:28:52.775131 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:28:52.775771 kubelet[2729]: I0325 01:28:52.775172 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-config-path\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775771 kubelet[2729]: I0325 01:28:52.775246 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-host-proc-sys-kernel\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775771 kubelet[2729]: I0325 01:28:52.775264 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-lib-modules\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775978 kubelet[2729]: I0325 01:28:52.775284 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk7lp\" (UniqueName: \"kubernetes.io/projected/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-kube-api-access-bk7lp\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775978 kubelet[2729]: I0325 01:28:52.775297 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-cgroup\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775978 kubelet[2729]: I0325 01:28:52.775311 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mg4s\" (UniqueName: \"kubernetes.io/projected/537e8a1c-01a9-422a-ae6d-79803a377e10-kube-api-access-8mg4s\") pod \"537e8a1c-01a9-422a-ae6d-79803a377e10\" (UID: \"537e8a1c-01a9-422a-ae6d-79803a377e10\") "
Mar 25 01:28:52.775978 kubelet[2729]: I0325 01:28:52.775325 2729 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-etc-cni-netd\") pod \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\" (UID: \"1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae\") "
Mar 25 01:28:52.775978 kubelet[2729]: I0325 01:28:52.775387 2729 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 25 01:28:52.775978 kubelet[2729]: I0325 01:28:52.775414 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:28:52.776292 kubelet[2729]: I0325 01:28:52.775433 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:28:52.776292 kubelet[2729]: I0325 01:28:52.775453 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:28:52.778650 kubelet[2729]: I0325 01:28:52.778626 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:28:52.779009 kubelet[2729]: I0325 01:28:52.778654 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-hostproc" (OuterVolumeSpecName: "hostproc") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:28:52.779009 kubelet[2729]: I0325 01:28:52.778636 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cni-path" (OuterVolumeSpecName: "cni-path") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:28:52.779009 kubelet[2729]: I0325 01:28:52.778779 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:28:52.779374 kubelet[2729]: I0325 01:28:52.779333 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 25 01:28:52.779455 kubelet[2729]: I0325 01:28:52.779401 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/537e8a1c-01a9-422a-ae6d-79803a377e10-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "537e8a1c-01a9-422a-ae6d-79803a377e10" (UID: "537e8a1c-01a9-422a-ae6d-79803a377e10"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 25 01:28:52.779508 kubelet[2729]: I0325 01:28:52.779483 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "xtables-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:28:52.780377 kubelet[2729]: I0325 01:28:52.780353 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:28:52.780464 kubelet[2729]: I0325 01:28:52.780405 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-kube-api-access-bk7lp" (OuterVolumeSpecName: "kube-api-access-bk7lp") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "kube-api-access-bk7lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:28:52.781172 kubelet[2729]: I0325 01:28:52.781130 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:28:52.781935 kubelet[2729]: I0325 01:28:52.781891 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/537e8a1c-01a9-422a-ae6d-79803a377e10-kube-api-access-8mg4s" (OuterVolumeSpecName: "kube-api-access-8mg4s") pod "537e8a1c-01a9-422a-ae6d-79803a377e10" (UID: "537e8a1c-01a9-422a-ae6d-79803a377e10"). InnerVolumeSpecName "kube-api-access-8mg4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:28:52.782291 kubelet[2729]: I0325 01:28:52.782254 2729 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" (UID: "1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 25 01:28:52.875730 kubelet[2729]: I0325 01:28:52.875682 2729 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875730 kubelet[2729]: I0325 01:28:52.875716 2729 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875730 kubelet[2729]: I0325 01:28:52.875729 2729 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875730 kubelet[2729]: I0325 01:28:52.875738 2729 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875965 kubelet[2729]: I0325 01:28:52.875749 2729 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8mg4s\" (UniqueName: \"kubernetes.io/projected/537e8a1c-01a9-422a-ae6d-79803a377e10-kube-api-access-8mg4s\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875965 kubelet[2729]: I0325 01:28:52.875759 2729 reconciler_common.go:289] "Volume detached for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875965 kubelet[2729]: I0325 01:28:52.875768 2729 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bk7lp\" (UniqueName: \"kubernetes.io/projected/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-kube-api-access-bk7lp\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875965 kubelet[2729]: I0325 01:28:52.875777 2729 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875965 kubelet[2729]: I0325 01:28:52.875786 2729 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875965 kubelet[2729]: I0325 01:28:52.875795 2729 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/537e8a1c-01a9-422a-ae6d-79803a377e10-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875965 kubelet[2729]: I0325 01:28:52.875804 2729 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.875965 kubelet[2729]: I0325 01:28:52.875814 2729 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.876202 kubelet[2729]: I0325 01:28:52.875823 2729 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-cni-path\") on node 
\"localhost\" DevicePath \"\"" Mar 25 01:28:52.876202 kubelet[2729]: I0325 01:28:52.875832 2729 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:52.876202 kubelet[2729]: I0325 01:28:52.875841 2729 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 25 01:28:53.388552 kubelet[2729]: E0325 01:28:53.388504 2729 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 01:28:53.518118 systemd[1]: var-lib-kubelet-pods-1a901c70\x2d63cd\x2d4b3b\x2d84d7\x2dd7c5fb2b17ae-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbk7lp.mount: Deactivated successfully. Mar 25 01:28:53.518273 systemd[1]: var-lib-kubelet-pods-537e8a1c\x2d01a9\x2d422a\x2dae6d\x2d79803a377e10-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8mg4s.mount: Deactivated successfully. Mar 25 01:28:53.518414 systemd[1]: var-lib-kubelet-pods-1a901c70\x2d63cd\x2d4b3b\x2d84d7\x2dd7c5fb2b17ae-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 25 01:28:53.518524 systemd[1]: var-lib-kubelet-pods-1a901c70\x2d63cd\x2d4b3b\x2d84d7\x2dd7c5fb2b17ae-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 25 01:28:53.659815 kubelet[2729]: I0325 01:28:53.659692 2729 scope.go:117] "RemoveContainer" containerID="e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23"
Mar 25 01:28:53.662579 containerd[1485]: time="2025-03-25T01:28:53.662546678Z" level=info msg="RemoveContainer for \"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\""
Mar 25 01:28:53.665572 systemd[1]: Removed slice kubepods-besteffort-pod537e8a1c_01a9_422a_ae6d_79803a377e10.slice - libcontainer container kubepods-besteffort-pod537e8a1c_01a9_422a_ae6d_79803a377e10.slice.
Mar 25 01:28:53.672812 systemd[1]: Removed slice kubepods-burstable-pod1a901c70_63cd_4b3b_84d7_d7c5fb2b17ae.slice - libcontainer container kubepods-burstable-pod1a901c70_63cd_4b3b_84d7_d7c5fb2b17ae.slice.
Mar 25 01:28:53.672955 systemd[1]: kubepods-burstable-pod1a901c70_63cd_4b3b_84d7_d7c5fb2b17ae.slice: Consumed 7.766s CPU time, 124.2M memory peak, 236K read from disk, 13.3M written to disk.
Mar 25 01:28:53.690226 containerd[1485]: time="2025-03-25T01:28:53.690171385Z" level=info msg="RemoveContainer for \"e1c73d6232c0365f3af7029eb2bacc12a5843f3a044d6f12400453d856757c23\" returns successfully"
Mar 25 01:28:53.690572 kubelet[2729]: I0325 01:28:53.690498 2729 scope.go:117] "RemoveContainer" containerID="e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637"
Mar 25 01:28:53.693455 containerd[1485]: time="2025-03-25T01:28:53.693383626Z" level=info msg="RemoveContainer for \"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\""
Mar 25 01:28:53.698168 containerd[1485]: time="2025-03-25T01:28:53.698120018Z" level=info msg="RemoveContainer for \"e5beae298b21b4963b824727298489541e31b53181534a0572dc349210a48637\" returns successfully"
Mar 25 01:28:53.698339 kubelet[2729]: I0325 01:28:53.698299 2729 scope.go:117] "RemoveContainer" containerID="806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e"
Mar 25 01:28:53.699954 containerd[1485]: time="2025-03-25T01:28:53.699907775Z" level=info msg="RemoveContainer for \"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\""
Mar 25 01:28:53.726170 containerd[1485]: time="2025-03-25T01:28:53.726107387Z" level=info msg="RemoveContainer for \"806d1600ef5ac96d10de798a72cd703f6006829b0bcb730d220646d3d02b608e\" returns successfully"
Mar 25 01:28:53.726505 kubelet[2729]: I0325 01:28:53.726458 2729 scope.go:117] "RemoveContainer" containerID="08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227"
Mar 25 01:28:53.728834 containerd[1485]: time="2025-03-25T01:28:53.728796470Z" level=info msg="RemoveContainer for \"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\""
Mar 25 01:28:53.733035 containerd[1485]: time="2025-03-25T01:28:53.733007262Z" level=info msg="RemoveContainer for \"08f46adf9a6ade750038d390dafdd482afbe34f2c8d1acb53ab33c71959d1227\" returns successfully"
Mar 25 01:28:53.733198 kubelet[2729]: I0325 01:28:53.733172 2729 scope.go:117] "RemoveContainer" containerID="0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec"
Mar 25 01:28:53.734704 containerd[1485]: time="2025-03-25T01:28:53.734668156Z" level=info msg="RemoveContainer for \"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\""
Mar 25 01:28:53.740193 containerd[1485]: time="2025-03-25T01:28:53.740153807Z" level=info msg="RemoveContainer for \"0b7cce54b6f8f3c70cfd63f918674e812f31e8a353dd298bb3e402354df9c8ec\" returns successfully"
Mar 25 01:28:53.740435 kubelet[2729]: I0325 01:28:53.740405 2729 scope.go:117] "RemoveContainer" containerID="62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6"
Mar 25 01:28:53.742259 containerd[1485]: time="2025-03-25T01:28:53.742217389Z" level=info msg="RemoveContainer for \"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\""
Mar 25 01:28:53.746289 containerd[1485]: time="2025-03-25T01:28:53.746247646Z" level=info msg="RemoveContainer for \"62a6c51cd221747f50bcf52ce4f0dd3f79efeb7d41ac53069e27ea0e5a5a30d6\" returns successfully"
Mar 25 01:28:54.316292 kubelet[2729]: I0325 01:28:54.316247 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" path="/var/lib/kubelet/pods/1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae/volumes"
Mar 25 01:28:54.317176 kubelet[2729]: I0325 01:28:54.317147 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="537e8a1c-01a9-422a-ae6d-79803a377e10" path="/var/lib/kubelet/pods/537e8a1c-01a9-422a-ae6d-79803a377e10/volumes"
Mar 25 01:28:54.432200 sshd[4372]: Connection closed by 10.0.0.1 port 36432
Mar 25 01:28:54.432837 sshd-session[4369]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:54.450553 systemd[1]: sshd@26-10.0.0.48:22-10.0.0.1:36432.service: Deactivated successfully.
Mar 25 01:28:54.453812 systemd[1]: session-27.scope: Deactivated successfully.
Mar 25 01:28:54.455843 systemd-logind[1470]: Session 27 logged out. Waiting for processes to exit.
Mar 25 01:28:54.458300 systemd[1]: Started sshd@27-10.0.0.48:22-10.0.0.1:36434.service - OpenSSH per-connection server daemon (10.0.0.1:36434).
Mar 25 01:28:54.459770 systemd-logind[1470]: Removed session 27.
Mar 25 01:28:54.517041 sshd[4520]: Accepted publickey for core from 10.0.0.1 port 36434 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:54.518667 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:54.523495 systemd-logind[1470]: New session 28 of user core.
Mar 25 01:28:54.534266 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 25 01:28:55.087193 sshd[4524]: Connection closed by 10.0.0.1 port 36434
Mar 25 01:28:55.088897 sshd-session[4520]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:55.104623 systemd[1]: sshd@27-10.0.0.48:22-10.0.0.1:36434.service: Deactivated successfully.
Mar 25 01:28:55.107781 kubelet[2729]: I0325 01:28:55.106604 2729 topology_manager.go:215] "Topology Admit Handler" podUID="df6d54e7-c2ff-4b31-81e3-86c0670253ac" podNamespace="kube-system" podName="cilium-k6scn"
Mar 25 01:28:55.107781 kubelet[2729]: E0325 01:28:55.106710 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" containerName="apply-sysctl-overwrites"
Mar 25 01:28:55.107781 kubelet[2729]: E0325 01:28:55.106719 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" containerName="mount-bpf-fs"
Mar 25 01:28:55.107781 kubelet[2729]: E0325 01:28:55.106725 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" containerName="cilium-agent"
Mar 25 01:28:55.107781 kubelet[2729]: E0325 01:28:55.106732 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" containerName="mount-cgroup"
Mar 25 01:28:55.107781 kubelet[2729]: E0325 01:28:55.106738 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" containerName="clean-cilium-state"
Mar 25 01:28:55.107781 kubelet[2729]: E0325 01:28:55.106744 2729 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="537e8a1c-01a9-422a-ae6d-79803a377e10" containerName="cilium-operator"
Mar 25 01:28:55.107781 kubelet[2729]: I0325 01:28:55.106763 2729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a901c70-63cd-4b3b-84d7-d7c5fb2b17ae" containerName="cilium-agent"
Mar 25 01:28:55.107781 kubelet[2729]: I0325 01:28:55.106773 2729 memory_manager.go:354] "RemoveStaleState removing state" podUID="537e8a1c-01a9-422a-ae6d-79803a377e10" containerName="cilium-operator"
Mar 25 01:28:55.111564 systemd[1]: session-28.scope: Deactivated successfully.
Mar 25 01:28:55.112756 systemd-logind[1470]: Session 28 logged out. Waiting for processes to exit.
Mar 25 01:28:55.117843 systemd[1]: Started sshd@28-10.0.0.48:22-10.0.0.1:42308.service - OpenSSH per-connection server daemon (10.0.0.1:42308).
Mar 25 01:28:55.120463 systemd-logind[1470]: Removed session 28.
Mar 25 01:28:55.135430 systemd[1]: Created slice kubepods-burstable-poddf6d54e7_c2ff_4b31_81e3_86c0670253ac.slice - libcontainer container kubepods-burstable-poddf6d54e7_c2ff_4b31_81e3_86c0670253ac.slice.
Mar 25 01:28:55.170058 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 42308 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:55.171601 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:55.176019 systemd-logind[1470]: New session 29 of user core.
Mar 25 01:28:55.187203 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 25 01:28:55.238464 sshd[4538]: Connection closed by 10.0.0.1 port 42308
Mar 25 01:28:55.238887 sshd-session[4535]: pam_unix(sshd:session): session closed for user core
Mar 25 01:28:55.254166 systemd[1]: sshd@28-10.0.0.48:22-10.0.0.1:42308.service: Deactivated successfully.
Mar 25 01:28:55.256780 systemd[1]: session-29.scope: Deactivated successfully.
Mar 25 01:28:55.259377 systemd-logind[1470]: Session 29 logged out. Waiting for processes to exit.
Mar 25 01:28:55.261145 systemd[1]: Started sshd@29-10.0.0.48:22-10.0.0.1:42310.service - OpenSSH per-connection server daemon (10.0.0.1:42310).
Mar 25 01:28:55.262238 systemd-logind[1470]: Removed session 29.
Mar 25 01:28:55.291155 kubelet[2729]: I0325 01:28:55.291092 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tz4z\" (UniqueName: \"kubernetes.io/projected/df6d54e7-c2ff-4b31-81e3-86c0670253ac-kube-api-access-4tz4z\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291155 kubelet[2729]: I0325 01:28:55.291154 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-cilium-cgroup\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291393 kubelet[2729]: I0325 01:28:55.291184 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df6d54e7-c2ff-4b31-81e3-86c0670253ac-cilium-config-path\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291393 kubelet[2729]: I0325 01:28:55.291205 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-cni-path\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291393 kubelet[2729]: I0325 01:28:55.291232 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-cilium-run\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291393 kubelet[2729]: I0325 01:28:55.291251 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-host-proc-sys-net\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291393 kubelet[2729]: I0325 01:28:55.291273 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-lib-modules\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291393 kubelet[2729]: I0325 01:28:55.291292 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-xtables-lock\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291643 kubelet[2729]: I0325 01:28:55.291312 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-bpf-maps\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291643 kubelet[2729]: I0325 01:28:55.291332 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-etc-cni-netd\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291643 kubelet[2729]: I0325 01:28:55.291364 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-hostproc\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291643 kubelet[2729]: I0325 01:28:55.291387 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df6d54e7-c2ff-4b31-81e3-86c0670253ac-clustermesh-secrets\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291643 kubelet[2729]: I0325 01:28:55.291411 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df6d54e7-c2ff-4b31-81e3-86c0670253ac-host-proc-sys-kernel\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291643 kubelet[2729]: I0325 01:28:55.291438 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/df6d54e7-c2ff-4b31-81e3-86c0670253ac-cilium-ipsec-secrets\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.291909 kubelet[2729]: I0325 01:28:55.291465 2729 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df6d54e7-c2ff-4b31-81e3-86c0670253ac-hubble-tls\") pod \"cilium-k6scn\" (UID: \"df6d54e7-c2ff-4b31-81e3-86c0670253ac\") " pod="kube-system/cilium-k6scn"
Mar 25 01:28:55.313521 sshd[4544]: Accepted publickey for core from 10.0.0.1 port 42310 ssh2: RSA SHA256:4f8HJIPOZgNv5AQupi3isO02sy+ZIziCurPc4FU7/A0
Mar 25 01:28:55.315087 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:28:55.319653 systemd-logind[1470]: New session 30 of user core.
Mar 25 01:28:55.329141 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 25 01:28:55.438909 kubelet[2729]: E0325 01:28:55.438681 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:55.440057 containerd[1485]: time="2025-03-25T01:28:55.439546763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k6scn,Uid:df6d54e7-c2ff-4b31-81e3-86c0670253ac,Namespace:kube-system,Attempt:0,}"
Mar 25 01:28:55.462400 containerd[1485]: time="2025-03-25T01:28:55.462342142Z" level=info msg="connecting to shim a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991" address="unix:///run/containerd/s/d1874752756ee0d8c38a6e61941124e9698cd72facee7a702c393c3b89395a7c" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:28:55.493220 systemd[1]: Started cri-containerd-a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991.scope - libcontainer container a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991.
Mar 25 01:28:55.520842 containerd[1485]: time="2025-03-25T01:28:55.520791826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k6scn,Uid:df6d54e7-c2ff-4b31-81e3-86c0670253ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\""
Mar 25 01:28:55.521599 kubelet[2729]: E0325 01:28:55.521520 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:55.524477 containerd[1485]: time="2025-03-25T01:28:55.524441844Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 25 01:28:55.533011 containerd[1485]: time="2025-03-25T01:28:55.532946804Z" level=info msg="Container 0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:28:55.541132 containerd[1485]: time="2025-03-25T01:28:55.541091478Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299\""
Mar 25 01:28:55.541735 containerd[1485]: time="2025-03-25T01:28:55.541687343Z" level=info msg="StartContainer for \"0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299\""
Mar 25 01:28:55.542771 containerd[1485]: time="2025-03-25T01:28:55.542727727Z" level=info msg="connecting to shim 0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299" address="unix:///run/containerd/s/d1874752756ee0d8c38a6e61941124e9698cd72facee7a702c393c3b89395a7c" protocol=ttrpc version=3
Mar 25 01:28:55.568206 systemd[1]: Started cri-containerd-0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299.scope - libcontainer container 0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299.
Mar 25 01:28:55.602391 containerd[1485]: time="2025-03-25T01:28:55.602341100Z" level=info msg="StartContainer for \"0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299\" returns successfully"
Mar 25 01:28:55.611333 systemd[1]: cri-containerd-0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299.scope: Deactivated successfully.
Mar 25 01:28:55.613976 containerd[1485]: time="2025-03-25T01:28:55.613932051Z" level=info msg="received exit event container_id:\"0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299\" id:\"0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299\" pid:4615 exited_at:{seconds:1742866135 nanos:613571314}"
Mar 25 01:28:55.614162 containerd[1485]: time="2025-03-25T01:28:55.614135109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299\" id:\"0278af71c582c59e3bc104372eca103a37420cba2ca4bf8ad92c900fa831e299\" pid:4615 exited_at:{seconds:1742866135 nanos:613571314}"
Mar 25 01:28:55.673838 kubelet[2729]: E0325 01:28:55.673663 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 25 01:28:55.676157 containerd[1485]: time="2025-03-25T01:28:55.676108811Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 25 01:28:55.683508 containerd[1485]: time="2025-03-25T01:28:55.683456015Z" level=info msg="Container a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:28:55.690510 containerd[1485]: time="2025-03-25T01:28:55.690330638Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d\""
Mar 25 01:28:55.691220 containerd[1485]: time="2025-03-25T01:28:55.691025001Z" level=info msg="StartContainer for \"a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d\""
Mar 25 01:28:55.692428 containerd[1485]: time="2025-03-25T01:28:55.692392678Z" level=info msg="connecting to shim a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d" address="unix:///run/containerd/s/d1874752756ee0d8c38a6e61941124e9698cd72facee7a702c393c3b89395a7c" protocol=ttrpc version=3
Mar 25 01:28:55.717195 systemd[1]: Started cri-containerd-a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d.scope - libcontainer container a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d.
Mar 25 01:28:55.751864 containerd[1485]: time="2025-03-25T01:28:55.751793366Z" level=info msg="StartContainer for \"a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d\" returns successfully"
Mar 25 01:28:55.759225 systemd[1]: cri-containerd-a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d.scope: Deactivated successfully.
Mar 25 01:28:55.759595 containerd[1485]: time="2025-03-25T01:28:55.759475919Z" level=info msg="received exit event container_id:\"a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d\" id:\"a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d\" pid:4659 exited_at:{seconds:1742866135 nanos:759241502}" Mar 25 01:28:55.759595 containerd[1485]: time="2025-03-25T01:28:55.759570459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d\" id:\"a694deff9f4f9035ff20d1c348aa527c8b9da3779333f9bf40b17e5dc2f0c85d\" pid:4659 exited_at:{seconds:1742866135 nanos:759241502}" Mar 25 01:28:56.313339 kubelet[2729]: E0325 01:28:56.313296 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:56.678547 kubelet[2729]: E0325 01:28:56.678416 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:56.686648 containerd[1485]: time="2025-03-25T01:28:56.686378651Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:28:56.700636 containerd[1485]: time="2025-03-25T01:28:56.700084060Z" level=info msg="Container 022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:28:56.720546 containerd[1485]: time="2025-03-25T01:28:56.720502186Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9\"" Mar 25 01:28:56.721326 containerd[1485]: 
time="2025-03-25T01:28:56.721253028Z" level=info msg="StartContainer for \"022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9\"" Mar 25 01:28:56.722831 containerd[1485]: time="2025-03-25T01:28:56.722799275Z" level=info msg="connecting to shim 022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9" address="unix:///run/containerd/s/d1874752756ee0d8c38a6e61941124e9698cd72facee7a702c393c3b89395a7c" protocol=ttrpc version=3 Mar 25 01:28:56.749163 systemd[1]: Started cri-containerd-022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9.scope - libcontainer container 022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9. Mar 25 01:28:56.797277 systemd[1]: cri-containerd-022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9.scope: Deactivated successfully. Mar 25 01:28:56.798472 containerd[1485]: time="2025-03-25T01:28:56.798426984Z" level=info msg="received exit event container_id:\"022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9\" id:\"022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9\" pid:4704 exited_at:{seconds:1742866136 nanos:798168001}" Mar 25 01:28:56.798589 containerd[1485]: time="2025-03-25T01:28:56.798453634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9\" id:\"022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9\" pid:4704 exited_at:{seconds:1742866136 nanos:798168001}" Mar 25 01:28:56.799670 containerd[1485]: time="2025-03-25T01:28:56.799650005Z" level=info msg="StartContainer for \"022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9\" returns successfully" Mar 25 01:28:56.821485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-022643f86fc031bb9166a5acaab3f48e076f24a47e5862a1844f9c7b0120b1a9-rootfs.mount: Deactivated successfully. 
Mar 25 01:28:57.314205 kubelet[2729]: E0325 01:28:57.314151 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:57.683177 kubelet[2729]: E0325 01:28:57.683059 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:57.685632 containerd[1485]: time="2025-03-25T01:28:57.685407636Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:28:57.694552 containerd[1485]: time="2025-03-25T01:28:57.694490936Z" level=info msg="Container 46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:28:57.707201 containerd[1485]: time="2025-03-25T01:28:57.707144204Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4\"" Mar 25 01:28:57.708178 containerd[1485]: time="2025-03-25T01:28:57.707747995Z" level=info msg="StartContainer for \"46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4\"" Mar 25 01:28:57.709084 containerd[1485]: time="2025-03-25T01:28:57.708999541Z" level=info msg="connecting to shim 46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4" address="unix:///run/containerd/s/d1874752756ee0d8c38a6e61941124e9698cd72facee7a702c393c3b89395a7c" protocol=ttrpc version=3 Mar 25 01:28:57.732151 systemd[1]: Started cri-containerd-46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4.scope - libcontainer container 
46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4. Mar 25 01:28:57.761102 systemd[1]: cri-containerd-46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4.scope: Deactivated successfully. Mar 25 01:28:57.761585 containerd[1485]: time="2025-03-25T01:28:57.761526412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4\" id:\"46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4\" pid:4743 exited_at:{seconds:1742866137 nanos:761272488}" Mar 25 01:28:57.763055 containerd[1485]: time="2025-03-25T01:28:57.763027305Z" level=info msg="received exit event container_id:\"46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4\" id:\"46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4\" pid:4743 exited_at:{seconds:1742866137 nanos:761272488}" Mar 25 01:28:57.771006 containerd[1485]: time="2025-03-25T01:28:57.770950573Z" level=info msg="StartContainer for \"46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4\" returns successfully" Mar 25 01:28:57.783667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46555938212de0f6809638185740bb9943025c49fc1f5f9a4a8d187788a343e4-rootfs.mount: Deactivated successfully. 
Mar 25 01:28:58.390004 kubelet[2729]: E0325 01:28:58.389900 2729 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 01:28:58.690155 kubelet[2729]: E0325 01:28:58.689815 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:28:58.692878 containerd[1485]: time="2025-03-25T01:28:58.692832180Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:28:58.775118 containerd[1485]: time="2025-03-25T01:28:58.775062634Z" level=info msg="Container c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:28:58.799565 containerd[1485]: time="2025-03-25T01:28:58.799499401Z" level=info msg="CreateContainer within sandbox \"a4718d4b4b027eb1f290eba40d966a242c0907cfa57df16c5d95508225ea4991\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247\"" Mar 25 01:28:58.800233 containerd[1485]: time="2025-03-25T01:28:58.800192463Z" level=info msg="StartContainer for \"c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247\"" Mar 25 01:28:58.801157 containerd[1485]: time="2025-03-25T01:28:58.801122696Z" level=info msg="connecting to shim c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247" address="unix:///run/containerd/s/d1874752756ee0d8c38a6e61941124e9698cd72facee7a702c393c3b89395a7c" protocol=ttrpc version=3 Mar 25 01:28:58.822226 systemd[1]: Started cri-containerd-c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247.scope - libcontainer container 
c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247. Mar 25 01:28:58.868419 containerd[1485]: time="2025-03-25T01:28:58.868347631Z" level=info msg="StartContainer for \"c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247\" returns successfully" Mar 25 01:28:58.937948 containerd[1485]: time="2025-03-25T01:28:58.937881269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247\" id:\"bd4a5d6997555768c5e6373ae48db54de5eb8afa7f02ba3feb56ca54cc36024d\" pid:4811 exited_at:{seconds:1742866138 nanos:937386787}" Mar 25 01:28:59.305014 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 25 01:28:59.696957 kubelet[2729]: E0325 01:28:59.696828 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:29:00.308509 kubelet[2729]: I0325 01:29:00.308452 2729 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-25T01:29:00Z","lastTransitionTime":"2025-03-25T01:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 25 01:29:01.440082 kubelet[2729]: E0325 01:29:01.439961 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:29:01.888061 containerd[1485]: time="2025-03-25T01:29:01.887960459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247\" id:\"d2f0e658ffcf97a962ba3e22d632fbe78dc559c318cf119227aa45116912d3f2\" pid:5194 exit_status:1 exited_at:{seconds:1742866141 nanos:887218572}" Mar 25 
01:29:02.513070 systemd-networkd[1394]: lxc_health: Link UP Mar 25 01:29:02.526096 systemd-networkd[1394]: lxc_health: Gained carrier Mar 25 01:29:03.441306 kubelet[2729]: E0325 01:29:03.441271 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:29:03.555139 kubelet[2729]: I0325 01:29:03.555058 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k6scn" podStartSLOduration=8.554640511 podStartE2EDuration="8.554640511s" podCreationTimestamp="2025-03-25 01:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:28:59.828406488 +0000 UTC m=+101.642990934" watchObservedRunningTime="2025-03-25 01:29:03.554640511 +0000 UTC m=+105.369224967" Mar 25 01:29:03.705182 kubelet[2729]: E0325 01:29:03.704947 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:29:03.836252 systemd-networkd[1394]: lxc_health: Gained IPv6LL Mar 25 01:29:04.024365 containerd[1485]: time="2025-03-25T01:29:04.024310149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247\" id:\"d358675e64ab55cce5ed0086be27cccae0f20c0dede1cffb2d7fb50a3f88887d\" pid:5380 exited_at:{seconds:1742866144 nanos:23807289}" Mar 25 01:29:04.707173 kubelet[2729]: E0325 01:29:04.707121 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 25 01:29:06.356114 containerd[1485]: time="2025-03-25T01:29:06.356025992Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247\" id:\"3344fc5e5c87afe5d884088f11f2405b1266d167810c08af585719a3f11a26a3\" pid:5416 exited_at:{seconds:1742866146 nanos:355536167}" Mar 25 01:29:08.456602 containerd[1485]: time="2025-03-25T01:29:08.456540878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247\" id:\"3e7ee4f7f24806c0184b652ed0c3bda0e13823ab80db2356efa837c43e9a2ea3\" pid:5441 exited_at:{seconds:1742866148 nanos:456089656}" Mar 25 01:29:10.574349 containerd[1485]: time="2025-03-25T01:29:10.574279626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0c88fd5b5204c4fa5d5746e4c040ed36622d42f630345b79042fd5ec8e84247\" id:\"cd0baa97711ef3fd4c4bb589ecf1e4ef147f7a91c06c7a811577082fac68d797\" pid:5465 exited_at:{seconds:1742866150 nanos:573621377}" Mar 25 01:29:10.581749 sshd[4547]: Connection closed by 10.0.0.1 port 42310 Mar 25 01:29:10.582182 sshd-session[4544]: pam_unix(sshd:session): session closed for user core Mar 25 01:29:10.586666 systemd[1]: sshd@29-10.0.0.48:22-10.0.0.1:42310.service: Deactivated successfully. Mar 25 01:29:10.589204 systemd[1]: session-30.scope: Deactivated successfully. Mar 25 01:29:10.590102 systemd-logind[1470]: Session 30 logged out. Waiting for processes to exit. Mar 25 01:29:10.591222 systemd-logind[1470]: Removed session 30.