Mar 25 01:35:12.854580 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025
Mar 25 01:35:12.854600 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:35:12.854612 kernel: BIOS-provided physical RAM map:
Mar 25 01:35:12.854619 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 25 01:35:12.854625 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 25 01:35:12.854632 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 25 01:35:12.854640 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 25 01:35:12.854646 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 25 01:35:12.854653 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 25 01:35:12.854659 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 25 01:35:12.854668 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 25 01:35:12.854675 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 25 01:35:12.854681 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 25 01:35:12.854688 kernel: NX (Execute Disable) protection: active
Mar 25 01:35:12.854696 kernel: APIC: Static calls initialized
Mar 25 01:35:12.854705 kernel: SMBIOS 2.8 present.
Mar 25 01:35:12.854713 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 25 01:35:12.854720 kernel: Hypervisor detected: KVM
Mar 25 01:35:12.854727 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 25 01:35:12.854734 kernel: kvm-clock: using sched offset of 2314822029 cycles
Mar 25 01:35:12.854742 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 25 01:35:12.854749 kernel: tsc: Detected 2794.748 MHz processor
Mar 25 01:35:12.854757 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 25 01:35:12.854765 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 25 01:35:12.854772 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 25 01:35:12.854782 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 25 01:35:12.854789 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 25 01:35:12.854796 kernel: Using GB pages for direct mapping
Mar 25 01:35:12.854804 kernel: ACPI: Early table checksum verification disabled
Mar 25 01:35:12.854811 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 25 01:35:12.854819 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:35:12.854826 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:35:12.854833 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:35:12.854841 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 25 01:35:12.854850 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:35:12.854857 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:35:12.854865 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:35:12.854872 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 01:35:12.854879 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Mar 25 01:35:12.854887 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Mar 25 01:35:12.854897 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 25 01:35:12.854907 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Mar 25 01:35:12.854914 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Mar 25 01:35:12.854922 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Mar 25 01:35:12.854929 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Mar 25 01:35:12.854937 kernel: No NUMA configuration found
Mar 25 01:35:12.854944 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 25 01:35:12.854952 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 25 01:35:12.854962 kernel: Zone ranges:
Mar 25 01:35:12.854969 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 25 01:35:12.854977 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 25 01:35:12.854984 kernel: Normal empty
Mar 25 01:35:12.854992 kernel: Movable zone start for each node
Mar 25 01:35:12.854999 kernel: Early memory node ranges
Mar 25 01:35:12.855007 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 25 01:35:12.855014 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 25 01:35:12.855022 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 25 01:35:12.855029 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 25 01:35:12.855039 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 25 01:35:12.855047 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 25 01:35:12.855054 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 25 01:35:12.855062 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 25 01:35:12.855069 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 25 01:35:12.855077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 25 01:35:12.855084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 25 01:35:12.855092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 25 01:35:12.855099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 25 01:35:12.855109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 25 01:35:12.855116 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 25 01:35:12.855124 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 25 01:35:12.855131 kernel: TSC deadline timer available
Mar 25 01:35:12.855139 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 25 01:35:12.855146 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 25 01:35:12.855154 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 25 01:35:12.855161 kernel: kvm-guest: setup PV sched yield
Mar 25 01:35:12.855169 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 25 01:35:12.855178 kernel: Booting paravirtualized kernel on KVM
Mar 25 01:35:12.855186 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 25 01:35:12.855194 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 25 01:35:12.855202 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 25 01:35:12.855209 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 25 01:35:12.855216 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 25 01:35:12.855224 kernel: kvm-guest: PV spinlocks enabled
Mar 25 01:35:12.855231 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 25 01:35:12.855240 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:35:12.855250 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 25 01:35:12.855258 kernel: random: crng init done
Mar 25 01:35:12.855265 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 25 01:35:12.855273 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 25 01:35:12.855281 kernel: Fallback order for Node 0: 0
Mar 25 01:35:12.855288 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 25 01:35:12.855296 kernel: Policy zone: DMA32
Mar 25 01:35:12.855303 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 25 01:35:12.855314 kernel: Memory: 2430496K/2571752K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 140996K reserved, 0K cma-reserved)
Mar 25 01:35:12.855321 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 25 01:35:12.855329 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 25 01:35:12.855337 kernel: ftrace: allocated 149 pages with 4 groups
Mar 25 01:35:12.855344 kernel: Dynamic Preempt: voluntary
Mar 25 01:35:12.855352 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 25 01:35:12.855360 kernel: rcu: RCU event tracing is enabled.
Mar 25 01:35:12.855368 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 25 01:35:12.855375 kernel: Trampoline variant of Tasks RCU enabled.
Mar 25 01:35:12.855385 kernel: Rude variant of Tasks RCU enabled.
Mar 25 01:35:12.855393 kernel: Tracing variant of Tasks RCU enabled.
Mar 25 01:35:12.855400 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 25 01:35:12.855408 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 25 01:35:12.855415 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 25 01:35:12.855526 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 25 01:35:12.855539 kernel: Console: colour VGA+ 80x25
Mar 25 01:35:12.855547 kernel: printk: console [ttyS0] enabled
Mar 25 01:35:12.855554 kernel: ACPI: Core revision 20230628
Mar 25 01:35:12.855562 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 25 01:35:12.855573 kernel: APIC: Switch to symmetric I/O mode setup
Mar 25 01:35:12.855581 kernel: x2apic enabled
Mar 25 01:35:12.855589 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 25 01:35:12.855596 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 25 01:35:12.855604 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 25 01:35:12.855612 kernel: kvm-guest: setup PV IPIs
Mar 25 01:35:12.855626 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 25 01:35:12.855636 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 25 01:35:12.855644 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Mar 25 01:35:12.855652 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 25 01:35:12.855660 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 25 01:35:12.855670 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 25 01:35:12.855678 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 25 01:35:12.855686 kernel: Spectre V2 : Mitigation: Retpolines
Mar 25 01:35:12.855694 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 25 01:35:12.855702 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 25 01:35:12.855712 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 25 01:35:12.855720 kernel: RETBleed: Mitigation: untrained return thunk
Mar 25 01:35:12.855727 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 25 01:35:12.855735 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 25 01:35:12.855743 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 25 01:35:12.855752 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 25 01:35:12.855760 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 25 01:35:12.855768 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 25 01:35:12.855778 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 25 01:35:12.855786 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 25 01:35:12.855793 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 25 01:35:12.855801 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 25 01:35:12.855809 kernel: Freeing SMP alternatives memory: 32K
Mar 25 01:35:12.855817 kernel: pid_max: default: 32768 minimum: 301
Mar 25 01:35:12.855825 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 25 01:35:12.855832 kernel: landlock: Up and running.
Mar 25 01:35:12.855840 kernel: SELinux: Initializing.
Mar 25 01:35:12.855848 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 01:35:12.855859 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 01:35:12.855867 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 25 01:35:12.855875 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 25 01:35:12.855883 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 25 01:35:12.855891 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 25 01:35:12.855899 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 25 01:35:12.855906 kernel: ... version: 0
Mar 25 01:35:12.855914 kernel: ... bit width: 48
Mar 25 01:35:12.855924 kernel: ... generic registers: 6
Mar 25 01:35:12.855934 kernel: ... value mask: 0000ffffffffffff
Mar 25 01:35:12.855942 kernel: ... max period: 00007fffffffffff
Mar 25 01:35:12.855952 kernel: ... fixed-purpose events: 0
Mar 25 01:35:12.855960 kernel: ... event mask: 000000000000003f
Mar 25 01:35:12.855968 kernel: signal: max sigframe size: 1776
Mar 25 01:35:12.855975 kernel: rcu: Hierarchical SRCU implementation.
Mar 25 01:35:12.855983 kernel: rcu: Max phase no-delay instances is 400.
Mar 25 01:35:12.855991 kernel: smp: Bringing up secondary CPUs ...
Mar 25 01:35:12.856001 kernel: smpboot: x86: Booting SMP configuration:
Mar 25 01:35:12.856009 kernel: .... node #0, CPUs: #1 #2 #3
Mar 25 01:35:12.856017 kernel: smp: Brought up 1 node, 4 CPUs
Mar 25 01:35:12.856024 kernel: smpboot: Max logical packages: 1
Mar 25 01:35:12.856032 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Mar 25 01:35:12.856040 kernel: devtmpfs: initialized
Mar 25 01:35:12.856048 kernel: x86/mm: Memory block size: 128MB
Mar 25 01:35:12.856056 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 25 01:35:12.856064 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 25 01:35:12.856071 kernel: pinctrl core: initialized pinctrl subsystem
Mar 25 01:35:12.856081 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 25 01:35:12.856089 kernel: audit: initializing netlink subsys (disabled)
Mar 25 01:35:12.856097 kernel: audit: type=2000 audit(1742866512.439:1): state=initialized audit_enabled=0 res=1
Mar 25 01:35:12.856105 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 25 01:35:12.856113 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 25 01:35:12.856120 kernel: cpuidle: using governor menu
Mar 25 01:35:12.856128 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 25 01:35:12.856136 kernel: dca service started, version 1.12.1
Mar 25 01:35:12.856144 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 25 01:35:12.856154 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 25 01:35:12.856162 kernel: PCI: Using configuration type 1 for base access
Mar 25 01:35:12.856170 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 25 01:35:12.856178 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 25 01:35:12.856186 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 25 01:35:12.856194 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 25 01:35:12.856202 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 25 01:35:12.856210 kernel: ACPI: Added _OSI(Module Device)
Mar 25 01:35:12.856219 kernel: ACPI: Added _OSI(Processor Device)
Mar 25 01:35:12.856227 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 25 01:35:12.856235 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 25 01:35:12.856243 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 25 01:35:12.856251 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 25 01:35:12.856258 kernel: ACPI: Interpreter enabled
Mar 25 01:35:12.856266 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 25 01:35:12.856274 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 25 01:35:12.856282 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 25 01:35:12.856290 kernel: PCI: Using E820 reservations for host bridge windows
Mar 25 01:35:12.856300 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 25 01:35:12.856308 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 25 01:35:12.856491 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 25 01:35:12.856632 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 25 01:35:12.856757 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 25 01:35:12.856767 kernel: PCI host bridge to bus 0000:00
Mar 25 01:35:12.856895 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 25 01:35:12.857017 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 25 01:35:12.857130 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 25 01:35:12.857243 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 25 01:35:12.857355 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 25 01:35:12.857493 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 25 01:35:12.857614 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 25 01:35:12.857856 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 25 01:35:12.857997 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 25 01:35:12.858122 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 25 01:35:12.858248 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 25 01:35:12.858371 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 25 01:35:12.858512 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 25 01:35:12.858661 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 25 01:35:12.858792 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 25 01:35:12.858925 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 25 01:35:12.859053 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 25 01:35:12.859186 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 25 01:35:12.859311 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 25 01:35:12.859451 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 25 01:35:12.859647 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 25 01:35:12.859801 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 25 01:35:12.859925 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 25 01:35:12.860048 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 25 01:35:12.860171 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 25 01:35:12.860295 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 25 01:35:12.860445 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 25 01:35:12.860583 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 25 01:35:12.860722 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 25 01:35:12.860846 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 25 01:35:12.860974 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 25 01:35:12.861219 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 25 01:35:12.861398 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 25 01:35:12.861410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 25 01:35:12.861440 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 25 01:35:12.861449 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 25 01:35:12.861457 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 25 01:35:12.861465 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 25 01:35:12.861473 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 25 01:35:12.861482 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 25 01:35:12.861490 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 25 01:35:12.861498 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 25 01:35:12.861507 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 25 01:35:12.861518 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 25 01:35:12.861527 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 25 01:35:12.861542 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 25 01:35:12.861550 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 25 01:35:12.861559 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 25 01:35:12.861567 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 25 01:35:12.861575 kernel: iommu: Default domain type: Translated
Mar 25 01:35:12.861583 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 25 01:35:12.861591 kernel: PCI: Using ACPI for IRQ routing
Mar 25 01:35:12.861602 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 25 01:35:12.861611 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 25 01:35:12.861620 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 25 01:35:12.861751 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 25 01:35:12.861878 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 25 01:35:12.862007 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 25 01:35:12.862018 kernel: vgaarb: loaded
Mar 25 01:35:12.862026 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 25 01:35:12.862035 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 25 01:35:12.862047 kernel: clocksource: Switched to clocksource kvm-clock
Mar 25 01:35:12.862055 kernel: VFS: Disk quotas dquot_6.6.0
Mar 25 01:35:12.862064 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 25 01:35:12.862073 kernel: pnp: PnP ACPI init
Mar 25 01:35:12.862212 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 25 01:35:12.862225 kernel: pnp: PnP ACPI: found 6 devices
Mar 25 01:35:12.862234 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 25 01:35:12.862242 kernel: NET: Registered PF_INET protocol family
Mar 25 01:35:12.862254 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 25 01:35:12.862262 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 25 01:35:12.862271 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 25 01:35:12.862279 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 25 01:35:12.862287 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 25 01:35:12.862296 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 25 01:35:12.862304 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:35:12.862312 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 01:35:12.862323 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 25 01:35:12.862331 kernel: NET: Registered PF_XDP protocol family
Mar 25 01:35:12.863576 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 25 01:35:12.863697 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 25 01:35:12.863809 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 25 01:35:12.863921 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 25 01:35:12.864033 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 25 01:35:12.864145 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 25 01:35:12.864156 kernel: PCI: CLS 0 bytes, default 64
Mar 25 01:35:12.864169 kernel: Initialise system trusted keyrings
Mar 25 01:35:12.864178 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 25 01:35:12.864186 kernel: Key type asymmetric registered
Mar 25 01:35:12.864195 kernel: Asymmetric key parser 'x509' registered
Mar 25 01:35:12.864203 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 25 01:35:12.864211 kernel: io scheduler mq-deadline registered
Mar 25 01:35:12.864219 kernel: io scheduler kyber registered
Mar 25 01:35:12.864228 kernel: io scheduler bfq registered
Mar 25 01:35:12.864236 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 25 01:35:12.864248 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 25 01:35:12.864256 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 25 01:35:12.864264 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 25 01:35:12.864273 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 25 01:35:12.864281 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 25 01:35:12.864289 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 25 01:35:12.864298 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 25 01:35:12.864306 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 25 01:35:12.864447 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 25 01:35:12.864464 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 25 01:35:12.864599 kernel: rtc_cmos 00:04: registered as rtc0
Mar 25 01:35:12.864786 kernel: rtc_cmos 00:04: setting system clock to 2025-03-25T01:35:12 UTC (1742866512)
Mar 25 01:35:12.864905 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 25 01:35:12.864916 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 25 01:35:12.864924 kernel: NET: Registered PF_INET6 protocol family
Mar 25 01:35:12.864933 kernel: Segment Routing with IPv6
Mar 25 01:35:12.864941 kernel: In-situ OAM (IOAM) with IPv6
Mar 25 01:35:12.864954 kernel: NET: Registered PF_PACKET protocol family
Mar 25 01:35:12.864962 kernel: Key type dns_resolver registered
Mar 25 01:35:12.864971 kernel: IPI shorthand broadcast: enabled
Mar 25 01:35:12.864979 kernel: sched_clock: Marking stable (540002416, 104351468)->(686023459, -41669575)
Mar 25 01:35:12.864987 kernel: registered taskstats version 1
Mar 25 01:35:12.864995 kernel: Loading compiled-in X.509 certificates
Mar 25 01:35:12.865003 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386'
Mar 25 01:35:12.865012 kernel: Key type .fscrypt registered
Mar 25 01:35:12.865020 kernel: Key type fscrypt-provisioning registered
Mar 25 01:35:12.865031 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 25 01:35:12.865039 kernel: ima: Allocated hash algorithm: sha1
Mar 25 01:35:12.865047 kernel: ima: No architecture policies found
Mar 25 01:35:12.865055 kernel: clk: Disabling unused clocks
Mar 25 01:35:12.865064 kernel: Freeing unused kernel image (initmem) memory: 43592K
Mar 25 01:35:12.865072 kernel: Write protecting the kernel read-only data: 40960k
Mar 25 01:35:12.865080 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 25 01:35:12.865088 kernel: Run /init as init process
Mar 25 01:35:12.865096 kernel: with arguments:
Mar 25 01:35:12.865106 kernel: /init
Mar 25 01:35:12.865115 kernel: with environment:
Mar 25 01:35:12.865123 kernel: HOME=/
Mar 25 01:35:12.865131 kernel: TERM=linux
Mar 25 01:35:12.865139 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 25 01:35:12.865148 systemd[1]: Successfully made /usr/ read-only.
Mar 25 01:35:12.865161 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:35:12.865174 systemd[1]: Detected virtualization kvm.
Mar 25 01:35:12.865183 systemd[1]: Detected architecture x86-64.
Mar 25 01:35:12.865191 systemd[1]: Running in initrd.
Mar 25 01:35:12.865200 systemd[1]: No hostname configured, using default hostname.
Mar 25 01:35:12.865209 systemd[1]: Hostname set to .
Mar 25 01:35:12.865217 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:35:12.865226 systemd[1]: Queued start job for default target initrd.target.
Mar 25 01:35:12.865235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:35:12.865244 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:35:12.865270 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 01:35:12.865282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:35:12.865291 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 01:35:12.865300 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 01:35:12.865313 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 01:35:12.865322 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 01:35:12.865331 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:35:12.865340 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:35:12.865349 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:35:12.865358 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:35:12.865367 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:35:12.865375 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:35:12.865385 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:35:12.865396 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:35:12.865405 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 01:35:12.865415 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 01:35:12.865439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:35:12.865448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:35:12.865457 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:35:12.865466 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:35:12.865475 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 01:35:12.865487 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:35:12.865495 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 01:35:12.865504 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 01:35:12.865513 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:35:12.865522 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:35:12.865531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:35:12.865547 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 01:35:12.865555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:35:12.865568 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 01:35:12.865609 systemd-journald[191]: Collecting audit messages is disabled.
Mar 25 01:35:12.865660 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:35:12.865670 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:35:12.865680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:35:12.865689 systemd-journald[191]: Journal started
Mar 25 01:35:12.865715 systemd-journald[191]: Runtime Journal (/run/log/journal/2f9a55f066d04350bfb3bb769487b8c0) is 6M, max 48.3M, 42.3M free.
Mar 25 01:35:12.863497 systemd-modules-load[193]: Inserted module 'overlay'
Mar 25 01:35:12.901880 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:35:12.901898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 01:35:12.901911 kernel: Bridge firewalling registered
Mar 25 01:35:12.889864 systemd-modules-load[193]: Inserted module 'br_netfilter'
Mar 25 01:35:12.903283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:35:12.905688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:35:12.908044 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:35:12.914291 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:35:12.917476 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:35:12.918665 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:35:12.934650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:35:12.935096 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:35:12.937287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:35:12.951615 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:35:12.955064 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 01:35:12.978175 dracut-cmdline[232]: dracut-dracut-053
Mar 25 01:35:12.981095 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:35:12.986001 systemd-resolved[224]: Positive Trust Anchors:
Mar 25 01:35:12.986017 systemd-resolved[224]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 01:35:12.986048 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 01:35:12.988493 systemd-resolved[224]: Defaulting to hostname 'linux'. Mar 25 01:35:12.989625 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 01:35:12.995603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:35:13.066458 kernel: SCSI subsystem initialized Mar 25 01:35:13.075445 kernel: Loading iSCSI transport class v2.0-870. Mar 25 01:35:13.085453 kernel: iscsi: registered transport (tcp) Mar 25 01:35:13.105449 kernel: iscsi: registered transport (qla4xxx) Mar 25 01:35:13.105481 kernel: QLogic iSCSI HBA Driver Mar 25 01:35:13.152970 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 25 01:35:13.156498 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 25 01:35:13.193564 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 25 01:35:13.193598 kernel: device-mapper: uevent: version 1.0.3 Mar 25 01:35:13.194600 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 25 01:35:13.235439 kernel: raid6: avx2x4 gen() 30811 MB/s Mar 25 01:35:13.252444 kernel: raid6: avx2x2 gen() 31486 MB/s Mar 25 01:35:13.269516 kernel: raid6: avx2x1 gen() 26062 MB/s Mar 25 01:35:13.269538 kernel: raid6: using algorithm avx2x2 gen() 31486 MB/s Mar 25 01:35:13.287536 kernel: raid6: .... xor() 19998 MB/s, rmw enabled Mar 25 01:35:13.287560 kernel: raid6: using avx2x2 recovery algorithm Mar 25 01:35:13.307443 kernel: xor: automatically using best checksumming function avx Mar 25 01:35:13.452453 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 25 01:35:13.465855 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 25 01:35:13.468558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:35:13.501379 systemd-udevd[414]: Using default interface naming scheme 'v255'. Mar 25 01:35:13.507620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:35:13.510771 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 25 01:35:13.541751 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Mar 25 01:35:13.573621 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 01:35:13.577196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 01:35:13.651826 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:35:13.655553 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 25 01:35:13.677442 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 25 01:35:13.694575 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 25 01:35:13.699507 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Mar 25 01:35:13.699530 kernel: GPT:9289727 != 19775487 Mar 25 01:35:13.699541 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 25 01:35:13.699552 kernel: GPT:9289727 != 19775487 Mar 25 01:35:13.699562 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 25 01:35:13.699573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 01:35:13.699586 kernel: cryptd: max_cpu_qlen set to 1000 Mar 25 01:35:13.679988 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 25 01:35:13.683487 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 01:35:13.688314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:35:13.707485 kernel: AVX2 version of gcm_enc/dec engaged. Mar 25 01:35:13.707506 kernel: AES CTR mode by8 optimization enabled Mar 25 01:35:13.689521 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 01:35:13.691797 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 25 01:35:13.714437 kernel: libata version 3.00 loaded. Mar 25 01:35:13.720430 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Mar 25 01:35:13.724217 kernel: ahci 0000:00:1f.2: version 3.0 Mar 25 01:35:13.745432 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 25 01:35:13.745449 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 25 01:35:13.745622 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 25 01:35:13.745766 kernel: scsi host0: ahci Mar 25 01:35:13.745917 kernel: scsi host1: ahci Mar 25 01:35:13.746063 kernel: scsi host2: ahci Mar 25 01:35:13.746220 kernel: scsi host3: ahci Mar 25 01:35:13.746364 kernel: scsi host4: ahci Mar 25 01:35:13.746542 kernel: scsi host5: ahci Mar 25 01:35:13.746702 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 25 01:35:13.746715 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 25 01:35:13.746726 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 25 01:35:13.746736 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 25 01:35:13.746752 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 25 01:35:13.746762 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 25 01:35:13.731255 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 01:35:13.731368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:35:13.733796 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:35:13.756173 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (459) Mar 25 01:35:13.735510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:35:13.735721 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:35:13.744774 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 25 01:35:13.750319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:35:13.763031 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (456) Mar 25 01:35:13.788854 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 25 01:35:13.812004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:35:13.821035 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 25 01:35:13.829689 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 25 01:35:13.836686 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 25 01:35:13.836947 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 25 01:35:13.840874 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 25 01:35:13.844694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:35:13.862895 disk-uuid[566]: Primary Header is updated. Mar 25 01:35:13.862895 disk-uuid[566]: Secondary Entries is updated. Mar 25 01:35:13.862895 disk-uuid[566]: Secondary Header is updated. Mar 25 01:35:13.865440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 01:35:13.869450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 01:35:13.871278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 25 01:35:14.051452 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 25 01:35:14.051499 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 25 01:35:14.059449 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 25 01:35:14.059468 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 25 01:35:14.059486 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 25 01:35:14.060449 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 25 01:35:14.061644 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 25 01:35:14.061659 kernel: ata3.00: applying bridge limits Mar 25 01:35:14.062697 kernel: ata3.00: configured for UDMA/100 Mar 25 01:35:14.063451 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 25 01:35:14.106949 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 25 01:35:14.119288 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 25 01:35:14.119311 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 25 01:35:14.870446 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 01:35:14.870971 disk-uuid[571]: The operation has completed successfully. Mar 25 01:35:14.908037 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 25 01:35:14.908161 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 25 01:35:14.934081 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 25 01:35:14.948574 sh[590]: Success Mar 25 01:35:14.960445 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 25 01:35:14.992899 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 25 01:35:14.994971 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 25 01:35:15.011510 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 25 01:35:15.018864 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2 Mar 25 01:35:15.018889 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:35:15.018900 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 25 01:35:15.021043 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 25 01:35:15.021057 kernel: BTRFS info (device dm-0): using free space tree Mar 25 01:35:15.026092 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 25 01:35:15.028358 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 25 01:35:15.031056 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 25 01:35:15.033651 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 25 01:35:15.062021 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:35:15.062047 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:35:15.062059 kernel: BTRFS info (device vda6): using free space tree Mar 25 01:35:15.064445 kernel: BTRFS info (device vda6): auto enabling async discard Mar 25 01:35:15.069439 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:35:15.074610 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 25 01:35:15.077993 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 25 01:35:15.137216 ignition[685]: Ignition 2.20.0 Mar 25 01:35:15.137230 ignition[685]: Stage: fetch-offline Mar 25 01:35:15.137264 ignition[685]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:35:15.137274 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 25 01:35:15.137388 ignition[685]: parsed url from cmdline: "" Mar 25 01:35:15.137393 ignition[685]: no config URL provided Mar 25 01:35:15.137398 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Mar 25 01:35:15.137408 ignition[685]: no config at "/usr/lib/ignition/user.ign" Mar 25 01:35:15.137450 ignition[685]: op(1): [started] loading QEMU firmware config module Mar 25 01:35:15.137456 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 25 01:35:15.145433 ignition[685]: op(1): [finished] loading QEMU firmware config module Mar 25 01:35:15.145456 ignition[685]: QEMU firmware config was not found. Ignoring... Mar 25 01:35:15.151011 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 01:35:15.156037 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:35:15.189363 ignition[685]: parsing config with SHA512: 2dea406fe7fea0f9d95581e22a274c86c765ff087a34563a71b12509b62995934c0b8219b1539a3af96c7bb0ea94a19a8f9240ee4ea10d442975a735365023de Mar 25 01:35:15.194413 unknown[685]: fetched base config from "system" Mar 25 01:35:15.194572 unknown[685]: fetched user config from "qemu" Mar 25 01:35:15.195667 ignition[685]: fetch-offline: fetch-offline passed Mar 25 01:35:15.195759 ignition[685]: Ignition finished successfully Mar 25 01:35:15.198508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 25 01:35:15.204632 systemd-networkd[776]: lo: Link UP Mar 25 01:35:15.204643 systemd-networkd[776]: lo: Gained carrier Mar 25 01:35:15.207594 systemd-networkd[776]: Enumeration completed Mar 25 01:35:15.207672 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 01:35:15.209415 systemd[1]: Reached target network.target - Network. Mar 25 01:35:15.209920 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 25 01:35:15.210501 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:35:15.210506 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:35:15.213074 systemd-networkd[776]: eth0: Link UP Mar 25 01:35:15.213078 systemd-networkd[776]: eth0: Gained carrier Mar 25 01:35:15.213084 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:35:15.214266 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 25 01:35:15.235497 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 25 01:35:15.238895 ignition[780]: Ignition 2.20.0 Mar 25 01:35:15.238906 ignition[780]: Stage: kargs Mar 25 01:35:15.239053 ignition[780]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:35:15.239064 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 25 01:35:15.239972 ignition[780]: kargs: kargs passed Mar 25 01:35:15.240017 ignition[780]: Ignition finished successfully Mar 25 01:35:15.244496 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 25 01:35:15.247464 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 25 01:35:15.274551 ignition[790]: Ignition 2.20.0 Mar 25 01:35:15.274561 ignition[790]: Stage: disks Mar 25 01:35:15.274715 ignition[790]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:35:15.274730 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 25 01:35:15.275627 ignition[790]: disks: disks passed Mar 25 01:35:15.275671 ignition[790]: Ignition finished successfully Mar 25 01:35:15.281472 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 25 01:35:15.283651 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 25 01:35:15.283938 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 25 01:35:15.286039 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 01:35:15.288704 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:35:15.289027 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:35:15.293644 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 25 01:35:15.321851 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 25 01:35:15.327777 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 25 01:35:15.328889 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 25 01:35:15.421358 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 25 01:35:15.422926 kernel: EXT4-fs (vda9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none. Mar 25 01:35:15.422246 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 25 01:35:15.425192 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 01:35:15.427516 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 25 01:35:15.428005 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Mar 25 01:35:15.428042 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 25 01:35:15.428065 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 01:35:15.442576 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 25 01:35:15.444137 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 25 01:35:15.449439 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (809) Mar 25 01:35:15.452042 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:35:15.452062 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:35:15.452073 kernel: BTRFS info (device vda6): using free space tree Mar 25 01:35:15.455433 kernel: BTRFS info (device vda6): auto enabling async discard Mar 25 01:35:15.457134 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 25 01:35:15.479610 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Mar 25 01:35:15.484811 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Mar 25 01:35:15.489464 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Mar 25 01:35:15.494151 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Mar 25 01:35:15.577317 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 25 01:35:15.579555 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 25 01:35:15.580646 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 25 01:35:15.602449 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:35:15.623633 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 25 01:35:15.634949 ignition[923]: INFO : Ignition 2.20.0 Mar 25 01:35:15.634949 ignition[923]: INFO : Stage: mount Mar 25 01:35:15.636581 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:35:15.636581 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 25 01:35:15.636581 ignition[923]: INFO : mount: mount passed Mar 25 01:35:15.636581 ignition[923]: INFO : Ignition finished successfully Mar 25 01:35:15.638085 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 25 01:35:15.640633 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 25 01:35:16.017917 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 25 01:35:16.019557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 01:35:16.042443 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (936) Mar 25 01:35:16.042480 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:35:16.043701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:35:16.043721 kernel: BTRFS info (device vda6): using free space tree Mar 25 01:35:16.046451 kernel: BTRFS info (device vda6): auto enabling async discard Mar 25 01:35:16.047981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 25 01:35:16.076585 ignition[953]: INFO : Ignition 2.20.0 Mar 25 01:35:16.076585 ignition[953]: INFO : Stage: files Mar 25 01:35:16.078183 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:35:16.078183 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 25 01:35:16.080962 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Mar 25 01:35:16.082197 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 25 01:35:16.082197 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 25 01:35:16.086820 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 25 01:35:16.088267 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 25 01:35:16.089958 unknown[953]: wrote ssh authorized keys file for user: core Mar 25 01:35:16.091032 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 25 01:35:16.093472 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 25 01:35:16.095325 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Mar 25 01:35:16.145737 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 25 01:35:16.280833 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 25 01:35:16.282801 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 25 01:35:16.282801 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 25 01:35:16.541867 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 25 01:35:16.671506 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 25 01:35:16.673385 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 25 01:35:16.675129 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 25 01:35:16.676798 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 25 01:35:16.678546 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 25 01:35:16.680203 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 01:35:16.681944 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 01:35:16.683629 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 01:35:16.685349 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 01:35:16.687232 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 01:35:16.689103 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 01:35:16.690876 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 25 01:35:16.693415 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 25 01:35:16.696140 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 25 01:35:16.696140 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Mar 25 01:35:17.087782 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 25 01:35:17.156568 systemd-networkd[776]: eth0: Gained IPv6LL Mar 25 01:35:17.499184 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 25 01:35:17.499184 ignition[953]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 25 01:35:17.503082 ignition[953]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 01:35:17.503082 ignition[953]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 01:35:17.503082 ignition[953]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 25 01:35:17.503082 ignition[953]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 25 01:35:17.503082 ignition[953]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 25 01:35:17.503082 ignition[953]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 25 01:35:17.503082 ignition[953]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 25 01:35:17.503082 ignition[953]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Mar 25 01:35:17.530085 ignition[953]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 25 01:35:17.534541 ignition[953]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 25 01:35:17.536351 ignition[953]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Mar 25 01:35:17.536351 ignition[953]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 25 01:35:17.536351 ignition[953]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 25 01:35:17.536351 ignition[953]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 25 01:35:17.536351 ignition[953]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 25 01:35:17.536351 ignition[953]: INFO : files: files passed Mar 25 01:35:17.536351 ignition[953]: INFO : Ignition finished successfully Mar 25 01:35:17.537828 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 25 01:35:17.540319 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 25 01:35:17.542592 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 25 01:35:17.556092 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 25 01:35:17.556198 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 25 01:35:17.559959 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 25 01:35:17.561483 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:35:17.561483 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:35:17.566027 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:35:17.563213 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:35:17.566629 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 25 01:35:17.569545 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 25 01:35:17.608239 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 25 01:35:17.608357 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 25 01:35:17.610803 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 25 01:35:17.613006 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 25 01:35:17.613510 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 25 01:35:17.614218 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 25 01:35:17.641770 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:35:17.645587 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 25 01:35:17.667480 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:35:17.667817 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:35:17.670182 systemd[1]: Stopped target timers.target - Timer Units.
Mar 25 01:35:17.670519 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 25 01:35:17.670622 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:35:17.671313 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 25 01:35:17.671822 systemd[1]: Stopped target basic.target - Basic System.
Mar 25 01:35:17.672145 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 25 01:35:17.672493 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:35:17.682803 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 25 01:35:17.683362 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 25 01:35:17.683874 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:35:17.684214 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 25 01:35:17.684721 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 25 01:35:17.685043 systemd[1]: Stopped target swap.target - Swaps.
Mar 25 01:35:17.685345 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 25 01:35:17.685485 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:35:17.686208 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:35:17.686739 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:35:17.687031 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 25 01:35:17.687134 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:35:17.687377 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 25 01:35:17.687503 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:35:17.688235 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 25 01:35:17.688341 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:35:17.688833 systemd[1]: Stopped target paths.target - Path Units.
Mar 25 01:35:17.689087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 25 01:35:17.714543 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:35:17.717550 systemd[1]: Stopped target slices.target - Slice Units.
Mar 25 01:35:17.719500 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 25 01:35:17.719879 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 25 01:35:17.719992 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:35:17.721818 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 25 01:35:17.721902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:35:17.723377 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 25 01:35:17.723541 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:35:17.725361 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 25 01:35:17.725504 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 25 01:35:17.731182 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 25 01:35:17.731786 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 25 01:35:17.731896 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:35:17.732904 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 25 01:35:17.736037 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 25 01:35:17.736153 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:35:17.736476 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 25 01:35:17.736583 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:35:17.747709 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 25 01:35:17.747828 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 25 01:35:17.764827 ignition[1008]: INFO : Ignition 2.20.0
Mar 25 01:35:17.764827 ignition[1008]: INFO : Stage: umount
Mar 25 01:35:17.766643 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:35:17.766643 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 25 01:35:17.766643 ignition[1008]: INFO : umount: umount passed
Mar 25 01:35:17.766643 ignition[1008]: INFO : Ignition finished successfully
Mar 25 01:35:17.769053 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 25 01:35:17.773722 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 25 01:35:17.773873 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 25 01:35:17.775883 systemd[1]: Stopped target network.target - Network.
Mar 25 01:35:17.776092 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 25 01:35:17.776142 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 25 01:35:17.776468 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 25 01:35:17.776513 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 25 01:35:17.776971 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 25 01:35:17.777015 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 25 01:35:17.777304 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 25 01:35:17.777344 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 25 01:35:17.777911 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 25 01:35:17.778214 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 25 01:35:17.790604 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 25 01:35:17.790758 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 25 01:35:17.794734 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 25 01:35:17.795084 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 25 01:35:17.795133 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:35:17.798530 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:35:17.798770 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 25 01:35:17.798888 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 25 01:35:17.803009 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 25 01:35:17.803649 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 25 01:35:17.803715 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:35:17.805084 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 25 01:35:17.808696 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 25 01:35:17.808757 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:35:17.809111 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:35:17.809156 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:35:17.815654 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 25 01:35:17.815718 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:35:17.817229 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:35:17.818751 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:35:17.830969 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 25 01:35:17.831148 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:35:17.833857 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 25 01:35:17.833927 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:35:17.835653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 25 01:35:17.835693 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:35:17.837714 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 25 01:35:17.837762 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:35:17.840018 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 25 01:35:17.840065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:35:17.842001 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:35:17.842052 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:35:17.845161 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 25 01:35:17.846266 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 25 01:35:17.846317 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:35:17.848555 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:35:17.848604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:35:17.853731 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 25 01:35:17.853844 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 25 01:35:17.860557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 25 01:35:17.860665 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 25 01:35:17.970231 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 25 01:35:17.970371 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 25 01:35:17.972473 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 25 01:35:17.974192 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 25 01:35:17.974248 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 25 01:35:17.977088 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 25 01:35:17.996257 systemd[1]: Switching root.
Mar 25 01:35:18.027845 systemd-journald[191]: Journal stopped
Mar 25 01:35:19.271225 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Mar 25 01:35:19.271297 kernel: SELinux: policy capability network_peer_controls=1
Mar 25 01:35:19.271315 kernel: SELinux: policy capability open_perms=1
Mar 25 01:35:19.271326 kernel: SELinux: policy capability extended_socket_class=1
Mar 25 01:35:19.271338 kernel: SELinux: policy capability always_check_network=0
Mar 25 01:35:19.271349 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 25 01:35:19.271361 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 25 01:35:19.271372 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 25 01:35:19.271390 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 25 01:35:19.271411 kernel: audit: type=1403 audit(1742866518.459:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 25 01:35:19.271434 systemd[1]: Successfully loaded SELinux policy in 39.676ms.
Mar 25 01:35:19.271460 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.958ms.
Mar 25 01:35:19.271473 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:35:19.271486 systemd[1]: Detected virtualization kvm.
Mar 25 01:35:19.271499 systemd[1]: Detected architecture x86-64.
Mar 25 01:35:19.271513 systemd[1]: Detected first boot.
Mar 25 01:35:19.271525 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 01:35:19.271537 zram_generator::config[1058]: No configuration found.
Mar 25 01:35:19.271553 kernel: Guest personality initialized and is inactive
Mar 25 01:35:19.271565 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 25 01:35:19.271577 kernel: Initialized host personality
Mar 25 01:35:19.271588 kernel: NET: Registered PF_VSOCK protocol family
Mar 25 01:35:19.271599 systemd[1]: Populated /etc with preset unit settings.
Mar 25 01:35:19.271620 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 25 01:35:19.271645 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 25 01:35:19.271667 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 25 01:35:19.271700 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:35:19.271725 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 25 01:35:19.271752 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 25 01:35:19.271776 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 25 01:35:19.271800 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 25 01:35:19.271825 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 25 01:35:19.271847 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 25 01:35:19.271875 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 25 01:35:19.271899 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 25 01:35:19.271930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:35:19.271957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:35:19.271978 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 25 01:35:19.272003 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 25 01:35:19.272017 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 25 01:35:19.272032 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:35:19.272044 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 25 01:35:19.272059 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:35:19.272072 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 25 01:35:19.272084 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 25 01:35:19.272097 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:35:19.272109 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 25 01:35:19.272121 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:35:19.272133 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:35:19.272145 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:35:19.272157 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:35:19.272176 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 25 01:35:19.272190 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 25 01:35:19.272203 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 25 01:35:19.272215 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:35:19.272227 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:35:19.272239 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:35:19.272254 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 25 01:35:19.272266 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 25 01:35:19.272278 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 25 01:35:19.272290 systemd[1]: Mounting media.mount - External Media Directory...
Mar 25 01:35:19.272311 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:35:19.272324 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 25 01:35:19.272336 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 25 01:35:19.272348 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 25 01:35:19.272360 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 25 01:35:19.272373 systemd[1]: Reached target machines.target - Containers.
Mar 25 01:35:19.272393 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 25 01:35:19.272406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:35:19.272431 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:35:19.272444 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 25 01:35:19.272456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:35:19.272469 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:35:19.272481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:35:19.272493 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 25 01:35:19.272505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:35:19.272518 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 25 01:35:19.272533 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 25 01:35:19.272547 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 25 01:35:19.272559 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 25 01:35:19.272572 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 25 01:35:19.272585 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:35:19.272598 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:35:19.272610 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:35:19.272623 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 25 01:35:19.272634 kernel: loop: module loaded
Mar 25 01:35:19.272649 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 25 01:35:19.272661 kernel: fuse: init (API version 7.39)
Mar 25 01:35:19.272673 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 25 01:35:19.272685 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:35:19.272697 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 25 01:35:19.272712 systemd[1]: Stopped verity-setup.service.
Mar 25 01:35:19.272725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:35:19.272737 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 25 01:35:19.272749 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 25 01:35:19.272762 systemd[1]: Mounted media.mount - External Media Directory.
Mar 25 01:35:19.272774 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 25 01:35:19.272786 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 25 01:35:19.272826 systemd-journald[1136]: Collecting audit messages is disabled.
Mar 25 01:35:19.272853 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 25 01:35:19.272865 kernel: ACPI: bus type drm_connector registered
Mar 25 01:35:19.272877 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 25 01:35:19.272890 systemd-journald[1136]: Journal started
Mar 25 01:35:19.272912 systemd-journald[1136]: Runtime Journal (/run/log/journal/2f9a55f066d04350bfb3bb769487b8c0) is 6M, max 48.3M, 42.3M free.
Mar 25 01:35:19.032139 systemd[1]: Queued start job for default target multi-user.target.
Mar 25 01:35:19.044465 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 25 01:35:19.044972 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 25 01:35:19.277288 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:35:19.278130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:35:19.279688 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 25 01:35:19.279921 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 25 01:35:19.281512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:35:19.281748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:35:19.283197 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:35:19.283434 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:35:19.284813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:35:19.285025 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:35:19.286540 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 25 01:35:19.286753 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 25 01:35:19.288119 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:35:19.288331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:35:19.289781 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:35:19.291202 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 25 01:35:19.292785 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 25 01:35:19.294356 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 25 01:35:19.309727 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 25 01:35:19.312316 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 25 01:35:19.314550 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 25 01:35:19.315683 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 25 01:35:19.315710 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:35:19.317035 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 25 01:35:19.323526 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 25 01:35:19.325718 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 25 01:35:19.326867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:35:19.330174 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 25 01:35:19.333063 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 25 01:35:19.335713 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:35:19.337365 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 25 01:35:19.338712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:35:19.341037 systemd-journald[1136]: Time spent on flushing to /var/log/journal/2f9a55f066d04350bfb3bb769487b8c0 is 13.616ms for 964 entries.
Mar 25 01:35:19.341037 systemd-journald[1136]: System Journal (/var/log/journal/2f9a55f066d04350bfb3bb769487b8c0) is 8M, max 195.6M, 187.6M free.
Mar 25 01:35:19.375899 systemd-journald[1136]: Received client request to flush runtime journal.
Mar 25 01:35:19.375968 kernel: loop0: detected capacity change from 0 to 109808
Mar 25 01:35:19.342517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:35:19.347642 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 25 01:35:19.350618 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 25 01:35:19.355619 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 25 01:35:19.357069 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 25 01:35:19.358675 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 25 01:35:19.360917 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 25 01:35:19.371212 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 25 01:35:19.380667 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 25 01:35:19.382576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:35:19.384335 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 25 01:35:19.395806 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 25 01:35:19.397154 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 25 01:35:19.398056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:35:19.408184 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 25 01:35:19.414243 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:35:19.416010 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 25 01:35:19.436452 kernel: loop1: detected capacity change from 0 to 218376
Mar 25 01:35:19.444537 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Mar 25 01:35:19.444550 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Mar 25 01:35:19.445129 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 25 01:35:19.451297 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:35:19.470446 kernel: loop2: detected capacity change from 0 to 151640
Mar 25 01:35:19.506463 kernel: loop3: detected capacity change from 0 to 109808
Mar 25 01:35:19.516466 kernel: loop4: detected capacity change from 0 to 218376
Mar 25 01:35:19.525691 kernel: loop5: detected capacity change from 0 to 151640
Mar 25 01:35:19.537756 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 25 01:35:19.538369 (sd-merge)[1202]: Merged extensions into '/usr'.
Mar 25 01:35:19.543580 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 25 01:35:19.543598 systemd[1]: Reloading...
Mar 25 01:35:19.610449 zram_generator::config[1233]: No configuration found.
Mar 25 01:35:19.667911 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 25 01:35:19.728096 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:35:19.791915 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 25 01:35:19.792177 systemd[1]: Reloading finished in 248 ms.
Mar 25 01:35:19.812051 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 25 01:35:19.813731 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 25 01:35:19.826014 systemd[1]: Starting ensure-sysext.service...
Mar 25 01:35:19.827966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:35:19.837876 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
Mar 25 01:35:19.837891 systemd[1]: Reloading...
Mar 25 01:35:19.862256 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 01:35:19.862575 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 25 01:35:19.863525 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 25 01:35:19.863791 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Mar 25 01:35:19.863866 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Mar 25 01:35:19.867849 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:35:19.867971 systemd-tmpfiles[1269]: Skipping /boot
Mar 25 01:35:19.884498 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:35:19.885898 systemd-tmpfiles[1269]: Skipping /boot
Mar 25 01:35:19.911452 zram_generator::config[1298]: No configuration found.
Mar 25 01:35:20.018095 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:35:20.083646 systemd[1]: Reloading finished in 245 ms.
Mar 25 01:35:20.098607 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 25 01:35:20.115884 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:35:20.125855 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:35:20.128512 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 25 01:35:20.138478 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 25 01:35:20.142540 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:35:20.146156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:35:20.151780 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 25 01:35:20.155847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:35:20.156031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:35:20.159450 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:35:20.162663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:35:20.165721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:35:20.166953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:35:20.167060 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:35:20.169604 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 25 01:35:20.170841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:35:20.172319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:35:20.172802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:35:20.175179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:35:20.175456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:35:20.177199 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:35:20.177590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:35:20.183917 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 25 01:35:20.196623 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:35:20.196847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:35:20.197922 augenrules[1370]: No rules
Mar 25 01:35:20.197983 systemd-udevd[1347]: Using default interface naming scheme 'v255'.
Mar 25 01:35:20.198996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:35:20.202258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:35:20.212409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:35:20.213661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:35:20.213809 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:35:20.216276 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 25 01:35:20.217521 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:35:20.218901 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 25 01:35:20.221126 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:35:20.221449 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:35:20.223163 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 25 01:35:20.226041 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 25 01:35:20.228314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:35:20.228594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:35:20.232178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:35:20.232403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:35:20.234530 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:35:20.234753 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:35:20.236597 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 25 01:35:20.238179 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:35:20.262003 systemd[1]: Finished ensure-sysext.service.
Mar 25 01:35:20.264729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:35:20.266183 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:35:20.267530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:35:20.271600 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:35:20.277626 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:35:20.285271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:35:20.292783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:35:20.295635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:35:20.295682 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:35:20.301613 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:35:20.307604 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 25 01:35:20.308909 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 01:35:20.308945 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:35:20.310548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:35:20.310786 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:35:20.311769 systemd-resolved[1341]: Positive Trust Anchors:
Mar 25 01:35:20.311790 systemd-resolved[1341]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:35:20.311821 systemd-resolved[1341]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:35:20.318804 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:35:20.319022 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:35:20.321189 systemd-resolved[1341]: Defaulting to hostname 'linux'.
Mar 25 01:35:20.323689 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:35:20.325301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:35:20.325582 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:35:20.326874 augenrules[1411]: /sbin/augenrules: No change
Mar 25 01:35:20.327676 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:35:20.327912 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:35:20.332449 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1391)
Mar 25 01:35:20.338736 augenrules[1445]: No rules
Mar 25 01:35:20.338928 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:35:20.339213 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:35:20.342873 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 25 01:35:20.355675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:35:20.357140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:35:20.357217 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:35:20.364462 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 25 01:35:20.369680 kernel: ACPI: button: Power Button [PWRF]
Mar 25 01:35:20.377663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 25 01:35:20.380544 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 25 01:35:20.390070 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 25 01:35:20.393590 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 25 01:35:20.393894 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 25 01:35:20.398441 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 25 01:35:20.414757 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 25 01:35:20.427661 systemd-networkd[1425]: lo: Link UP
Mar 25 01:35:20.427670 systemd-networkd[1425]: lo: Gained carrier
Mar 25 01:35:20.429317 systemd-networkd[1425]: Enumeration completed
Mar 25 01:35:20.429463 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:35:20.429709 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:35:20.429720 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:35:20.430600 systemd-networkd[1425]: eth0: Link UP
Mar 25 01:35:20.430609 systemd-networkd[1425]: eth0: Gained carrier
Mar 25 01:35:20.430622 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:35:20.430832 systemd[1]: Reached target network.target - Network.
Mar 25 01:35:20.434370 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 25 01:35:20.436825 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 25 01:35:20.438117 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 25 01:35:20.441695 systemd[1]: Reached target time-set.target - System Time Set.
Mar 25 01:35:20.442470 systemd-networkd[1425]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 25 01:35:20.443006 systemd-timesyncd[1428]: Network configuration changed, trying to establish connection.
Mar 25 01:35:20.952209 systemd-resolved[1341]: Clock change detected. Flushing caches.
Mar 25 01:35:20.952252 systemd-timesyncd[1428]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 25 01:35:20.952290 systemd-timesyncd[1428]: Initial clock synchronization to Tue 2025-03-25 01:35:20.952175 UTC.
Mar 25 01:35:20.981381 kernel: mousedev: PS/2 mouse device common for all mice
Mar 25 01:35:20.980895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:35:20.988627 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 25 01:35:21.038231 kernel: kvm_amd: TSC scaling supported
Mar 25 01:35:21.038276 kernel: kvm_amd: Nested Virtualization enabled
Mar 25 01:35:21.038290 kernel: kvm_amd: Nested Paging enabled
Mar 25 01:35:21.039222 kernel: kvm_amd: LBR virtualization supported
Mar 25 01:35:21.039239 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 25 01:35:21.041029 kernel: kvm_amd: Virtual GIF supported
Mar 25 01:35:21.058897 kernel: EDAC MC: Ver: 3.0.0
Mar 25 01:35:21.093366 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 25 01:35:21.117089 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 25 01:35:21.119814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:35:21.135595 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:35:21.169253 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 25 01:35:21.171585 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:35:21.172760 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:35:21.173969 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 25 01:35:21.175240 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 25 01:35:21.176696 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 25 01:35:21.177942 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 25 01:35:21.179405 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 25 01:35:21.180655 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 25 01:35:21.180686 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:35:21.181617 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:35:21.183399 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 25 01:35:21.186339 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 25 01:35:21.189896 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 25 01:35:21.191388 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 25 01:35:21.192667 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 25 01:35:21.198304 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 25 01:35:21.200021 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 25 01:35:21.202584 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 25 01:35:21.204307 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 25 01:35:21.205495 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:35:21.206458 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:35:21.207458 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:35:21.207492 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:35:21.208531 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 25 01:35:21.210616 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 25 01:35:21.212949 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:35:21.212523 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 25 01:35:21.215360 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 25 01:35:21.216616 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 25 01:35:21.217860 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 25 01:35:21.220904 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 25 01:35:21.225207 jq[1479]: false
Mar 25 01:35:21.225703 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 25 01:35:21.228156 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 25 01:35:21.232265 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 25 01:35:21.234107 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 25 01:35:21.234551 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 25 01:35:21.236083 systemd[1]: Starting update-engine.service - Update Engine...
Mar 25 01:35:21.238570 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 25 01:35:21.246075 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 25 01:35:21.249413 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 25 01:35:21.249670 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 25 01:35:21.250029 systemd[1]: motdgen.service: Deactivated successfully.
Mar 25 01:35:21.250257 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 25 01:35:21.251647 jq[1492]: true
Mar 25 01:35:21.252599 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 25 01:35:21.252921 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 25 01:35:21.253948 dbus-daemon[1478]: [system] SELinux support is enabled
Mar 25 01:35:21.255302 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 25 01:35:21.260204 extend-filesystems[1480]: Found loop3
Mar 25 01:35:21.260204 extend-filesystems[1480]: Found loop4
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found loop5
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found sr0
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found vda
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found vda1
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found vda2
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found vda3
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found usr
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found vda4
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found vda6
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found vda7
Mar 25 01:35:21.264938 extend-filesystems[1480]: Found vda9
Mar 25 01:35:21.264938 extend-filesystems[1480]: Checking size of /dev/vda9
Mar 25 01:35:21.278892 jq[1499]: true
Mar 25 01:35:21.284638 extend-filesystems[1480]: Resized partition /dev/vda9
Mar 25 01:35:21.289661 update_engine[1488]: I20250325 01:35:21.284369 1488 main.cc:92] Flatcar Update Engine starting
Mar 25 01:35:21.284246 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 25 01:35:21.299025 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 25 01:35:21.299075 tar[1498]: linux-amd64/LICENSE
Mar 25 01:35:21.299075 tar[1498]: linux-amd64/helm
Mar 25 01:35:21.300569 extend-filesystems[1515]: resize2fs 1.47.2 (1-Jan-2025)
Mar 25 01:35:21.303261 update_engine[1488]: I20250325 01:35:21.293563 1488 update_check_scheduler.cc:74] Next update check in 3m19s
Mar 25 01:35:21.286232 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 25 01:35:21.286281 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 25 01:35:21.290000 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 25 01:35:21.290018 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 25 01:35:21.293218 systemd[1]: Started update-engine.service - Update Engine.
Mar 25 01:35:21.299075 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 25 01:35:21.318918 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1386)
Mar 25 01:35:21.337454 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 25 01:35:21.363280 extend-filesystems[1515]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 25 01:35:21.363280 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 25 01:35:21.363280 extend-filesystems[1515]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 25 01:35:21.373458 extend-filesystems[1480]: Resized filesystem in /dev/vda9
Mar 25 01:35:21.375146 bash[1531]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 01:35:21.364324 systemd-logind[1486]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 25 01:35:21.364434 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 25 01:35:21.364678 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 25 01:35:21.364992 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 25 01:35:21.365383 systemd-logind[1486]: New seat seat0.
Mar 25 01:35:21.374996 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 25 01:35:21.376467 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 25 01:35:21.381306 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 25 01:35:21.388119 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 25 01:35:21.482385 containerd[1501]: time="2025-03-25T01:35:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 25 01:35:21.484082 containerd[1501]: time="2025-03-25T01:35:21.484027941Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 25 01:35:21.494485 containerd[1501]: time="2025-03-25T01:35:21.494447732Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.312µs"
Mar 25 01:35:21.494485 containerd[1501]: time="2025-03-25T01:35:21.494475103Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 25 01:35:21.494551 containerd[1501]: time="2025-03-25T01:35:21.494495091Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 25 01:35:21.494679 containerd[1501]: time="2025-03-25T01:35:21.494650733Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 25 01:35:21.494679 containerd[1501]: time="2025-03-25T01:35:21.494672313Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 25 01:35:21.494725 containerd[1501]: time="2025-03-25T01:35:21.494696609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:35:21.494808 containerd[1501]: time="2025-03-25T01:35:21.494775096Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:35:21.494808 containerd[1501]: time="2025-03-25T01:35:21.494797798Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:35:21.495070 containerd[1501]: time="2025-03-25T01:35:21.495040123Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:35:21.495070 containerd[1501]: time="2025-03-25T01:35:21.495060611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:35:21.495124 containerd[1501]: time="2025-03-25T01:35:21.495072724Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:35:21.495124 containerd[1501]: time="2025-03-25T01:35:21.495082602Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 25 01:35:21.495203 containerd[1501]: time="2025-03-25T01:35:21.495177390Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 25 01:35:21.495438 containerd[1501]: time="2025-03-25T01:35:21.495411249Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:35:21.495461 containerd[1501]: time="2025-03-25T01:35:21.495446244Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:35:21.495461 containerd[1501]: time="2025-03-25T01:35:21.495458778Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 25 01:35:21.495501 containerd[1501]: time="2025-03-25T01:35:21.495492221Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 25 01:35:21.495744 containerd[1501]: time="2025-03-25T01:35:21.495723695Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 25 01:35:21.495815 containerd[1501]: time="2025-03-25T01:35:21.495795419Z" level=info msg="metadata content store policy set" policy=shared
Mar 25 01:35:21.502520 containerd[1501]: time="2025-03-25T01:35:21.502480676Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 25 01:35:21.502520 containerd[1501]: time="2025-03-25T01:35:21.502519269Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 25 01:35:21.502576 containerd[1501]: time="2025-03-25T01:35:21.502534087Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 25 01:35:21.502576 containerd[1501]: time="2025-03-25T01:35:21.502546069Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 25 01:35:21.502576 containerd[1501]: time="2025-03-25T01:35:21.502566868Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 25 01:35:21.502631 containerd[1501]: time="2025-03-25T01:35:21.502580123Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 25 01:35:21.502631 containerd[1501]: time="2025-03-25T01:35:21.502593728Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 25 01:35:21.502631 containerd[1501]: time="2025-03-25T01:35:21.502606072Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 25 01:35:21.502631 containerd[1501]: time="2025-03-25T01:35:21.502616782Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 25 01:35:21.502631 containerd[1501]: time="2025-03-25T01:35:21.502627822Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 25 01:35:21.502738 containerd[1501]: time="2025-03-25T01:35:21.502638352Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 25 01:35:21.502738 containerd[1501]: time="2025-03-25T01:35:21.502651236Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 25 01:35:21.502780 containerd[1501]: time="2025-03-25T01:35:21.502761523Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 25 01:35:21.502808 containerd[1501]: time="2025-03-25T01:35:21.502781921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 25 01:35:21.502808 containerd[1501]: time="2025-03-25T01:35:21.502804293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 25 01:35:21.502846 containerd[1501]: time="2025-03-25T01:35:21.502816376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 25 01:35:21.502846 containerd[1501]: time="2025-03-25T01:35:21.502828489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 25 01:35:21.502846 containerd[1501]: time="2025-03-25T01:35:21.502839069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 25 01:35:21.502927 containerd[1501]: time="2025-03-25T01:35:21.502852053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 25 01:35:21.502927 containerd[1501]: time="2025-03-25T01:35:21.502863905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 25 01:35:21.502927 containerd[1501]: time="2025-03-25T01:35:21.502894543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 25 01:35:21.502927 containerd[1501]: time="2025-03-25T01:35:21.502908318Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 25 01:35:21.502927 containerd[1501]: time="2025-03-25T01:35:21.502918988Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 25 01:35:21.503024 containerd[1501]: time="2025-03-25T01:35:21.502988198Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 25 01:35:21.503024 containerd[1501]: time="2025-03-25T01:35:21.503001774Z" level=info msg="Start snapshots syncer"
Mar 25 01:35:21.503063 containerd[1501]: time="2025-03-25T01:35:21.503026049Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 25 01:35:21.503336 containerd[1501]: time="2025-03-25T01:35:21.503294423Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 25 01:35:21.503448 containerd[1501]: time="2025-03-25T01:35:21.503341120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 25 01:35:21.503448 containerd[1501]: time="2025-03-25T01:35:21.503406302Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503498946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503525516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503535394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503545223Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503556765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503568487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503593113Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503616266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503627918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503637626Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503667402Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503678773Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 25 01:35:21.503796 containerd[1501]: time="2025-03-25T01:35:21.503688421Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 25 01:35:21.504062 containerd[1501]: time="2025-03-25T01:35:21.503697549Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 25 01:35:21.504062 containerd[1501]: time="2025-03-25T01:35:21.503705544Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 25 01:35:21.504062 containerd[1501]: time="2025-03-25T01:35:21.503781927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 25 01:35:21.504062 containerd[1501]: time="2025-03-25T01:35:21.503800361Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 25 01:35:21.504062 containerd[1501]: time="2025-03-25T01:35:21.503821772Z" level=info msg="runtime interface created"
Mar 25 01:35:21.504062 containerd[1501]: time="2025-03-25T01:35:21.503828634Z" level=info msg="created NRI interface"
Mar 25 01:35:21.504062 containerd[1501]: time="2025-03-25T01:35:21.503843452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 25 01:35:21.504062 containerd[1501]: time="2025-03-25T01:35:21.503854182Z" level=info msg="Connect containerd service"
Mar 25 01:35:21.504062 containerd[1501]: time="2025-03-25T01:35:21.503890140Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 25 01:35:21.506274
containerd[1501]: time="2025-03-25T01:35:21.505592903Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:35:21.586755 containerd[1501]: time="2025-03-25T01:35:21.586627111Z" level=info msg="Start subscribing containerd event" Mar 25 01:35:21.586755 containerd[1501]: time="2025-03-25T01:35:21.586722049Z" level=info msg="Start recovering state" Mar 25 01:35:21.586885 containerd[1501]: time="2025-03-25T01:35:21.586865087Z" level=info msg="Start event monitor" Mar 25 01:35:21.587005 containerd[1501]: time="2025-03-25T01:35:21.586955196Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:35:21.587005 containerd[1501]: time="2025-03-25T01:35:21.587000602Z" level=info msg="Start streaming server" Mar 25 01:35:21.587047 containerd[1501]: time="2025-03-25T01:35:21.587012574Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:35:21.587047 containerd[1501]: time="2025-03-25T01:35:21.587022112Z" level=info msg="runtime interface starting up..." Mar 25 01:35:21.587047 containerd[1501]: time="2025-03-25T01:35:21.587028634Z" level=info msg="starting plugins..." Mar 25 01:35:21.587047 containerd[1501]: time="2025-03-25T01:35:21.587045235Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:35:21.587136 containerd[1501]: time="2025-03-25T01:35:21.586928937Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 01:35:21.587190 containerd[1501]: time="2025-03-25T01:35:21.587165992Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:35:21.587514 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 25 01:35:21.587892 containerd[1501]: time="2025-03-25T01:35:21.587858150Z" level=info msg="containerd successfully booted in 0.106001s" Mar 25 01:35:21.618845 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 01:35:21.643510 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 01:35:21.646500 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 25 01:35:21.671203 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 01:35:21.671464 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 01:35:21.674098 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 01:35:21.702503 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 01:35:21.705524 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 01:35:21.707681 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 25 01:35:21.709008 systemd[1]: Reached target getty.target - Login Prompts. Mar 25 01:35:21.745406 tar[1498]: linux-amd64/README.md Mar 25 01:35:21.764261 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:35:22.913081 systemd-networkd[1425]: eth0: Gained IPv6LL Mar 25 01:35:22.916736 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 01:35:22.918547 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 01:35:22.921271 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 25 01:35:22.924057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:35:22.926204 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 01:35:22.972297 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 01:35:22.974405 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Mar 25 01:35:22.974668 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 25 01:35:22.977194 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 25 01:35:23.620144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:35:23.621752 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:35:23.625986 systemd[1]: Startup finished in 670ms (kernel) + 5.770s (initrd) + 4.695s (userspace) = 11.136s. Mar 25 01:35:23.654266 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:35:24.061998 kubelet[1603]: E0325 01:35:24.061844 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:35:24.066079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:35:24.066302 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:35:24.066699 systemd[1]: kubelet.service: Consumed 967ms CPU time, 253.6M memory peak. Mar 25 01:35:26.170269 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 01:35:26.171624 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:42908.service - OpenSSH per-connection server daemon (10.0.0.1:42908). Mar 25 01:35:26.229514 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 42908 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:35:26.231700 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:26.242228 systemd-logind[1486]: New session 1 of user core. 
Mar 25 01:35:26.243522 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:35:26.244739 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:35:26.270960 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:35:26.273808 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 25 01:35:26.289271 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:35:26.291365 systemd-logind[1486]: New session c1 of user core. Mar 25 01:35:26.440866 systemd[1621]: Queued start job for default target default.target. Mar 25 01:35:26.456430 systemd[1621]: Created slice app.slice - User Application Slice. Mar 25 01:35:26.456462 systemd[1621]: Reached target paths.target - Paths. Mar 25 01:35:26.456509 systemd[1621]: Reached target timers.target - Timers. Mar 25 01:35:26.458322 systemd[1621]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:35:26.469242 systemd[1621]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:35:26.469386 systemd[1621]: Reached target sockets.target - Sockets. Mar 25 01:35:26.469432 systemd[1621]: Reached target basic.target - Basic System. Mar 25 01:35:26.469475 systemd[1621]: Reached target default.target - Main User Target. Mar 25 01:35:26.469515 systemd[1621]: Startup finished in 170ms. Mar 25 01:35:26.469886 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:35:26.471513 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:35:26.538764 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:42918.service - OpenSSH per-connection server daemon (10.0.0.1:42918). 
Mar 25 01:35:26.576883 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 42918 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:35:26.578302 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:26.582245 systemd-logind[1486]: New session 2 of user core. Mar 25 01:35:26.597015 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 25 01:35:26.649619 sshd[1634]: Connection closed by 10.0.0.1 port 42918 Mar 25 01:35:26.649934 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:26.662527 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:42918.service: Deactivated successfully. Mar 25 01:35:26.664346 systemd[1]: session-2.scope: Deactivated successfully. Mar 25 01:35:26.666007 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Mar 25 01:35:26.667246 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:42928.service - OpenSSH per-connection server daemon (10.0.0.1:42928). Mar 25 01:35:26.667950 systemd-logind[1486]: Removed session 2. Mar 25 01:35:26.715621 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 42928 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:35:26.717081 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:26.721030 systemd-logind[1486]: New session 3 of user core. Mar 25 01:35:26.738997 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:35:26.786979 sshd[1642]: Connection closed by 10.0.0.1 port 42928 Mar 25 01:35:26.787278 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:26.797302 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:42928.service: Deactivated successfully. Mar 25 01:35:26.799190 systemd[1]: session-3.scope: Deactivated successfully. Mar 25 01:35:26.800857 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. 
Mar 25 01:35:26.802086 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:42940.service - OpenSSH per-connection server daemon (10.0.0.1:42940). Mar 25 01:35:26.802740 systemd-logind[1486]: Removed session 3. Mar 25 01:35:26.848153 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 42940 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:35:26.849397 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:26.853194 systemd-logind[1486]: New session 4 of user core. Mar 25 01:35:26.862987 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:35:26.914285 sshd[1650]: Connection closed by 10.0.0.1 port 42940 Mar 25 01:35:26.914515 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:26.929388 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:42940.service: Deactivated successfully. Mar 25 01:35:26.931144 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:35:26.932706 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Mar 25 01:35:26.933904 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:42954.service - OpenSSH per-connection server daemon (10.0.0.1:42954). Mar 25 01:35:26.934583 systemd-logind[1486]: Removed session 4. Mar 25 01:35:26.980201 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 42954 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:35:26.981417 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:26.985348 systemd-logind[1486]: New session 5 of user core. Mar 25 01:35:26.994984 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 25 01:35:27.052942 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:35:27.053354 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:35:27.073070 sudo[1659]: pam_unix(sudo:session): session closed for user root Mar 25 01:35:27.074695 sshd[1658]: Connection closed by 10.0.0.1 port 42954 Mar 25 01:35:27.075018 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:27.087352 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:42954.service: Deactivated successfully. Mar 25 01:35:27.089584 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:35:27.091527 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Mar 25 01:35:27.093130 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:42956.service - OpenSSH per-connection server daemon (10.0.0.1:42956). Mar 25 01:35:27.093946 systemd-logind[1486]: Removed session 5. Mar 25 01:35:27.139820 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 42956 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:35:27.141172 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:27.145039 systemd-logind[1486]: New session 6 of user core. Mar 25 01:35:27.162993 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 25 01:35:27.215334 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:35:27.215637 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:35:27.218858 sudo[1669]: pam_unix(sudo:session): session closed for user root Mar 25 01:35:27.224844 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:35:27.225187 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:35:27.234285 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:35:27.281620 augenrules[1691]: No rules Mar 25 01:35:27.282572 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:35:27.282856 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:35:27.283898 sudo[1668]: pam_unix(sudo:session): session closed for user root Mar 25 01:35:27.285238 sshd[1667]: Connection closed by 10.0.0.1 port 42956 Mar 25 01:35:27.285572 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:27.293629 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:42956.service: Deactivated successfully. Mar 25 01:35:27.295579 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:35:27.297161 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Mar 25 01:35:27.298360 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:42964.service - OpenSSH per-connection server daemon (10.0.0.1:42964). Mar 25 01:35:27.299105 systemd-logind[1486]: Removed session 6. Mar 25 01:35:27.349776 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 42964 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:35:27.351102 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:27.354842 systemd-logind[1486]: New session 7 of user core. 
Mar 25 01:35:27.364984 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 01:35:27.416600 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:35:27.416944 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:35:27.707844 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 01:35:27.727185 (dockerd)[1723]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:35:27.974845 dockerd[1723]: time="2025-03-25T01:35:27.974709436Z" level=info msg="Starting up" Mar 25 01:35:27.977455 dockerd[1723]: time="2025-03-25T01:35:27.977415420Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:35:28.281817 dockerd[1723]: time="2025-03-25T01:35:28.281665748Z" level=info msg="Loading containers: start." Mar 25 01:35:28.445902 kernel: Initializing XFRM netlink socket Mar 25 01:35:28.515848 systemd-networkd[1425]: docker0: Link UP Mar 25 01:35:28.582332 dockerd[1723]: time="2025-03-25T01:35:28.582236526Z" level=info msg="Loading containers: done." Mar 25 01:35:28.595265 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1767831051-merged.mount: Deactivated successfully. 
Mar 25 01:35:28.598190 dockerd[1723]: time="2025-03-25T01:35:28.598147124Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:35:28.598249 dockerd[1723]: time="2025-03-25T01:35:28.598239487Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:35:28.598375 dockerd[1723]: time="2025-03-25T01:35:28.598355945Z" level=info msg="Daemon has completed initialization" Mar 25 01:35:28.636452 dockerd[1723]: time="2025-03-25T01:35:28.636392087Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:35:28.636560 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:35:29.165529 containerd[1501]: time="2025-03-25T01:35:29.165491126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 25 01:35:29.730432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731737676.mount: Deactivated successfully. 
Mar 25 01:35:30.592248 containerd[1501]: time="2025-03-25T01:35:30.592189144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:30.593001 containerd[1501]: time="2025-03-25T01:35:30.592927138Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=28682430" Mar 25 01:35:30.594311 containerd[1501]: time="2025-03-25T01:35:30.594282741Z" level=info msg="ImageCreate event name:\"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:30.596789 containerd[1501]: time="2025-03-25T01:35:30.596738827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:30.597578 containerd[1501]: time="2025-03-25T01:35:30.597543145Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"28679230\" in 1.432009278s" Mar 25 01:35:30.597621 containerd[1501]: time="2025-03-25T01:35:30.597582910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 25 01:35:30.598173 containerd[1501]: time="2025-03-25T01:35:30.598137690Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 25 01:35:31.707144 containerd[1501]: time="2025-03-25T01:35:31.707083369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:31.708014 containerd[1501]: time="2025-03-25T01:35:31.707930127Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=24779684" Mar 25 01:35:31.709090 containerd[1501]: time="2025-03-25T01:35:31.709057982Z" level=info msg="ImageCreate event name:\"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:31.711794 containerd[1501]: time="2025-03-25T01:35:31.711766732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:31.712640 containerd[1501]: time="2025-03-25T01:35:31.712596769Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"26267292\" in 1.114420075s" Mar 25 01:35:31.712640 containerd[1501]: time="2025-03-25T01:35:31.712631283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 25 01:35:31.713138 containerd[1501]: time="2025-03-25T01:35:31.713115161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 25 01:35:33.014336 containerd[1501]: time="2025-03-25T01:35:33.014277324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:33.015201 containerd[1501]: time="2025-03-25T01:35:33.015120255Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=19171419" Mar 25 01:35:33.016388 containerd[1501]: time="2025-03-25T01:35:33.016354289Z" level=info msg="ImageCreate event name:\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:33.018980 containerd[1501]: time="2025-03-25T01:35:33.018952953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:33.019955 containerd[1501]: time="2025-03-25T01:35:33.019912102Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"20659045\" in 1.306765632s" Mar 25 01:35:33.019995 containerd[1501]: time="2025-03-25T01:35:33.019955052Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 25 01:35:33.020576 containerd[1501]: time="2025-03-25T01:35:33.020403764Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 25 01:35:34.091439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265295676.mount: Deactivated successfully. Mar 25 01:35:34.092693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 01:35:34.094140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:35:34.295460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:35:34.306298 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:35:34.352677 kubelet[2008]: E0325 01:35:34.352580 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:35:34.358345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:35:34.358621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:35:34.359081 systemd[1]: kubelet.service: Consumed 235ms CPU time, 105.2M memory peak. Mar 25 01:35:35.119591 containerd[1501]: time="2025-03-25T01:35:35.119512341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:35.120340 containerd[1501]: time="2025-03-25T01:35:35.120290882Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185" Mar 25 01:35:35.121676 containerd[1501]: time="2025-03-25T01:35:35.121644811Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:35.123642 containerd[1501]: time="2025-03-25T01:35:35.123575161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:35.124150 containerd[1501]: time="2025-03-25T01:35:35.124118850Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id 
\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 2.103685732s" Mar 25 01:35:35.124177 containerd[1501]: time="2025-03-25T01:35:35.124150670Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 25 01:35:35.124710 containerd[1501]: time="2025-03-25T01:35:35.124677859Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 25 01:35:35.825548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1859068028.mount: Deactivated successfully. Mar 25 01:35:36.459014 containerd[1501]: time="2025-03-25T01:35:36.458949645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:36.459668 containerd[1501]: time="2025-03-25T01:35:36.459620032Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Mar 25 01:35:36.460737 containerd[1501]: time="2025-03-25T01:35:36.460710306Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:36.463148 containerd[1501]: time="2025-03-25T01:35:36.463113804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:36.464093 containerd[1501]: time="2025-03-25T01:35:36.464053316Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.339336884s" Mar 25 01:35:36.464093 containerd[1501]: time="2025-03-25T01:35:36.464087029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Mar 25 01:35:36.464600 containerd[1501]: time="2025-03-25T01:35:36.464560687Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 25 01:35:36.941683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024666627.mount: Deactivated successfully. Mar 25 01:35:36.947268 containerd[1501]: time="2025-03-25T01:35:36.947222420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:35:36.948094 containerd[1501]: time="2025-03-25T01:35:36.948023062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 25 01:35:36.949064 containerd[1501]: time="2025-03-25T01:35:36.949025973Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:35:36.950945 containerd[1501]: time="2025-03-25T01:35:36.950902392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:35:36.951540 containerd[1501]: time="2025-03-25T01:35:36.951507727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 486.909419ms" Mar 25 01:35:36.951565 containerd[1501]: time="2025-03-25T01:35:36.951541821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 25 01:35:36.952005 containerd[1501]: time="2025-03-25T01:35:36.951985924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 25 01:35:37.466259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2333238074.mount: Deactivated successfully. Mar 25 01:35:38.939674 containerd[1501]: time="2025-03-25T01:35:38.939598947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:38.940419 containerd[1501]: time="2025-03-25T01:35:38.940332894Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Mar 25 01:35:38.941652 containerd[1501]: time="2025-03-25T01:35:38.941615770Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:38.944209 containerd[1501]: time="2025-03-25T01:35:38.944172414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:38.945086 containerd[1501]: time="2025-03-25T01:35:38.945036976Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest 
\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.993024864s" Mar 25 01:35:38.945124 containerd[1501]: time="2025-03-25T01:35:38.945082732Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Mar 25 01:35:40.737563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:35:40.737748 systemd[1]: kubelet.service: Consumed 235ms CPU time, 105.2M memory peak. Mar 25 01:35:40.740037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:35:40.765098 systemd[1]: Reload requested from client PID 2158 ('systemctl') (unit session-7.scope)... Mar 25 01:35:40.765111 systemd[1]: Reloading... Mar 25 01:35:40.866898 zram_generator::config[2204]: No configuration found. Mar 25 01:35:41.027436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:35:41.127615 systemd[1]: Reloading finished in 362 ms. Mar 25 01:35:41.181983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:35:41.184453 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:35:41.186103 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:35:41.186384 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:35:41.186418 systemd[1]: kubelet.service: Consumed 152ms CPU time, 91.9M memory peak. Mar 25 01:35:41.187941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:35:41.363248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:35:41.374160 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:35:41.411419 kubelet[2252]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:35:41.411419 kubelet[2252]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 25 01:35:41.411419 kubelet[2252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:35:41.411744 kubelet[2252]: I0325 01:35:41.411507 2252 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:35:41.783270 kubelet[2252]: I0325 01:35:41.782747 2252 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 25 01:35:41.783270 kubelet[2252]: I0325 01:35:41.782776 2252 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:35:41.783468 kubelet[2252]: I0325 01:35:41.783328 2252 server.go:954] "Client rotation is on, will bootstrap in background" Mar 25 01:35:41.801128 kubelet[2252]: E0325 01:35:41.801083 2252 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:35:41.803003 kubelet[2252]: I0325 01:35:41.802955 
2252 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:35:41.811545 kubelet[2252]: I0325 01:35:41.811522 2252 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 01:35:41.817185 kubelet[2252]: I0325 01:35:41.817156 2252 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 25 01:35:41.818205 kubelet[2252]: I0325 01:35:41.818163 2252 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:35:41.818381 kubelet[2252]: I0325 01:35:41.818193 2252 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CP
UManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 01:35:41.818381 kubelet[2252]: I0325 01:35:41.818377 2252 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 01:35:41.818501 kubelet[2252]: I0325 01:35:41.818387 2252 container_manager_linux.go:304] "Creating device plugin manager" Mar 25 01:35:41.818530 kubelet[2252]: I0325 01:35:41.818513 2252 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:35:41.821258 kubelet[2252]: I0325 01:35:41.821225 2252 kubelet.go:446] "Attempting to sync node with API server" Mar 25 01:35:41.821258 kubelet[2252]: I0325 01:35:41.821255 2252 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:35:41.821329 kubelet[2252]: I0325 01:35:41.821280 2252 kubelet.go:352] "Adding apiserver pod source" Mar 25 01:35:41.821329 kubelet[2252]: I0325 01:35:41.821294 2252 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:35:41.826266 kubelet[2252]: W0325 01:35:41.825607 2252 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Mar 25 01:35:41.826266 kubelet[2252]: E0325 01:35:41.825675 2252 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:35:41.826266 kubelet[2252]: I0325 01:35:41.825752 2252 kuberuntime_manager.go:269] 
"Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:35:41.826266 kubelet[2252]: W0325 01:35:41.825847 2252 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Mar 25 01:35:41.826266 kubelet[2252]: E0325 01:35:41.825949 2252 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:35:41.826266 kubelet[2252]: I0325 01:35:41.826132 2252 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:35:41.827098 kubelet[2252]: W0325 01:35:41.826737 2252 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 25 01:35:41.828761 kubelet[2252]: I0325 01:35:41.828729 2252 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 25 01:35:41.828809 kubelet[2252]: I0325 01:35:41.828780 2252 server.go:1287] "Started kubelet" Mar 25 01:35:41.830625 kubelet[2252]: I0325 01:35:41.830595 2252 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:35:41.830806 kubelet[2252]: I0325 01:35:41.830766 2252 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:35:41.831173 kubelet[2252]: I0325 01:35:41.831157 2252 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:35:41.834887 kubelet[2252]: I0325 01:35:41.832542 2252 server.go:490] "Adding debug handlers to kubelet server" Mar 25 01:35:41.834887 kubelet[2252]: I0325 01:35:41.832905 2252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:35:41.834887 kubelet[2252]: I0325 01:35:41.833601 2252 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 01:35:41.834887 kubelet[2252]: E0325 01:35:41.833315 2252 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182fe7e2bc9dc078 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-25 01:35:41.82875148 +0000 UTC m=+0.451073766,LastTimestamp:2025-03-25 01:35:41.82875148 +0000 UTC m=+0.451073766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 25 01:35:41.835559 kubelet[2252]: E0325 01:35:41.835532 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:35:41.835641 kubelet[2252]: I0325 01:35:41.835630 2252 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 25 01:35:41.835842 kubelet[2252]: I0325 01:35:41.835827 2252 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:35:41.835957 kubelet[2252]: I0325 01:35:41.835946 2252 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:35:41.836234 kubelet[2252]: E0325 01:35:41.836197 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Mar 25 01:35:41.836346 kubelet[2252]: W0325 01:35:41.836316 2252 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Mar 25 01:35:41.836409 kubelet[2252]: E0325 01:35:41.836397 2252 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:35:41.836459 kubelet[2252]: E0325 01:35:41.836350 2252 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:35:41.836715 kubelet[2252]: I0325 01:35:41.836688 2252 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:35:41.837650 kubelet[2252]: I0325 01:35:41.837532 2252 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:35:41.837650 kubelet[2252]: I0325 01:35:41.837564 2252 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:35:41.849152 kubelet[2252]: I0325 01:35:41.849014 2252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:35:41.850378 kubelet[2252]: I0325 01:35:41.850335 2252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 25 01:35:41.850378 kubelet[2252]: I0325 01:35:41.850373 2252 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 25 01:35:41.850449 kubelet[2252]: I0325 01:35:41.850397 2252 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 25 01:35:41.850449 kubelet[2252]: I0325 01:35:41.850407 2252 kubelet.go:2388] "Starting kubelet main sync loop" Mar 25 01:35:41.850488 kubelet[2252]: E0325 01:35:41.850460 2252 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:35:41.854634 kubelet[2252]: W0325 01:35:41.854593 2252 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Mar 25 01:35:41.854672 kubelet[2252]: E0325 01:35:41.854636 2252 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:35:41.855035 kubelet[2252]: I0325 01:35:41.855000 2252 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 25 01:35:41.855035 kubelet[2252]: I0325 01:35:41.855018 2252 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 25 01:35:41.855035 kubelet[2252]: I0325 01:35:41.855036 2252 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:35:41.936592 kubelet[2252]: E0325 01:35:41.936542 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:35:41.950786 kubelet[2252]: E0325 01:35:41.950738 2252 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 25 01:35:42.037268 kubelet[2252]: E0325 01:35:42.037153 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:35:42.037825 kubelet[2252]: E0325 01:35:42.037712 2252 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Mar 25 01:35:42.125694 kubelet[2252]: I0325 01:35:42.125638 2252 policy_none.go:49] "None policy: Start" Mar 25 01:35:42.125694 kubelet[2252]: I0325 01:35:42.125677 2252 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 25 01:35:42.125694 kubelet[2252]: I0325 01:35:42.125691 2252 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:35:42.134538 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 25 01:35:42.140453 kubelet[2252]: E0325 01:35:42.138184 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 25 01:35:42.144672 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 01:35:42.148268 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 25 01:35:42.151361 kubelet[2252]: E0325 01:35:42.151323 2252 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 25 01:35:42.154016 kubelet[2252]: I0325 01:35:42.153977 2252 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:35:42.154278 kubelet[2252]: I0325 01:35:42.154262 2252 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 01:35:42.154330 kubelet[2252]: I0325 01:35:42.154284 2252 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:35:42.155013 kubelet[2252]: I0325 01:35:42.154666 2252 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:35:42.155490 kubelet[2252]: E0325 01:35:42.155448 2252 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 25 01:35:42.155552 kubelet[2252]: E0325 01:35:42.155498 2252 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 25 01:35:42.256859 kubelet[2252]: I0325 01:35:42.256797 2252 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 25 01:35:42.257405 kubelet[2252]: E0325 01:35:42.257343 2252 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Mar 25 01:35:42.439345 kubelet[2252]: E0325 01:35:42.439177 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Mar 25 01:35:42.459614 kubelet[2252]: I0325 01:35:42.459570 2252 
kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 25 01:35:42.460140 kubelet[2252]: E0325 01:35:42.460094 2252 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Mar 25 01:35:42.561467 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. Mar 25 01:35:42.578362 kubelet[2252]: E0325 01:35:42.578310 2252 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 25 01:35:42.581074 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. Mar 25 01:35:42.582683 kubelet[2252]: E0325 01:35:42.582663 2252 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 25 01:35:42.585169 systemd[1]: Created slice kubepods-burstable-pod8136c544e6c0272c43808d95c920f300.slice - libcontainer container kubepods-burstable-pod8136c544e6c0272c43808d95c920f300.slice. 
Mar 25 01:35:42.586879 kubelet[2252]: E0325 01:35:42.586844 2252 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 25 01:35:42.640121 kubelet[2252]: I0325 01:35:42.640086 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:35:42.640121 kubelet[2252]: I0325 01:35:42.640115 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:35:42.640238 kubelet[2252]: I0325 01:35:42.640136 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:35:42.640238 kubelet[2252]: I0325 01:35:42.640152 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:35:42.640238 kubelet[2252]: I0325 01:35:42.640167 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 25 01:35:42.640238 kubelet[2252]: I0325 01:35:42.640183 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 25 01:35:42.640238 kubelet[2252]: I0325 01:35:42.640198 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8136c544e6c0272c43808d95c920f300-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8136c544e6c0272c43808d95c920f300\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:35:42.640398 kubelet[2252]: I0325 01:35:42.640262 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8136c544e6c0272c43808d95c920f300-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8136c544e6c0272c43808d95c920f300\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:35:42.640398 kubelet[2252]: I0325 01:35:42.640351 2252 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8136c544e6c0272c43808d95c920f300-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8136c544e6c0272c43808d95c920f300\") " pod="kube-system/kube-apiserver-localhost" Mar 25 01:35:42.861959 kubelet[2252]: I0325 01:35:42.861806 2252 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 25 01:35:42.862236 
kubelet[2252]: E0325 01:35:42.862193 2252 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Mar 25 01:35:42.880100 containerd[1501]: time="2025-03-25T01:35:42.880047906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:42.884238 containerd[1501]: time="2025-03-25T01:35:42.884195484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:42.887824 containerd[1501]: time="2025-03-25T01:35:42.887791618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8136c544e6c0272c43808d95c920f300,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:42.911368 containerd[1501]: time="2025-03-25T01:35:42.911307459Z" level=info msg="connecting to shim 6a15909ec9e965eb38b130530a94baea0eabc489e36bd2487e8da9ed66a9689e" address="unix:///run/containerd/s/33e846e5d8c5196cadd6cde322ce9b9c5e1010f27cc303b261aeef8a69a4a2fd" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:42.923178 containerd[1501]: time="2025-03-25T01:35:42.922906491Z" level=info msg="connecting to shim bf513ac2ef5894bb766a47d2fc0629671682392d233507094452b64f82a16e59" address="unix:///run/containerd/s/177a53d5fb3c2fbb69c522f332f319a59bf346d275d2d3e9e86c10fd50fe75d0" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:42.940967 containerd[1501]: time="2025-03-25T01:35:42.940841415Z" level=info msg="connecting to shim 8c72222bd2cbc23ceeb7e1f528be343fe22bc0531deabc3027336ecdc1b3d95c" address="unix:///run/containerd/s/57da0c008f37546cc9069c1e35447580bd22ec2d97d684584515c0ea55dd0121" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:42.943127 systemd[1]: Started 
cri-containerd-6a15909ec9e965eb38b130530a94baea0eabc489e36bd2487e8da9ed66a9689e.scope - libcontainer container 6a15909ec9e965eb38b130530a94baea0eabc489e36bd2487e8da9ed66a9689e.
Mar 25 01:35:42.952661 systemd[1]: Started cri-containerd-bf513ac2ef5894bb766a47d2fc0629671682392d233507094452b64f82a16e59.scope - libcontainer container bf513ac2ef5894bb766a47d2fc0629671682392d233507094452b64f82a16e59.
Mar 25 01:35:42.960775 systemd[1]: Started cri-containerd-8c72222bd2cbc23ceeb7e1f528be343fe22bc0531deabc3027336ecdc1b3d95c.scope - libcontainer container 8c72222bd2cbc23ceeb7e1f528be343fe22bc0531deabc3027336ecdc1b3d95c.
Mar 25 01:35:42.996058 containerd[1501]: time="2025-03-25T01:35:42.995949803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a15909ec9e965eb38b130530a94baea0eabc489e36bd2487e8da9ed66a9689e\""
Mar 25 01:35:42.998584 containerd[1501]: time="2025-03-25T01:35:42.998522718Z" level=info msg="CreateContainer within sandbox \"6a15909ec9e965eb38b130530a94baea0eabc489e36bd2487e8da9ed66a9689e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 25 01:35:43.006202 containerd[1501]: time="2025-03-25T01:35:43.006163858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf513ac2ef5894bb766a47d2fc0629671682392d233507094452b64f82a16e59\""
Mar 25 01:35:43.008467 containerd[1501]: time="2025-03-25T01:35:43.008436280Z" level=info msg="CreateContainer within sandbox \"bf513ac2ef5894bb766a47d2fc0629671682392d233507094452b64f82a16e59\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 25 01:35:43.008758 containerd[1501]: time="2025-03-25T01:35:43.008729900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8136c544e6c0272c43808d95c920f300,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c72222bd2cbc23ceeb7e1f528be343fe22bc0531deabc3027336ecdc1b3d95c\""
Mar 25 01:35:43.010421 containerd[1501]: time="2025-03-25T01:35:43.010384173Z" level=info msg="CreateContainer within sandbox \"8c72222bd2cbc23ceeb7e1f528be343fe22bc0531deabc3027336ecdc1b3d95c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 25 01:35:43.010828 kubelet[2252]: W0325 01:35:43.010772 2252 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Mar 25 01:35:43.010864 kubelet[2252]: E0325 01:35:43.010835 2252 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:35:43.012487 containerd[1501]: time="2025-03-25T01:35:43.012456770Z" level=info msg="Container f314a5dd6947821db9d7a325e3cffa2a7b3aedb7c16e96273c1f31067522d750: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:35:43.019707 containerd[1501]: time="2025-03-25T01:35:43.019674295Z" level=info msg="Container 9827154d95d213e8ee08c960c7f42bff68df551d19b5741a74ef72fda2ddb92f: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:35:43.025493 containerd[1501]: time="2025-03-25T01:35:43.025463071Z" level=info msg="CreateContainer within sandbox \"6a15909ec9e965eb38b130530a94baea0eabc489e36bd2487e8da9ed66a9689e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f314a5dd6947821db9d7a325e3cffa2a7b3aedb7c16e96273c1f31067522d750\""
Mar 25 01:35:43.025975 containerd[1501]: time="2025-03-25T01:35:43.025951287Z" level=info msg="StartContainer for \"f314a5dd6947821db9d7a325e3cffa2a7b3aedb7c16e96273c1f31067522d750\""
Mar 25 01:35:43.026998 containerd[1501]: time="2025-03-25T01:35:43.026962794Z" level=info msg="connecting to shim f314a5dd6947821db9d7a325e3cffa2a7b3aedb7c16e96273c1f31067522d750" address="unix:///run/containerd/s/33e846e5d8c5196cadd6cde322ce9b9c5e1010f27cc303b261aeef8a69a4a2fd" protocol=ttrpc version=3
Mar 25 01:35:43.027909 containerd[1501]: time="2025-03-25T01:35:43.027746393Z" level=info msg="Container d4a4b89b538e0c3475b0585b00e382aa86401ff1164c9b493b709ede7f79048b: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:35:43.032497 containerd[1501]: time="2025-03-25T01:35:43.032460814Z" level=info msg="CreateContainer within sandbox \"bf513ac2ef5894bb766a47d2fc0629671682392d233507094452b64f82a16e59\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9827154d95d213e8ee08c960c7f42bff68df551d19b5741a74ef72fda2ddb92f\""
Mar 25 01:35:43.033020 containerd[1501]: time="2025-03-25T01:35:43.032990468Z" level=info msg="StartContainer for \"9827154d95d213e8ee08c960c7f42bff68df551d19b5741a74ef72fda2ddb92f\""
Mar 25 01:35:43.034116 containerd[1501]: time="2025-03-25T01:35:43.034089218Z" level=info msg="connecting to shim 9827154d95d213e8ee08c960c7f42bff68df551d19b5741a74ef72fda2ddb92f" address="unix:///run/containerd/s/177a53d5fb3c2fbb69c522f332f319a59bf346d275d2d3e9e86c10fd50fe75d0" protocol=ttrpc version=3
Mar 25 01:35:43.034792 containerd[1501]: time="2025-03-25T01:35:43.034766669Z" level=info msg="CreateContainer within sandbox \"8c72222bd2cbc23ceeb7e1f528be343fe22bc0531deabc3027336ecdc1b3d95c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4a4b89b538e0c3475b0585b00e382aa86401ff1164c9b493b709ede7f79048b\""
Mar 25 01:35:43.035157 containerd[1501]: time="2025-03-25T01:35:43.035128267Z" level=info msg="StartContainer for \"d4a4b89b538e0c3475b0585b00e382aa86401ff1164c9b493b709ede7f79048b\""
Mar 25 01:35:43.036125 containerd[1501]: time="2025-03-25T01:35:43.036102013Z" level=info msg="connecting to shim d4a4b89b538e0c3475b0585b00e382aa86401ff1164c9b493b709ede7f79048b" address="unix:///run/containerd/s/57da0c008f37546cc9069c1e35447580bd22ec2d97d684584515c0ea55dd0121" protocol=ttrpc version=3
Mar 25 01:35:43.051026 systemd[1]: Started cri-containerd-f314a5dd6947821db9d7a325e3cffa2a7b3aedb7c16e96273c1f31067522d750.scope - libcontainer container f314a5dd6947821db9d7a325e3cffa2a7b3aedb7c16e96273c1f31067522d750.
Mar 25 01:35:43.055017 systemd[1]: Started cri-containerd-9827154d95d213e8ee08c960c7f42bff68df551d19b5741a74ef72fda2ddb92f.scope - libcontainer container 9827154d95d213e8ee08c960c7f42bff68df551d19b5741a74ef72fda2ddb92f.
Mar 25 01:35:43.056603 systemd[1]: Started cri-containerd-d4a4b89b538e0c3475b0585b00e382aa86401ff1164c9b493b709ede7f79048b.scope - libcontainer container d4a4b89b538e0c3475b0585b00e382aa86401ff1164c9b493b709ede7f79048b.
Mar 25 01:35:43.080075 kubelet[2252]: W0325 01:35:43.079987 2252 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused
Mar 25 01:35:43.080075 kubelet[2252]: E0325 01:35:43.080049 2252 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError"
Mar 25 01:35:43.113703 containerd[1501]: time="2025-03-25T01:35:43.113325665Z" level=info msg="StartContainer for \"d4a4b89b538e0c3475b0585b00e382aa86401ff1164c9b493b709ede7f79048b\" returns successfully"
Mar 25 01:35:43.113703 containerd[1501]: time="2025-03-25T01:35:43.113387310Z" level=info msg="StartContainer for \"9827154d95d213e8ee08c960c7f42bff68df551d19b5741a74ef72fda2ddb92f\" returns successfully"
Mar 25 01:35:43.114740 containerd[1501]: time="2025-03-25T01:35:43.114650139Z" level=info msg="StartContainer for \"f314a5dd6947821db9d7a325e3cffa2a7b3aedb7c16e96273c1f31067522d750\" returns successfully"
Mar 25 01:35:43.664178 kubelet[2252]: I0325 01:35:43.664141 2252 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 25 01:35:43.864604 kubelet[2252]: E0325 01:35:43.864556 2252 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 25 01:35:43.877898 kubelet[2252]: E0325 01:35:43.875304 2252 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 25 01:35:43.879187 kubelet[2252]: E0325 01:35:43.879163 2252 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 25 01:35:44.059798 kubelet[2252]: E0325 01:35:44.059684 2252 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 25 01:35:44.154520 kubelet[2252]: I0325 01:35:44.154486 2252 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Mar 25 01:35:44.154520 kubelet[2252]: E0325 01:35:44.154520 2252 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 25 01:35:44.157539 kubelet[2252]: E0325 01:35:44.157510 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:44.258509 kubelet[2252]: E0325 01:35:44.258456 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:44.359603 kubelet[2252]: E0325 01:35:44.359469 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:44.460044 kubelet[2252]: E0325 01:35:44.459990 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:44.560547 kubelet[2252]: E0325 01:35:44.560512 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:44.661119 kubelet[2252]: E0325 01:35:44.661026 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:44.761698 kubelet[2252]: E0325 01:35:44.761652 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:44.861934 kubelet[2252]: E0325 01:35:44.861900 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:44.878142 kubelet[2252]: E0325 01:35:44.878112 2252 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 25 01:35:44.878474 kubelet[2252]: E0325 01:35:44.878443 2252 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 25 01:35:44.878541 kubelet[2252]: E0325 01:35:44.878509 2252 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 25 01:35:44.962475 kubelet[2252]: E0325 01:35:44.962360 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:45.063162 kubelet[2252]: E0325 01:35:45.063125 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:45.164086 kubelet[2252]: E0325 01:35:45.164041 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:45.264651 kubelet[2252]: E0325 01:35:45.264526 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:45.364788 kubelet[2252]: E0325 01:35:45.364743 2252 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:45.435839 kubelet[2252]: I0325 01:35:45.435720 2252 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:35:45.444605 kubelet[2252]: I0325 01:35:45.444478 2252 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 25 01:35:45.447765 kubelet[2252]: I0325 01:35:45.447743 2252 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 25 01:35:45.824825 kubelet[2252]: I0325 01:35:45.824775 2252 apiserver.go:52] "Watching apiserver"
Mar 25 01:35:45.836025 kubelet[2252]: I0325 01:35:45.836000 2252 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 25 01:35:45.878715 kubelet[2252]: I0325 01:35:45.878690 2252 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 25 01:35:45.883521 kubelet[2252]: E0325 01:35:45.883495 2252 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 25 01:35:46.301352 systemd[1]: Reload requested from client PID 2523 ('systemctl') (unit session-7.scope)...
Mar 25 01:35:46.301368 systemd[1]: Reloading...
Mar 25 01:35:46.389931 zram_generator::config[2570]: No configuration found.
Mar 25 01:35:46.848238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:35:46.962328 systemd[1]: Reloading finished in 660 ms.
Mar 25 01:35:46.985990 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:35:47.006341 systemd[1]: kubelet.service: Deactivated successfully.
Mar 25 01:35:47.006632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:35:47.006679 systemd[1]: kubelet.service: Consumed 929ms CPU time, 125.3M memory peak.
Mar 25 01:35:47.008477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:35:47.193428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:35:47.202227 (kubelet)[2612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 25 01:35:47.242153 kubelet[2612]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:35:47.242153 kubelet[2612]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 25 01:35:47.242153 kubelet[2612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:35:47.242529 kubelet[2612]: I0325 01:35:47.242238 2612 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 01:35:47.248449 kubelet[2612]: I0325 01:35:47.248421 2612 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 25 01:35:47.248449 kubelet[2612]: I0325 01:35:47.248447 2612 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 01:35:47.248665 kubelet[2612]: I0325 01:35:47.248644 2612 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 25 01:35:47.249733 kubelet[2612]: I0325 01:35:47.249709 2612 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 25 01:35:47.251937 kubelet[2612]: I0325 01:35:47.251915 2612 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 01:35:47.256511 kubelet[2612]: I0325 01:35:47.256484 2612 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 25 01:35:47.261102 kubelet[2612]: I0325 01:35:47.261075 2612 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 01:35:47.261356 kubelet[2612]: I0325 01:35:47.261322 2612 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 01:35:47.261512 kubelet[2612]: I0325 01:35:47.261349 2612 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 25 01:35:47.261512 kubelet[2612]: I0325 01:35:47.261511 2612 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 01:35:47.261610 kubelet[2612]: I0325 01:35:47.261521 2612 container_manager_linux.go:304] "Creating device plugin manager"
Mar 25 01:35:47.261610 kubelet[2612]: I0325 01:35:47.261562 2612 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:35:47.261718 kubelet[2612]: I0325 01:35:47.261699 2612 kubelet.go:446] "Attempting to sync node with API server"
Mar 25 01:35:47.261718 kubelet[2612]: I0325 01:35:47.261715 2612 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 01:35:47.261759 kubelet[2612]: I0325 01:35:47.261735 2612 kubelet.go:352] "Adding apiserver pod source"
Mar 25 01:35:47.261759 kubelet[2612]: I0325 01:35:47.261746 2612 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 01:35:47.262416 kubelet[2612]: I0325 01:35:47.262394 2612 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 01:35:47.262811 kubelet[2612]: I0325 01:35:47.262787 2612 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 01:35:47.263248 kubelet[2612]: I0325 01:35:47.263230 2612 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 25 01:35:47.263310 kubelet[2612]: I0325 01:35:47.263264 2612 server.go:1287] "Started kubelet"
Mar 25 01:35:47.263867 kubelet[2612]: I0325 01:35:47.263373 2612 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 01:35:47.263867 kubelet[2612]: I0325 01:35:47.263566 2612 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 01:35:47.263867 kubelet[2612]: I0325 01:35:47.263764 2612 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 01:35:47.264479 kubelet[2612]: I0325 01:35:47.264452 2612 server.go:490] "Adding debug handlers to kubelet server"
Mar 25 01:35:47.265264 kubelet[2612]: I0325 01:35:47.265237 2612 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 01:35:47.266228 kubelet[2612]: I0325 01:35:47.266196 2612 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 25 01:35:47.266963 kubelet[2612]: E0325 01:35:47.266935 2612 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 25 01:35:47.269076 kubelet[2612]: I0325 01:35:47.269059 2612 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 25 01:35:47.269423 kubelet[2612]: I0325 01:35:47.269411 2612 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 25 01:35:47.269836 kubelet[2612]: I0325 01:35:47.269825 2612 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 01:35:47.274664 kubelet[2612]: I0325 01:35:47.274636 2612 factory.go:221] Registration of the systemd container factory successfully
Mar 25 01:35:47.276015 kubelet[2612]: I0325 01:35:47.275976 2612 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 01:35:47.294193 kubelet[2612]: E0325 01:35:47.294150 2612 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 01:35:47.298891 kubelet[2612]: I0325 01:35:47.294416 2612 factory.go:221] Registration of the containerd container factory successfully
Mar 25 01:35:47.303348 kubelet[2612]: I0325 01:35:47.303316 2612 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 01:35:47.308127 kubelet[2612]: I0325 01:35:47.308095 2612 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 01:35:47.308190 kubelet[2612]: I0325 01:35:47.308145 2612 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 25 01:35:47.308215 kubelet[2612]: I0325 01:35:47.308191 2612 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 25 01:35:47.308215 kubelet[2612]: I0325 01:35:47.308200 2612 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 25 01:35:47.308261 kubelet[2612]: E0325 01:35:47.308244 2612 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 01:35:47.316709 sudo[2643]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 25 01:35:47.317071 sudo[2643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 25 01:35:47.334740 kubelet[2612]: I0325 01:35:47.334715 2612 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 25 01:35:47.334740 kubelet[2612]: I0325 01:35:47.334731 2612 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 25 01:35:47.334816 kubelet[2612]: I0325 01:35:47.334747 2612 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:35:47.334926 kubelet[2612]: I0325 01:35:47.334905 2612 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 25 01:35:47.334962 kubelet[2612]: I0325 01:35:47.334919 2612 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 25 01:35:47.334962 kubelet[2612]: I0325 01:35:47.334938 2612 policy_none.go:49] "None policy: Start"
Mar 25 01:35:47.334962 kubelet[2612]: I0325 01:35:47.334946 2612 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 25 01:35:47.334962 kubelet[2612]: I0325 01:35:47.334958 2612 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 01:35:47.335049 kubelet[2612]: I0325 01:35:47.335044 2612 state_mem.go:75] "Updated machine memory state"
Mar 25 01:35:47.341640 kubelet[2612]: I0325 01:35:47.341609 2612 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 25 01:35:47.341973 kubelet[2612]: I0325 01:35:47.341774 2612 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 25 01:35:47.341973 kubelet[2612]: I0325 01:35:47.341789 2612 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 25 01:35:47.342046 kubelet[2612]: I0325 01:35:47.342011 2612 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 25 01:35:47.343954 kubelet[2612]: E0325 01:35:47.342938 2612 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 25 01:35:47.409592 kubelet[2612]: I0325 01:35:47.409316 2612 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 25 01:35:47.409592 kubelet[2612]: I0325 01:35:47.409403 2612 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 25 01:35:47.409592 kubelet[2612]: I0325 01:35:47.409466 2612 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:35:47.415356 kubelet[2612]: E0325 01:35:47.415325 2612 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 25 01:35:47.415912 kubelet[2612]: E0325 01:35:47.415836 2612 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 25 01:35:47.416001 kubelet[2612]: E0325 01:35:47.415978 2612 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:35:47.443306 kubelet[2612]: I0325 01:35:47.443281 2612 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 25 01:35:47.449224 kubelet[2612]: I0325 01:35:47.449127 2612 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Mar 25 01:35:47.449224 kubelet[2612]: I0325 01:35:47.449195 2612 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Mar 25 01:35:47.471719 kubelet[2612]: I0325 01:35:47.471689 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:35:47.471719 kubelet[2612]: I0325 01:35:47.471718 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:35:47.472009 kubelet[2612]: I0325 01:35:47.471738 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8136c544e6c0272c43808d95c920f300-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8136c544e6c0272c43808d95c920f300\") " pod="kube-system/kube-apiserver-localhost"
Mar 25 01:35:47.472009 kubelet[2612]: I0325 01:35:47.471756 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:35:47.472009 kubelet[2612]: I0325 01:35:47.471773 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:35:47.472009 kubelet[2612]: I0325 01:35:47.471792 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 25 01:35:47.472009 kubelet[2612]: I0325 01:35:47.471807 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost"
Mar 25 01:35:47.472122 kubelet[2612]: I0325 01:35:47.471826 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8136c544e6c0272c43808d95c920f300-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8136c544e6c0272c43808d95c920f300\") " pod="kube-system/kube-apiserver-localhost"
Mar 25 01:35:47.472122 kubelet[2612]: I0325 01:35:47.471859 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8136c544e6c0272c43808d95c920f300-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8136c544e6c0272c43808d95c920f300\") " pod="kube-system/kube-apiserver-localhost"
Mar 25 01:35:47.772408 sudo[2643]: pam_unix(sudo:session): session closed for user root
Mar 25 01:35:48.263067 kubelet[2612]: I0325 01:35:48.262960 2612 apiserver.go:52] "Watching apiserver"
Mar 25 01:35:48.269802 kubelet[2612]: I0325 01:35:48.269776 2612 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 25 01:35:48.321492 kubelet[2612]: I0325 01:35:48.321465 2612 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 25 01:35:48.378629 kubelet[2612]: E0325 01:35:48.378590 2612 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 25 01:35:48.398594 kubelet[2612]: I0325 01:35:48.398506 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.398464567 podStartE2EDuration="3.398464567s" podCreationTimestamp="2025-03-25 01:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:35:48.398149606 +0000 UTC m=+1.191901361" watchObservedRunningTime="2025-03-25 01:35:48.398464567 +0000 UTC m=+1.192216322"
Mar 25 01:35:48.403428 kubelet[2612]: I0325 01:35:48.403390 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.4033820390000002 podStartE2EDuration="3.403382039s" podCreationTimestamp="2025-03-25 01:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:35:48.403341503 +0000 UTC m=+1.197093258" watchObservedRunningTime="2025-03-25 01:35:48.403382039 +0000 UTC m=+1.197133794"
Mar 25 01:35:48.409064 kubelet[2612]: I0325 01:35:48.409014 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.408998462 podStartE2EDuration="3.408998462s" podCreationTimestamp="2025-03-25 01:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:35:48.408813344 +0000 UTC m=+1.202565100" watchObservedRunningTime="2025-03-25 01:35:48.408998462 +0000 UTC m=+1.202750217"
Mar 25 01:35:49.092544 sudo[1703]: pam_unix(sudo:session): session closed for user root
Mar 25 01:35:49.094175 sshd[1702]: Connection closed by 10.0.0.1 port 42964
Mar 25 01:35:49.094516 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:49.098517 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:42964.service: Deactivated successfully.
Mar 25 01:35:49.100894 systemd[1]: session-7.scope: Deactivated successfully.
Mar 25 01:35:49.101118 systemd[1]: session-7.scope: Consumed 3.948s CPU time, 258.8M memory peak.
Mar 25 01:35:49.102323 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit.
Mar 25 01:35:49.103100 systemd-logind[1486]: Removed session 7.
Mar 25 01:35:52.547561 kubelet[2612]: I0325 01:35:52.547523 2612 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 25 01:35:52.548098 kubelet[2612]: I0325 01:35:52.547941 2612 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 25 01:35:52.548141 containerd[1501]: time="2025-03-25T01:35:52.547778451Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 25 01:35:53.492135 systemd[1]: Created slice kubepods-besteffort-pod19a8e3ea_16d0_414e_b183_a9915cba4ac4.slice - libcontainer container kubepods-besteffort-pod19a8e3ea_16d0_414e_b183_a9915cba4ac4.slice.
Mar 25 01:35:53.506791 systemd[1]: Created slice kubepods-burstable-pod5ff86e4e_2980_4c86_b328_6fdc209e86c4.slice - libcontainer container kubepods-burstable-pod5ff86e4e_2980_4c86_b328_6fdc209e86c4.slice.
Mar 25 01:35:53.516890 kubelet[2612]: I0325 01:35:53.515237 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7dm\" (UniqueName: \"kubernetes.io/projected/19a8e3ea-16d0-414e-b183-a9915cba4ac4-kube-api-access-rl7dm\") pod \"kube-proxy-9hgtz\" (UID: \"19a8e3ea-16d0-414e-b183-a9915cba4ac4\") " pod="kube-system/kube-proxy-9hgtz"
Mar 25 01:35:53.516890 kubelet[2612]: I0325 01:35:53.515267 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ff86e4e-2980-4c86-b328-6fdc209e86c4-hubble-tls\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.516890 kubelet[2612]: I0325 01:35:53.515282 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19a8e3ea-16d0-414e-b183-a9915cba4ac4-xtables-lock\") pod \"kube-proxy-9hgtz\" (UID: \"19a8e3ea-16d0-414e-b183-a9915cba4ac4\") " pod="kube-system/kube-proxy-9hgtz"
Mar 25 01:35:53.516890 kubelet[2612]: I0325 01:35:53.515295 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-lib-modules\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.516890 kubelet[2612]: I0325 01:35:53.515308 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-run\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.516890 kubelet[2612]: I0325 01:35:53.515321 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19a8e3ea-16d0-414e-b183-a9915cba4ac4-lib-modules\") pod \"kube-proxy-9hgtz\" (UID: \"19a8e3ea-16d0-414e-b183-a9915cba4ac4\") " pod="kube-system/kube-proxy-9hgtz"
Mar 25 01:35:53.517117 kubelet[2612]: I0325 01:35:53.515334 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ff86e4e-2980-4c86-b328-6fdc209e86c4-clustermesh-secrets\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517117 kubelet[2612]: I0325 01:35:53.515347 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-host-proc-sys-net\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517117 kubelet[2612]: I0325 01:35:53.515360 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-host-proc-sys-kernel\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517117 kubelet[2612]: I0325 01:35:53.515374 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxt44\" (UniqueName: \"kubernetes.io/projected/5ff86e4e-2980-4c86-b328-6fdc209e86c4-kube-api-access-sxt44\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517117 kubelet[2612]: I0325 01:35:53.515388 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/19a8e3ea-16d0-414e-b183-a9915cba4ac4-kube-proxy\") pod \"kube-proxy-9hgtz\" (UID: \"19a8e3ea-16d0-414e-b183-a9915cba4ac4\") " pod="kube-system/kube-proxy-9hgtz"
Mar 25 01:35:53.517233 kubelet[2612]: I0325 01:35:53.515401 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-etc-cni-netd\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517233 kubelet[2612]: I0325 01:35:53.515415 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-bpf-maps\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517233 kubelet[2612]: I0325 01:35:53.515437 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-xtables-lock\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517233 kubelet[2612]: I0325 01:35:53.515455 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-hostproc\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517233 kubelet[2612]: I0325 01:35:53.515468 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-config-path\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517233 kubelet[2612]: I0325 01:35:53.515487 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-cgroup\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.517367 kubelet[2612]: I0325 01:35:53.515501 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cni-path\") pod \"cilium-2rbnw\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") " pod="kube-system/cilium-2rbnw"
Mar 25 01:35:53.646319 systemd[1]: Created slice kubepods-besteffort-poddd04a783_1f09_497a_8a63_eba9a7e3f810.slice - libcontainer container kubepods-besteffort-poddd04a783_1f09_497a_8a63_eba9a7e3f810.slice.
Mar 25 01:35:53.716760 kubelet[2612]: I0325 01:35:53.716720 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd04a783-1f09-497a-8a63-eba9a7e3f810-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-924lk\" (UID: \"dd04a783-1f09-497a-8a63-eba9a7e3f810\") " pod="kube-system/cilium-operator-6c4d7847fc-924lk"
Mar 25 01:35:53.716760 kubelet[2612]: I0325 01:35:53.716761 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7fjp\" (UniqueName: \"kubernetes.io/projected/dd04a783-1f09-497a-8a63-eba9a7e3f810-kube-api-access-m7fjp\") pod \"cilium-operator-6c4d7847fc-924lk\" (UID: \"dd04a783-1f09-497a-8a63-eba9a7e3f810\") " pod="kube-system/cilium-operator-6c4d7847fc-924lk"
Mar 25 01:35:53.800988 containerd[1501]: time="2025-03-25T01:35:53.800905151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9hgtz,Uid:19a8e3ea-16d0-414e-b183-a9915cba4ac4,Namespace:kube-system,Attempt:0,}"
Mar 25 01:35:53.815793 containerd[1501]: time="2025-03-25T01:35:53.815758417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2rbnw,Uid:5ff86e4e-2980-4c86-b328-6fdc209e86c4,Namespace:kube-system,Attempt:0,}"
Mar 25 01:35:53.823822 containerd[1501]: time="2025-03-25T01:35:53.823650196Z" level=info msg="connecting to shim 2b1f521d2e71ce05d50236657b9f1419be658ee6d589c5d1db1503c3af80f4d4" address="unix:///run/containerd/s/f7b2fb1eb3c0b4bf401b9906c09a9fd25b6eb149328b1fc589f10fb021764cb4" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:35:53.839826 containerd[1501]: time="2025-03-25T01:35:53.839783803Z" level=info msg="connecting to shim 0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e" address="unix:///run/containerd/s/329eda23e22348e4f4d102c7445b4d7f81dfcdc628bdf87d4625de2279b557f5" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:35:53.846028 systemd[1]: Started cri-containerd-2b1f521d2e71ce05d50236657b9f1419be658ee6d589c5d1db1503c3af80f4d4.scope - libcontainer container 2b1f521d2e71ce05d50236657b9f1419be658ee6d589c5d1db1503c3af80f4d4.
Mar 25 01:35:53.867017 systemd[1]: Started cri-containerd-0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e.scope - libcontainer container 0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e.
Mar 25 01:35:53.876547 containerd[1501]: time="2025-03-25T01:35:53.876495390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9hgtz,Uid:19a8e3ea-16d0-414e-b183-a9915cba4ac4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b1f521d2e71ce05d50236657b9f1419be658ee6d589c5d1db1503c3af80f4d4\""
Mar 25 01:35:53.879748 containerd[1501]: time="2025-03-25T01:35:53.879710189Z" level=info msg="CreateContainer within sandbox \"2b1f521d2e71ce05d50236657b9f1419be658ee6d589c5d1db1503c3af80f4d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 25 01:35:53.897374 containerd[1501]: time="2025-03-25T01:35:53.897293965Z" level=info msg="Container 4eb195981d3e23b158103d1a9f874454eddbcbc7c535e8f82c12eb032777c62b: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:35:53.897824 containerd[1501]: time="2025-03-25T01:35:53.897725204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2rbnw,Uid:5ff86e4e-2980-4c86-b328-6fdc209e86c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\""
Mar 25 01:35:53.899582 containerd[1501]: time="2025-03-25T01:35:53.899558982Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 25 01:35:53.907624 containerd[1501]: time="2025-03-25T01:35:53.907596646Z" level=info msg="CreateContainer within sandbox \"2b1f521d2e71ce05d50236657b9f1419be658ee6d589c5d1db1503c3af80f4d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4eb195981d3e23b158103d1a9f874454eddbcbc7c535e8f82c12eb032777c62b\""
Mar 25 01:35:53.908067 containerd[1501]: time="2025-03-25T01:35:53.907998910Z" level=info msg="StartContainer for \"4eb195981d3e23b158103d1a9f874454eddbcbc7c535e8f82c12eb032777c62b\""
Mar 25 01:35:53.909469 containerd[1501]: time="2025-03-25T01:35:53.909441045Z" level=info msg="connecting to shim 4eb195981d3e23b158103d1a9f874454eddbcbc7c535e8f82c12eb032777c62b" address="unix:///run/containerd/s/f7b2fb1eb3c0b4bf401b9906c09a9fd25b6eb149328b1fc589f10fb021764cb4" protocol=ttrpc version=3
Mar 25 01:35:53.935998 systemd[1]: Started cri-containerd-4eb195981d3e23b158103d1a9f874454eddbcbc7c535e8f82c12eb032777c62b.scope - libcontainer container 4eb195981d3e23b158103d1a9f874454eddbcbc7c535e8f82c12eb032777c62b.
Mar 25 01:35:53.951251 containerd[1501]: time="2025-03-25T01:35:53.951220707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-924lk,Uid:dd04a783-1f09-497a-8a63-eba9a7e3f810,Namespace:kube-system,Attempt:0,}"
Mar 25 01:35:53.977924 containerd[1501]: time="2025-03-25T01:35:53.977051078Z" level=info msg="StartContainer for \"4eb195981d3e23b158103d1a9f874454eddbcbc7c535e8f82c12eb032777c62b\" returns successfully"
Mar 25 01:35:53.989215 containerd[1501]: time="2025-03-25T01:35:53.989088392Z" level=info msg="connecting to shim c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b" address="unix:///run/containerd/s/12d840acda484160968f27ffc619ca704476ac1c8cccf94a1bfac35e87e3a9c4" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:35:54.032033 systemd[1]: Started cri-containerd-c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b.scope - libcontainer container c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b.
Mar 25 01:35:54.074593 containerd[1501]: time="2025-03-25T01:35:54.074123176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-924lk,Uid:dd04a783-1f09-497a-8a63-eba9a7e3f810,Namespace:kube-system,Attempt:0,} returns sandbox id \"c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b\""
Mar 25 01:35:54.348418 kubelet[2612]: I0325 01:35:54.348289 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9hgtz" podStartSLOduration=1.348275367 podStartE2EDuration="1.348275367s" podCreationTimestamp="2025-03-25 01:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:35:54.348190393 +0000 UTC m=+7.141942169" watchObservedRunningTime="2025-03-25 01:35:54.348275367 +0000 UTC m=+7.142027122"
Mar 25 01:36:06.497617 update_engine[1488]: I20250325 01:36:06.497532 1488 update_attempter.cc:509] Updating boot flags...
Mar 25 01:36:06.526005 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2994)
Mar 25 01:36:06.571903 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2997)
Mar 25 01:36:06.599968 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2997)
Mar 25 01:36:09.207963 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:60250.service - OpenSSH per-connection server daemon (10.0.0.1:60250).
Mar 25 01:36:09.257128 sshd[3004]: Accepted publickey for core from 10.0.0.1 port 60250 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0
Mar 25 01:36:09.258651 sshd-session[3004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:36:09.262770 systemd-logind[1486]: New session 8 of user core.
Mar 25 01:36:09.274006 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 25 01:36:09.397067 sshd[3006]: Connection closed by 10.0.0.1 port 60250
Mar 25 01:36:09.397336 sshd-session[3004]: pam_unix(sshd:session): session closed for user core
Mar 25 01:36:09.401014 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:60250.service: Deactivated successfully.
Mar 25 01:36:09.403203 systemd[1]: session-8.scope: Deactivated successfully.
Mar 25 01:36:09.403881 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit.
Mar 25 01:36:09.404703 systemd-logind[1486]: Removed session 8.
Mar 25 01:36:14.410422 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:60264.service - OpenSSH per-connection server daemon (10.0.0.1:60264).
Mar 25 01:36:14.452718 sshd[3020]: Accepted publickey for core from 10.0.0.1 port 60264 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0
Mar 25 01:36:14.454329 sshd-session[3020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:36:14.458658 systemd-logind[1486]: New session 9 of user core.
Mar 25 01:36:14.466016 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 25 01:36:14.579355 sshd[3022]: Connection closed by 10.0.0.1 port 60264
Mar 25 01:36:14.579754 sshd-session[3020]: pam_unix(sshd:session): session closed for user core
Mar 25 01:36:14.583754 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:60264.service: Deactivated successfully.
Mar 25 01:36:14.586248 systemd[1]: session-9.scope: Deactivated successfully.
Mar 25 01:36:14.587027 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit.
Mar 25 01:36:14.588066 systemd-logind[1486]: Removed session 9.
Mar 25 01:36:16.977770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040797463.mount: Deactivated successfully.
Mar 25 01:36:18.871732 containerd[1501]: time="2025-03-25T01:36:18.871674383Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:36:18.872336 containerd[1501]: time="2025-03-25T01:36:18.872296245Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 25 01:36:18.873347 containerd[1501]: time="2025-03-25T01:36:18.873316980Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:36:18.874652 containerd[1501]: time="2025-03-25T01:36:18.874615777Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 24.975020396s"
Mar 25 01:36:18.874691 containerd[1501]: time="2025-03-25T01:36:18.874657436Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 25 01:36:18.875555 containerd[1501]: time="2025-03-25T01:36:18.875346305Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 25 01:36:18.876415 containerd[1501]: time="2025-03-25T01:36:18.876384542Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 25 01:36:18.884784 containerd[1501]: time="2025-03-25T01:36:18.884743640Z" level=info msg="Container 27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:36:18.890177 containerd[1501]: time="2025-03-25T01:36:18.890144752Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\""
Mar 25 01:36:18.890577 containerd[1501]: time="2025-03-25T01:36:18.890544606Z" level=info msg="StartContainer for \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\""
Mar 25 01:36:18.891429 containerd[1501]: time="2025-03-25T01:36:18.891391342Z" level=info msg="connecting to shim 27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a" address="unix:///run/containerd/s/329eda23e22348e4f4d102c7445b4d7f81dfcdc628bdf87d4625de2279b557f5" protocol=ttrpc version=3
Mar 25 01:36:18.916032 systemd[1]: Started cri-containerd-27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a.scope - libcontainer container 27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a.
Mar 25 01:36:18.945154 containerd[1501]: time="2025-03-25T01:36:18.945117995Z" level=info msg="StartContainer for \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\" returns successfully"
Mar 25 01:36:18.955444 systemd[1]: cri-containerd-27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a.scope: Deactivated successfully.
Mar 25 01:36:18.957069 containerd[1501]: time="2025-03-25T01:36:18.957033648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\" id:\"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\" pid:3075 exited_at:{seconds:1742866578 nanos:956567278}"
Mar 25 01:36:18.957154 containerd[1501]: time="2025-03-25T01:36:18.957083491Z" level=info msg="received exit event container_id:\"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\" id:\"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\" pid:3075 exited_at:{seconds:1742866578 nanos:956567278}"
Mar 25 01:36:18.976331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a-rootfs.mount: Deactivated successfully.
Mar 25 01:36:19.371334 containerd[1501]: time="2025-03-25T01:36:19.371295765Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 25 01:36:19.381481 containerd[1501]: time="2025-03-25T01:36:19.381438489Z" level=info msg="Container 5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:36:19.387606 containerd[1501]: time="2025-03-25T01:36:19.387566358Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\""
Mar 25 01:36:19.388036 containerd[1501]: time="2025-03-25T01:36:19.388002890Z" level=info msg="StartContainer for \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\""
Mar 25 01:36:19.388813 containerd[1501]: time="2025-03-25T01:36:19.388785385Z" level=info msg="connecting to shim 5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4" address="unix:///run/containerd/s/329eda23e22348e4f4d102c7445b4d7f81dfcdc628bdf87d4625de2279b557f5" protocol=ttrpc version=3
Mar 25 01:36:19.411992 systemd[1]: Started cri-containerd-5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4.scope - libcontainer container 5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4.
Mar 25 01:36:19.437787 containerd[1501]: time="2025-03-25T01:36:19.437748828Z" level=info msg="StartContainer for \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\" returns successfully"
Mar 25 01:36:19.449918 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:36:19.450165 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:36:19.450520 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:36:19.451935 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:36:19.452850 systemd[1]: cri-containerd-5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4.scope: Deactivated successfully.
Mar 25 01:36:19.453562 containerd[1501]: time="2025-03-25T01:36:19.453260560Z" level=info msg="received exit event container_id:\"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\" id:\"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\" pid:3120 exited_at:{seconds:1742866579 nanos:452969011}"
Mar 25 01:36:19.453562 containerd[1501]: time="2025-03-25T01:36:19.453276701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\" id:\"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\" pid:3120 exited_at:{seconds:1742866579 nanos:452969011}"
Mar 25 01:36:19.477395 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:36:19.591731 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:36540.service - OpenSSH per-connection server daemon (10.0.0.1:36540).
Mar 25 01:36:19.640867 sshd[3157]: Accepted publickey for core from 10.0.0.1 port 36540 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0
Mar 25 01:36:19.642240 sshd-session[3157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:36:19.646177 systemd-logind[1486]: New session 10 of user core.
Mar 25 01:36:19.656018 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 25 01:36:19.764341 sshd[3159]: Connection closed by 10.0.0.1 port 36540
Mar 25 01:36:19.764646 sshd-session[3157]: pam_unix(sshd:session): session closed for user core
Mar 25 01:36:19.768753 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:36540.service: Deactivated successfully.
Mar 25 01:36:19.770866 systemd[1]: session-10.scope: Deactivated successfully.
Mar 25 01:36:19.771564 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit.
Mar 25 01:36:19.772351 systemd-logind[1486]: Removed session 10.
Mar 25 01:36:20.375078 containerd[1501]: time="2025-03-25T01:36:20.375015583Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 25 01:36:20.553740 containerd[1501]: time="2025-03-25T01:36:20.553701560Z" level=info msg="Container 01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:36:20.557820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971909081.mount: Deactivated successfully.
Mar 25 01:36:20.567830 containerd[1501]: time="2025-03-25T01:36:20.567784812Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\""
Mar 25 01:36:20.568274 containerd[1501]: time="2025-03-25T01:36:20.568230572Z" level=info msg="StartContainer for \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\""
Mar 25 01:36:20.569508 containerd[1501]: time="2025-03-25T01:36:20.569483753Z" level=info msg="connecting to shim 01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55" address="unix:///run/containerd/s/329eda23e22348e4f4d102c7445b4d7f81dfcdc628bdf87d4625de2279b557f5" protocol=ttrpc version=3
Mar 25 01:36:20.589054 systemd[1]: Started cri-containerd-01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55.scope - libcontainer container 01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55.
Mar 25 01:36:20.627459 systemd[1]: cri-containerd-01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55.scope: Deactivated successfully.
Mar 25 01:36:20.628547 containerd[1501]: time="2025-03-25T01:36:20.628506278Z" level=info msg="TaskExit event in podsandbox handler container_id:\"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\" id:\"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\" pid:3184 exited_at:{seconds:1742866580 nanos:628216924}"
Mar 25 01:36:20.629643 containerd[1501]: time="2025-03-25T01:36:20.629598527Z" level=info msg="received exit event container_id:\"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\" id:\"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\" pid:3184 exited_at:{seconds:1742866580 nanos:628216924}"
Mar 25 01:36:20.638689 containerd[1501]: time="2025-03-25T01:36:20.638637606Z" level=info msg="StartContainer for \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\" returns successfully"
Mar 25 01:36:20.650827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55-rootfs.mount: Deactivated successfully.
Mar 25 01:36:21.378584 containerd[1501]: time="2025-03-25T01:36:21.378548855Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 25 01:36:21.403142 containerd[1501]: time="2025-03-25T01:36:21.403096899Z" level=info msg="Container b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:36:21.420231 containerd[1501]: time="2025-03-25T01:36:21.420178631Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\""
Mar 25 01:36:21.421640 containerd[1501]: time="2025-03-25T01:36:21.420695324Z" level=info msg="StartContainer for \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\""
Mar 25 01:36:21.421640 containerd[1501]: time="2025-03-25T01:36:21.421539474Z" level=info msg="connecting to shim b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe" address="unix:///run/containerd/s/329eda23e22348e4f4d102c7445b4d7f81dfcdc628bdf87d4625de2279b557f5" protocol=ttrpc version=3
Mar 25 01:36:21.442018 systemd[1]: Started cri-containerd-b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe.scope - libcontainer container b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe.
Mar 25 01:36:21.466299 systemd[1]: cri-containerd-b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe.scope: Deactivated successfully.
Mar 25 01:36:21.466844 containerd[1501]: time="2025-03-25T01:36:21.466805219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\" id:\"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\" pid:3223 exited_at:{seconds:1742866581 nanos:466474175}"
Mar 25 01:36:21.468050 containerd[1501]: time="2025-03-25T01:36:21.468025818Z" level=info msg="received exit event container_id:\"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\" id:\"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\" pid:3223 exited_at:{seconds:1742866581 nanos:466474175}"
Mar 25 01:36:21.475005 containerd[1501]: time="2025-03-25T01:36:21.474970468Z" level=info msg="StartContainer for \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\" returns successfully"
Mar 25 01:36:21.486688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe-rootfs.mount: Deactivated successfully.
Mar 25 01:36:22.382857 containerd[1501]: time="2025-03-25T01:36:22.382817124Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 25 01:36:22.395715 containerd[1501]: time="2025-03-25T01:36:22.395649275Z" level=info msg="Container fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:36:22.398628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470085700.mount: Deactivated successfully.
Mar 25 01:36:22.403989 containerd[1501]: time="2025-03-25T01:36:22.403950185Z" level=info msg="CreateContainer within sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\""
Mar 25 01:36:22.404363 containerd[1501]: time="2025-03-25T01:36:22.404339368Z" level=info msg="StartContainer for \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\""
Mar 25 01:36:22.405237 containerd[1501]: time="2025-03-25T01:36:22.405217852Z" level=info msg="connecting to shim fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7" address="unix:///run/containerd/s/329eda23e22348e4f4d102c7445b4d7f81dfcdc628bdf87d4625de2279b557f5" protocol=ttrpc version=3
Mar 25 01:36:22.428133 systemd[1]: Started cri-containerd-fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7.scope - libcontainer container fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7.
Mar 25 01:36:22.460690 containerd[1501]: time="2025-03-25T01:36:22.460645407Z" level=info msg="StartContainer for \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" returns successfully"
Mar 25 01:36:22.533307 containerd[1501]: time="2025-03-25T01:36:22.533244616Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" id:\"2d454ff1eb5805f7ea1fef6593a5af397ab6d57c45cf229c6516bb33bb82b0c6\" pid:3297 exited_at:{seconds:1742866582 nanos:532801792}"
Mar 25 01:36:22.554552 kubelet[2612]: I0325 01:36:22.554502 2612 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 25 01:36:22.610559 systemd[1]: Created slice kubepods-burstable-podc3b483c3_1013_4ef8_bad5_72f905f90892.slice - libcontainer container kubepods-burstable-podc3b483c3_1013_4ef8_bad5_72f905f90892.slice.
Mar 25 01:36:22.614564 kubelet[2612]: I0325 01:36:22.614535 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/304e5e9a-ca27-46cb-abc9-ae95a887915b-config-volume\") pod \"coredns-668d6bf9bc-fdln5\" (UID: \"304e5e9a-ca27-46cb-abc9-ae95a887915b\") " pod="kube-system/coredns-668d6bf9bc-fdln5"
Mar 25 01:36:22.614631 kubelet[2612]: I0325 01:36:22.614575 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3b483c3-1013-4ef8-bad5-72f905f90892-config-volume\") pod \"coredns-668d6bf9bc-pd92h\" (UID: \"c3b483c3-1013-4ef8-bad5-72f905f90892\") " pod="kube-system/coredns-668d6bf9bc-pd92h"
Mar 25 01:36:22.614631 kubelet[2612]: I0325 01:36:22.614603 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4grm8\" (UniqueName: \"kubernetes.io/projected/304e5e9a-ca27-46cb-abc9-ae95a887915b-kube-api-access-4grm8\") pod \"coredns-668d6bf9bc-fdln5\" (UID: \"304e5e9a-ca27-46cb-abc9-ae95a887915b\") " pod="kube-system/coredns-668d6bf9bc-fdln5"
Mar 25 01:36:22.614631 kubelet[2612]: I0325 01:36:22.614627 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmcz\" (UniqueName: \"kubernetes.io/projected/c3b483c3-1013-4ef8-bad5-72f905f90892-kube-api-access-dlmcz\") pod \"coredns-668d6bf9bc-pd92h\" (UID: \"c3b483c3-1013-4ef8-bad5-72f905f90892\") " pod="kube-system/coredns-668d6bf9bc-pd92h"
Mar 25 01:36:22.615444 systemd[1]: Created slice kubepods-burstable-pod304e5e9a_ca27_46cb_abc9_ae95a887915b.slice - libcontainer container kubepods-burstable-pod304e5e9a_ca27_46cb_abc9_ae95a887915b.slice.
Mar 25 01:36:22.914218 containerd[1501]: time="2025-03-25T01:36:22.914174295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pd92h,Uid:c3b483c3-1013-4ef8-bad5-72f905f90892,Namespace:kube-system,Attempt:0,}" Mar 25 01:36:22.918980 containerd[1501]: time="2025-03-25T01:36:22.918921453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fdln5,Uid:304e5e9a-ca27-46cb-abc9-ae95a887915b,Namespace:kube-system,Attempt:0,}" Mar 25 01:36:24.402566 containerd[1501]: time="2025-03-25T01:36:24.402513334Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:36:24.403360 containerd[1501]: time="2025-03-25T01:36:24.403317257Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 25 01:36:24.404463 containerd[1501]: time="2025-03-25T01:36:24.404417157Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:36:24.405426 containerd[1501]: time="2025-03-25T01:36:24.405386130Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.530011492s" Mar 25 01:36:24.405463 containerd[1501]: time="2025-03-25T01:36:24.405426186Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" 
Mar 25 01:36:24.407270 containerd[1501]: time="2025-03-25T01:36:24.407235471Z" level=info msg="CreateContainer within sandbox \"c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 25 01:36:24.415037 containerd[1501]: time="2025-03-25T01:36:24.415003681Z" level=info msg="Container 7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:36:24.422008 containerd[1501]: time="2025-03-25T01:36:24.421961154Z" level=info msg="CreateContainer within sandbox \"c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\"" Mar 25 01:36:24.422466 containerd[1501]: time="2025-03-25T01:36:24.422440878Z" level=info msg="StartContainer for \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\"" Mar 25 01:36:24.423582 containerd[1501]: time="2025-03-25T01:36:24.423548612Z" level=info msg="connecting to shim 7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f" address="unix:///run/containerd/s/12d840acda484160968f27ffc619ca704476ac1c8cccf94a1bfac35e87e3a9c4" protocol=ttrpc version=3 Mar 25 01:36:24.450041 systemd[1]: Started cri-containerd-7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f.scope - libcontainer container 7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f. Mar 25 01:36:24.479620 containerd[1501]: time="2025-03-25T01:36:24.479580004Z" level=info msg="StartContainer for \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" returns successfully" Mar 25 01:36:24.776286 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:36552.service - OpenSSH per-connection server daemon (10.0.0.1:36552). 
Mar 25 01:36:24.825926 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 36552 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:24.827628 sshd-session[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:24.843782 systemd-logind[1486]: New session 11 of user core. Mar 25 01:36:24.848112 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 25 01:36:24.979804 sshd[3440]: Connection closed by 10.0.0.1 port 36552 Mar 25 01:36:24.981508 sshd-session[3438]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:24.986108 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Mar 25 01:36:24.987039 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:36552.service: Deactivated successfully. Mar 25 01:36:24.990483 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 01:36:24.991830 systemd-logind[1486]: Removed session 11. Mar 25 01:36:25.399864 kubelet[2612]: I0325 01:36:25.399798 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2rbnw" podStartSLOduration=7.42362729 podStartE2EDuration="32.399766s" podCreationTimestamp="2025-03-25 01:35:53 +0000 UTC" firstStartedPulling="2025-03-25 01:35:53.899121312 +0000 UTC m=+6.692873067" lastFinishedPulling="2025-03-25 01:36:18.875260022 +0000 UTC m=+31.669011777" observedRunningTime="2025-03-25 01:36:23.441669877 +0000 UTC m=+36.235421632" watchObservedRunningTime="2025-03-25 01:36:25.399766 +0000 UTC m=+38.193517755" Mar 25 01:36:25.400408 kubelet[2612]: I0325 01:36:25.400054 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-924lk" podStartSLOduration=2.069775403 podStartE2EDuration="32.400047028s" podCreationTimestamp="2025-03-25 01:35:53 +0000 UTC" firstStartedPulling="2025-03-25 01:35:54.075732989 +0000 UTC m=+6.869484744" lastFinishedPulling="2025-03-25 01:36:24.406004614 +0000 UTC m=+37.199756369" observedRunningTime="2025-03-25 01:36:25.399978719 +0000 UTC m=+38.193730475" watchObservedRunningTime="2025-03-25 01:36:25.400047028 +0000 UTC m=+38.193798793" 
Mar 25 01:36:28.429195 systemd-networkd[1425]: cilium_host: Link UP Mar 25 01:36:28.429366 systemd-networkd[1425]: cilium_net: Link UP Mar 25 01:36:28.429553 systemd-networkd[1425]: cilium_net: Gained carrier Mar 25 01:36:28.429729 systemd-networkd[1425]: cilium_host: Gained carrier Mar 25 01:36:28.531866 systemd-networkd[1425]: cilium_vxlan: Link UP Mar 25 01:36:28.531915 systemd-networkd[1425]: cilium_vxlan: Gained carrier Mar 25 01:36:28.733902 kernel: NET: Registered PF_ALG protocol family Mar 25 01:36:28.874019 systemd-networkd[1425]: cilium_host: Gained IPv6LL Mar 25 01:36:29.217050 systemd-networkd[1425]: cilium_net: Gained IPv6LL Mar 25 01:36:29.376611 systemd-networkd[1425]: lxc_health: Link UP Mar 25 01:36:29.387101 systemd-networkd[1425]: lxc_health: Gained carrier Mar 25 01:36:29.921000 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL Mar 25 01:36:29.950898 kernel: eth0: renamed from tmp3d5b3 Mar 25 01:36:29.958128 systemd-networkd[1425]: lxc49d73e4b307e: Link UP Mar 25 01:36:29.959323 systemd-networkd[1425]: lxc49d73e4b307e: Gained carrier Mar 25 01:36:29.977921 kernel: eth0: renamed from tmp35a40 Mar 25 01:36:29.986555 systemd-networkd[1425]: lxc7afc127c8adc: Link UP Mar 25 01:36:29.987379 systemd-networkd[1425]: lxc7afc127c8adc: Gained carrier Mar 25 01:36:29.993738 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:41350.service - OpenSSH per-connection server daemon (10.0.0.1:41350). Mar 25 01:36:30.047503 sshd[3823]: Accepted publickey for core from 10.0.0.1 port 41350 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:30.049059 sshd-session[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:30.056090 systemd-logind[1486]: New session 12 of user core. 
Mar 25 01:36:30.062012 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 01:36:30.187978 sshd[3828]: Connection closed by 10.0.0.1 port 41350 Mar 25 01:36:30.188546 sshd-session[3823]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:30.199863 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:41350.service: Deactivated successfully. Mar 25 01:36:30.202435 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:36:30.203864 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Mar 25 01:36:30.205252 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:41360.service - OpenSSH per-connection server daemon (10.0.0.1:41360). Mar 25 01:36:30.206142 systemd-logind[1486]: Removed session 12. Mar 25 01:36:30.252287 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 41360 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:30.253905 sshd-session[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:30.258658 systemd-logind[1486]: New session 13 of user core. Mar 25 01:36:30.269082 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 01:36:30.441772 sshd[3848]: Connection closed by 10.0.0.1 port 41360 Mar 25 01:36:30.443506 sshd-session[3845]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:30.450789 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:41360.service: Deactivated successfully. Mar 25 01:36:30.453984 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 01:36:30.459030 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Mar 25 01:36:30.460118 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:41372.service - OpenSSH per-connection server daemon (10.0.0.1:41372). Mar 25 01:36:30.461695 systemd-logind[1486]: Removed session 13. 
Mar 25 01:36:30.508019 sshd[3860]: Accepted publickey for core from 10.0.0.1 port 41372 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:30.510260 sshd-session[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:30.517806 systemd-logind[1486]: New session 14 of user core. Mar 25 01:36:30.531056 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 01:36:30.645469 sshd[3863]: Connection closed by 10.0.0.1 port 41372 Mar 25 01:36:30.645799 sshd-session[3860]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:30.650089 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:41372.service: Deactivated successfully. Mar 25 01:36:30.652292 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 01:36:30.653073 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Mar 25 01:36:30.653857 systemd-logind[1486]: Removed session 14. Mar 25 01:36:30.689052 systemd-networkd[1425]: lxc_health: Gained IPv6LL Mar 25 01:36:31.393021 systemd-networkd[1425]: lxc49d73e4b307e: Gained IPv6LL Mar 25 01:36:31.777020 systemd-networkd[1425]: lxc7afc127c8adc: Gained IPv6LL Mar 25 01:36:33.390637 containerd[1501]: time="2025-03-25T01:36:33.390590659Z" level=info msg="connecting to shim 3d5b3a6394eef6aef2f27f0ab14fac262fe2ff96ab9ba9537ddd8117f92f8679" address="unix:///run/containerd/s/6b4699b4a086758d01ef00517bb078ebb5d5d12b599e18ea259e0b6eef43d7c1" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:36:33.394266 containerd[1501]: time="2025-03-25T01:36:33.394195683Z" level=info msg="connecting to shim 35a40d7d91983c04d53b72b599f8d8f364adda95391dc440da0190922e6031e5" address="unix:///run/containerd/s/42723b951def240998744869bb2114bdca96b1ad9b959240f0d5582611e8aacc" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:36:33.421015 systemd[1]: Started cri-containerd-3d5b3a6394eef6aef2f27f0ab14fac262fe2ff96ab9ba9537ddd8117f92f8679.scope - libcontainer container 3d5b3a6394eef6aef2f27f0ab14fac262fe2ff96ab9ba9537ddd8117f92f8679. 
Mar 25 01:36:33.424389 systemd[1]: Started cri-containerd-35a40d7d91983c04d53b72b599f8d8f364adda95391dc440da0190922e6031e5.scope - libcontainer container 35a40d7d91983c04d53b72b599f8d8f364adda95391dc440da0190922e6031e5. Mar 25 01:36:33.433374 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 25 01:36:33.436750 systemd-resolved[1341]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 25 01:36:33.463844 containerd[1501]: time="2025-03-25T01:36:33.463800866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pd92h,Uid:c3b483c3-1013-4ef8-bad5-72f905f90892,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d5b3a6394eef6aef2f27f0ab14fac262fe2ff96ab9ba9537ddd8117f92f8679\"" Mar 25 01:36:33.469973 containerd[1501]: time="2025-03-25T01:36:33.469933300Z" level=info msg="CreateContainer within sandbox \"3d5b3a6394eef6aef2f27f0ab14fac262fe2ff96ab9ba9537ddd8117f92f8679\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:36:33.472492 containerd[1501]: time="2025-03-25T01:36:33.472383193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fdln5,Uid:304e5e9a-ca27-46cb-abc9-ae95a887915b,Namespace:kube-system,Attempt:0,} returns sandbox id \"35a40d7d91983c04d53b72b599f8d8f364adda95391dc440da0190922e6031e5\"" Mar 25 01:36:33.475615 containerd[1501]: time="2025-03-25T01:36:33.475592123Z" level=info msg="CreateContainer within sandbox \"35a40d7d91983c04d53b72b599f8d8f364adda95391dc440da0190922e6031e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:36:33.485885 containerd[1501]: time="2025-03-25T01:36:33.485842183Z" level=info msg="Container cf8e647412db72a8d9a1695f4a494ca170f2fd260012abde2353457bf97c35d0: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:36:33.494131 containerd[1501]: time="2025-03-25T01:36:33.494096063Z" level=info msg="CreateContainer within sandbox \"35a40d7d91983c04d53b72b599f8d8f364adda95391dc440da0190922e6031e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf8e647412db72a8d9a1695f4a494ca170f2fd260012abde2353457bf97c35d0\"" 
Mar 25 01:36:33.494920 containerd[1501]: time="2025-03-25T01:36:33.494501855Z" level=info msg="StartContainer for \"cf8e647412db72a8d9a1695f4a494ca170f2fd260012abde2353457bf97c35d0\"" Mar 25 01:36:33.495397 containerd[1501]: time="2025-03-25T01:36:33.495375367Z" level=info msg="connecting to shim cf8e647412db72a8d9a1695f4a494ca170f2fd260012abde2353457bf97c35d0" address="unix:///run/containerd/s/42723b951def240998744869bb2114bdca96b1ad9b959240f0d5582611e8aacc" protocol=ttrpc version=3 Mar 25 01:36:33.495978 containerd[1501]: time="2025-03-25T01:36:33.495942874Z" level=info msg="Container 300f382cbe58e246d6ce6f18060c3302dcdfbfef67c812e546b1b954317d9d95: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:36:33.516095 systemd[1]: Started cri-containerd-cf8e647412db72a8d9a1695f4a494ca170f2fd260012abde2353457bf97c35d0.scope - libcontainer container cf8e647412db72a8d9a1695f4a494ca170f2fd260012abde2353457bf97c35d0. 
Mar 25 01:36:33.518340 containerd[1501]: time="2025-03-25T01:36:33.518289154Z" level=info msg="CreateContainer within sandbox \"3d5b3a6394eef6aef2f27f0ab14fac262fe2ff96ab9ba9537ddd8117f92f8679\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"300f382cbe58e246d6ce6f18060c3302dcdfbfef67c812e546b1b954317d9d95\"" Mar 25 01:36:33.520919 containerd[1501]: time="2025-03-25T01:36:33.520652124Z" level=info msg="StartContainer for \"300f382cbe58e246d6ce6f18060c3302dcdfbfef67c812e546b1b954317d9d95\"" Mar 25 01:36:33.522945 containerd[1501]: time="2025-03-25T01:36:33.522915457Z" level=info msg="connecting to shim 300f382cbe58e246d6ce6f18060c3302dcdfbfef67c812e546b1b954317d9d95" address="unix:///run/containerd/s/6b4699b4a086758d01ef00517bb078ebb5d5d12b599e18ea259e0b6eef43d7c1" protocol=ttrpc version=3 Mar 25 01:36:33.546037 systemd[1]: Started cri-containerd-300f382cbe58e246d6ce6f18060c3302dcdfbfef67c812e546b1b954317d9d95.scope - libcontainer container 300f382cbe58e246d6ce6f18060c3302dcdfbfef67c812e546b1b954317d9d95. Mar 25 01:36:33.553693 containerd[1501]: time="2025-03-25T01:36:33.553581781Z" level=info msg="StartContainer for \"cf8e647412db72a8d9a1695f4a494ca170f2fd260012abde2353457bf97c35d0\" returns successfully" Mar 25 01:36:33.582542 containerd[1501]: time="2025-03-25T01:36:33.582479342Z" level=info msg="StartContainer for \"300f382cbe58e246d6ce6f18060c3302dcdfbfef67c812e546b1b954317d9d95\" returns successfully" Mar 25 01:36:34.388164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount384872806.mount: Deactivated successfully. 
Mar 25 01:36:34.422043 kubelet[2612]: I0325 01:36:34.421977 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pd92h" podStartSLOduration=41.421961534 podStartE2EDuration="41.421961534s" podCreationTimestamp="2025-03-25 01:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:36:34.421313507 +0000 UTC m=+47.215065262" watchObservedRunningTime="2025-03-25 01:36:34.421961534 +0000 UTC m=+47.215713289" Mar 25 01:36:35.661124 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:41374.service - OpenSSH per-connection server daemon (10.0.0.1:41374). Mar 25 01:36:35.712039 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 41374 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:35.713737 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:35.717913 systemd-logind[1486]: New session 15 of user core. Mar 25 01:36:35.727023 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 01:36:35.836619 sshd[4053]: Connection closed by 10.0.0.1 port 41374 Mar 25 01:36:35.836977 sshd-session[4051]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:35.841231 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:41374.service: Deactivated successfully. Mar 25 01:36:35.843414 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 01:36:35.844092 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Mar 25 01:36:35.844951 systemd-logind[1486]: Removed session 15. Mar 25 01:36:40.849042 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:44850.service - OpenSSH per-connection server daemon (10.0.0.1:44850). 
Mar 25 01:36:40.894779 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 44850 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:40.896223 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:40.900484 systemd-logind[1486]: New session 16 of user core. Mar 25 01:36:40.907002 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 01:36:41.015004 sshd[4069]: Connection closed by 10.0.0.1 port 44850 Mar 25 01:36:41.015471 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:41.023708 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:44850.service: Deactivated successfully. Mar 25 01:36:41.025712 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 01:36:41.027424 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Mar 25 01:36:41.028756 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:44852.service - OpenSSH per-connection server daemon (10.0.0.1:44852). Mar 25 01:36:41.029495 systemd-logind[1486]: Removed session 16. Mar 25 01:36:41.076200 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 44852 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:41.078039 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:41.082737 systemd-logind[1486]: New session 17 of user core. Mar 25 01:36:41.096992 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 25 01:36:41.266655 sshd[4084]: Connection closed by 10.0.0.1 port 44852 Mar 25 01:36:41.267024 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:41.285594 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:44852.service: Deactivated successfully. Mar 25 01:36:41.287358 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 01:36:41.288838 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. 
Mar 25 01:36:41.290043 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:44864.service - OpenSSH per-connection server daemon (10.0.0.1:44864). Mar 25 01:36:41.290904 systemd-logind[1486]: Removed session 17. Mar 25 01:36:41.342664 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 44864 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:41.344226 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:41.348938 systemd-logind[1486]: New session 18 of user core. Mar 25 01:36:41.358990 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 25 01:36:42.198969 sshd[4097]: Connection closed by 10.0.0.1 port 44864 Mar 25 01:36:42.199440 sshd-session[4094]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:42.212153 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:44864.service: Deactivated successfully. Mar 25 01:36:42.214112 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 01:36:42.216617 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Mar 25 01:36:42.218377 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:44872.service - OpenSSH per-connection server daemon (10.0.0.1:44872). Mar 25 01:36:42.219968 systemd-logind[1486]: Removed session 18. Mar 25 01:36:42.270693 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 44872 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:42.272246 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:42.276775 systemd-logind[1486]: New session 19 of user core. Mar 25 01:36:42.283988 systemd[1]: Started session-19.scope - Session 19 of User core. 
Mar 25 01:36:42.491372 sshd[4118]: Connection closed by 10.0.0.1 port 44872 Mar 25 01:36:42.491922 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:42.501981 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:44872.service: Deactivated successfully. Mar 25 01:36:42.504054 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 01:36:42.505698 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Mar 25 01:36:42.507095 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:44888.service - OpenSSH per-connection server daemon (10.0.0.1:44888). Mar 25 01:36:42.507884 systemd-logind[1486]: Removed session 19. Mar 25 01:36:42.557217 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 44888 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:42.558737 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:42.565064 systemd-logind[1486]: New session 20 of user core. Mar 25 01:36:42.571005 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 01:36:42.678134 sshd[4132]: Connection closed by 10.0.0.1 port 44888 Mar 25 01:36:42.678336 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:42.682821 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:44888.service: Deactivated successfully. Mar 25 01:36:42.684973 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 01:36:42.685725 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Mar 25 01:36:42.686599 systemd-logind[1486]: Removed session 20. Mar 25 01:36:47.689607 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:44890.service - OpenSSH per-connection server daemon (10.0.0.1:44890). 
Mar 25 01:36:47.737526 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 44890 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:47.738767 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:47.742848 systemd-logind[1486]: New session 21 of user core. Mar 25 01:36:47.752995 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 25 01:36:47.854251 sshd[4153]: Connection closed by 10.0.0.1 port 44890 Mar 25 01:36:47.854584 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:47.858008 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:44890.service: Deactivated successfully. Mar 25 01:36:47.859844 systemd[1]: session-21.scope: Deactivated successfully. Mar 25 01:36:47.860497 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Mar 25 01:36:47.861369 systemd-logind[1486]: Removed session 21. Mar 25 01:36:52.869849 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:39698.service - OpenSSH per-connection server daemon (10.0.0.1:39698). Mar 25 01:36:52.916892 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 39698 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:52.918376 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:52.922310 systemd-logind[1486]: New session 22 of user core. Mar 25 01:36:52.936996 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 25 01:36:53.039479 sshd[4169]: Connection closed by 10.0.0.1 port 39698 Mar 25 01:36:53.039781 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:53.043869 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:39698.service: Deactivated successfully. Mar 25 01:36:53.046087 systemd[1]: session-22.scope: Deactivated successfully. Mar 25 01:36:53.046808 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. 
Mar 25 01:36:53.047668 systemd-logind[1486]: Removed session 22. Mar 25 01:36:58.052181 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:45522.service - OpenSSH per-connection server daemon (10.0.0.1:45522). Mar 25 01:36:58.095787 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 45522 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:58.097139 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:58.101103 systemd-logind[1486]: New session 23 of user core. Mar 25 01:36:58.107989 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 25 01:36:58.211248 sshd[4186]: Connection closed by 10.0.0.1 port 45522 Mar 25 01:36:58.211540 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Mar 25 01:36:58.224643 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:45522.service: Deactivated successfully. Mar 25 01:36:58.226612 systemd[1]: session-23.scope: Deactivated successfully. Mar 25 01:36:58.228293 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit. Mar 25 01:36:58.229756 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:45538.service - OpenSSH per-connection server daemon (10.0.0.1:45538). Mar 25 01:36:58.230538 systemd-logind[1486]: Removed session 23. Mar 25 01:36:58.283800 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 45538 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:36:58.285395 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:36:58.289364 systemd-logind[1486]: New session 24 of user core. Mar 25 01:36:58.299012 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 25 01:36:59.604478 kubelet[2612]: I0325 01:36:59.604416 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fdln5" podStartSLOduration=66.604400439 podStartE2EDuration="1m6.604400439s" podCreationTimestamp="2025-03-25 01:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:36:34.44284445 +0000 UTC m=+47.236596205" watchObservedRunningTime="2025-03-25 01:36:59.604400439 +0000 UTC m=+72.398152194" Mar 25 01:36:59.620276 containerd[1501]: time="2025-03-25T01:36:59.620205135Z" level=info msg="StopContainer for \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" with timeout 30 (s)" Mar 25 01:36:59.623016 containerd[1501]: time="2025-03-25T01:36:59.622982642Z" level=info msg="Stop container \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" with signal terminated" Mar 25 01:36:59.665082 systemd[1]: cri-containerd-7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f.scope: Deactivated successfully. 
Mar 25 01:36:59.666270 containerd[1501]: time="2025-03-25T01:36:59.665797827Z" level=info msg="received exit event container_id:\"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" id:\"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" pid:3416 exited_at:{seconds:1742866619 nanos:665294276}" Mar 25 01:36:59.666270 containerd[1501]: time="2025-03-25T01:36:59.666118375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" id:\"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" pid:3416 exited_at:{seconds:1742866619 nanos:665294276}" Mar 25 01:36:59.671529 containerd[1501]: time="2025-03-25T01:36:59.670839251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" id:\"2b9bcdc5189cd02a2ba1bf157a5fabbfab6ff2d30c5a0970becf40696ae436a5\" pid:4223 exited_at:{seconds:1742866619 nanos:668392251}" Mar 25 01:36:59.672734 containerd[1501]: time="2025-03-25T01:36:59.672701624Z" level=info msg="StopContainer for \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" with timeout 2 (s)" Mar 25 01:36:59.673568 containerd[1501]: time="2025-03-25T01:36:59.672957207Z" level=info msg="Stop container \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" with signal terminated" Mar 25 01:36:59.679337 containerd[1501]: time="2025-03-25T01:36:59.679052774Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:36:59.680442 systemd-networkd[1425]: lxc_health: Link DOWN Mar 25 01:36:59.680449 systemd-networkd[1425]: lxc_health: Lost carrier Mar 25 01:36:59.693883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f-rootfs.mount: Deactivated successfully. 
Mar 25 01:36:59.701484 systemd[1]: cri-containerd-fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7.scope: Deactivated successfully. Mar 25 01:36:59.701867 systemd[1]: cri-containerd-fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7.scope: Consumed 6.588s CPU time, 127.1M memory peak, 156K read from disk, 13.3M written to disk. Mar 25 01:36:59.702268 containerd[1501]: time="2025-03-25T01:36:59.702225852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" id:\"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" pid:3268 exited_at:{seconds:1742866619 nanos:701913339}" Mar 25 01:36:59.702370 containerd[1501]: time="2025-03-25T01:36:59.702318711Z" level=info msg="received exit event container_id:\"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" id:\"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" pid:3268 exited_at:{seconds:1742866619 nanos:701913339}" Mar 25 01:36:59.723447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7-rootfs.mount: Deactivated successfully. 
Mar 25 01:36:59.735723 containerd[1501]: time="2025-03-25T01:36:59.735673748Z" level=info msg="StopContainer for \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" returns successfully" Mar 25 01:36:59.736326 containerd[1501]: time="2025-03-25T01:36:59.736297551Z" level=info msg="StopPodSandbox for \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\"" Mar 25 01:36:59.738106 containerd[1501]: time="2025-03-25T01:36:59.738077385Z" level=info msg="StopContainer for \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" returns successfully" Mar 25 01:36:59.738495 containerd[1501]: time="2025-03-25T01:36:59.738450685Z" level=info msg="StopPodSandbox for \"c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b\"" Mar 25 01:36:59.743786 containerd[1501]: time="2025-03-25T01:36:59.743721040Z" level=info msg="Container to stop \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:36:59.744367 containerd[1501]: time="2025-03-25T01:36:59.743737302Z" level=info msg="Container to stop \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:36:59.744367 containerd[1501]: time="2025-03-25T01:36:59.743845210Z" level=info msg="Container to stop \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:36:59.744367 containerd[1501]: time="2025-03-25T01:36:59.743857294Z" level=info msg="Container to stop \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:36:59.744367 containerd[1501]: time="2025-03-25T01:36:59.743892210Z" level=info msg="Container to stop \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" 
Mar 25 01:36:59.744367 containerd[1501]: time="2025-03-25T01:36:59.743903913Z" level=info msg="Container to stop \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:36:59.766599 systemd[1]: cri-containerd-0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e.scope: Deactivated successfully. Mar 25 01:36:59.768479 systemd[1]: cri-containerd-c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b.scope: Deactivated successfully. Mar 25 01:36:59.770358 containerd[1501]: time="2025-03-25T01:36:59.770311390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" id:\"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" pid:2765 exit_status:137 exited_at:{seconds:1742866619 nanos:767770800}" Mar 25 01:36:59.796051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e-rootfs.mount: Deactivated successfully. Mar 25 01:36:59.798542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b-rootfs.mount: Deactivated successfully. 
Mar 25 01:36:59.803630 containerd[1501]: time="2025-03-25T01:36:59.803598015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b\" id:\"c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b\" pid:2851 exit_status:137 exited_at:{seconds:1742866619 nanos:773210872}"
Mar 25 01:36:59.803978 containerd[1501]: time="2025-03-25T01:36:59.803852777Z" level=info msg="shim disconnected" id=0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e namespace=k8s.io
Mar 25 01:36:59.803978 containerd[1501]: time="2025-03-25T01:36:59.803905208Z" level=warning msg="cleaning up after shim disconnected" id=0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e namespace=k8s.io
Mar 25 01:36:59.805549 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e-shm.mount: Deactivated successfully.
Mar 25 01:36:59.830163 containerd[1501]: time="2025-03-25T01:36:59.803914155Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:36:59.830293 containerd[1501]: time="2025-03-25T01:36:59.805732072Z" level=info msg="shim disconnected" id=c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b namespace=k8s.io
Mar 25 01:36:59.830293 containerd[1501]: time="2025-03-25T01:36:59.830228512Z" level=warning msg="cleaning up after shim disconnected" id=c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b namespace=k8s.io
Mar 25 01:36:59.830293 containerd[1501]: time="2025-03-25T01:36:59.830238160Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 25 01:36:59.831411 containerd[1501]: time="2025-03-25T01:36:59.822054746Z" level=info msg="TearDown network for sandbox \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" successfully"
Mar 25 01:36:59.831411 containerd[1501]: time="2025-03-25T01:36:59.830519362Z" level=info msg="StopPodSandbox for \"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" returns successfully"
Mar 25 01:36:59.831411 containerd[1501]: time="2025-03-25T01:36:59.824691391Z" level=info msg="TearDown network for sandbox \"c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b\" successfully"
Mar 25 01:36:59.831411 containerd[1501]: time="2025-03-25T01:36:59.830627110Z" level=info msg="StopPodSandbox for \"c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b\" returns successfully"
Mar 25 01:36:59.836723 containerd[1501]: time="2025-03-25T01:36:59.836682521Z" level=info msg="received exit event sandbox_id:\"0e708315876ea4488960c39644be458751b4095294cbfff26670b0b01b96187e\" exit_status:137 exited_at:{seconds:1742866619 nanos:767770800}"
Mar 25 01:36:59.837984 containerd[1501]: time="2025-03-25T01:36:59.837954284Z" level=info msg="received exit event sandbox_id:\"c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b\" exit_status:137 exited_at:{seconds:1742866619 nanos:773210872}"
Mar 25 01:36:59.928175 kubelet[2612]: I0325 01:36:59.928059 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ff86e4e-2980-4c86-b328-6fdc209e86c4-clustermesh-secrets\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928175 kubelet[2612]: I0325 01:36:59.928094 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-host-proc-sys-net\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928175 kubelet[2612]: I0325 01:36:59.928110 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-bpf-maps\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928175 kubelet[2612]: I0325 01:36:59.928127 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cni-path\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928175 kubelet[2612]: I0325 01:36:59.928140 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-lib-modules\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928413 kubelet[2612]: I0325 01:36:59.928187 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.928413 kubelet[2612]: I0325 01:36:59.928202 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.928413 kubelet[2612]: I0325 01:36:59.928202 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.928413 kubelet[2612]: I0325 01:36:59.928264 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-host-proc-sys-kernel\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928413 kubelet[2612]: I0325 01:36:59.928283 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-hostproc\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928534 kubelet[2612]: I0325 01:36:59.928374 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-config-path\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928534 kubelet[2612]: I0325 01:36:59.928395 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ff86e4e-2980-4c86-b328-6fdc209e86c4-hubble-tls\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928534 kubelet[2612]: I0325 01:36:59.928409 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-etc-cni-netd\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928534 kubelet[2612]: I0325 01:36:59.928423 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-run\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928534 kubelet[2612]: I0325 01:36:59.928439 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7fjp\" (UniqueName: \"kubernetes.io/projected/dd04a783-1f09-497a-8a63-eba9a7e3f810-kube-api-access-m7fjp\") pod \"dd04a783-1f09-497a-8a63-eba9a7e3f810\" (UID: \"dd04a783-1f09-497a-8a63-eba9a7e3f810\") "
Mar 25 01:36:59.928534 kubelet[2612]: I0325 01:36:59.928457 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd04a783-1f09-497a-8a63-eba9a7e3f810-cilium-config-path\") pod \"dd04a783-1f09-497a-8a63-eba9a7e3f810\" (UID: \"dd04a783-1f09-497a-8a63-eba9a7e3f810\") "
Mar 25 01:36:59.928670 kubelet[2612]: I0325 01:36:59.928473 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxt44\" (UniqueName: \"kubernetes.io/projected/5ff86e4e-2980-4c86-b328-6fdc209e86c4-kube-api-access-sxt44\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928670 kubelet[2612]: I0325 01:36:59.928487 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-xtables-lock\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928670 kubelet[2612]: I0325 01:36:59.928500 2612 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-cgroup\") pod \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\" (UID: \"5ff86e4e-2980-4c86-b328-6fdc209e86c4\") "
Mar 25 01:36:59.928670 kubelet[2612]: I0325 01:36:59.928533 2612 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 25 01:36:59.928670 kubelet[2612]: I0325 01:36:59.928543 2612 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 25 01:36:59.928670 kubelet[2612]: I0325 01:36:59.928552 2612 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 25 01:36:59.931812 kubelet[2612]: I0325 01:36:59.928322 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.932228 kubelet[2612]: I0325 01:36:59.928326 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.932228 kubelet[2612]: I0325 01:36:59.928336 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.932228 kubelet[2612]: I0325 01:36:59.928570 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.932228 kubelet[2612]: I0325 01:36:59.931108 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.932228 kubelet[2612]: I0325 01:36:59.931737 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.932381 kubelet[2612]: I0325 01:36:59.931775 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff86e4e-2980-4c86-b328-6fdc209e86c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 25 01:36:59.932381 kubelet[2612]: I0325 01:36:59.932281 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 25 01:36:59.932474 kubelet[2612]: I0325 01:36:59.932448 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 25 01:36:59.932910 kubelet[2612]: I0325 01:36:59.932847 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd04a783-1f09-497a-8a63-eba9a7e3f810-kube-api-access-m7fjp" (OuterVolumeSpecName: "kube-api-access-m7fjp") pod "dd04a783-1f09-497a-8a63-eba9a7e3f810" (UID: "dd04a783-1f09-497a-8a63-eba9a7e3f810"). InnerVolumeSpecName "kube-api-access-m7fjp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 25 01:36:59.934694 kubelet[2612]: I0325 01:36:59.934668 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff86e4e-2980-4c86-b328-6fdc209e86c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 25 01:36:59.934974 kubelet[2612]: I0325 01:36:59.934946 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd04a783-1f09-497a-8a63-eba9a7e3f810-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dd04a783-1f09-497a-8a63-eba9a7e3f810" (UID: "dd04a783-1f09-497a-8a63-eba9a7e3f810"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 25 01:36:59.935162 kubelet[2612]: I0325 01:36:59.935139 2612 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff86e4e-2980-4c86-b328-6fdc209e86c4-kube-api-access-sxt44" (OuterVolumeSpecName: "kube-api-access-sxt44") pod "5ff86e4e-2980-4c86-b328-6fdc209e86c4" (UID: "5ff86e4e-2980-4c86-b328-6fdc209e86c4"). InnerVolumeSpecName "kube-api-access-sxt44". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 25 01:37:00.029484 kubelet[2612]: I0325 01:37:00.029450 2612 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sxt44\" (UniqueName: \"kubernetes.io/projected/5ff86e4e-2980-4c86-b328-6fdc209e86c4-kube-api-access-sxt44\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029484 kubelet[2612]: I0325 01:37:00.029470 2612 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029484 kubelet[2612]: I0325 01:37:00.029481 2612 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029586 kubelet[2612]: I0325 01:37:00.029489 2612 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ff86e4e-2980-4c86-b328-6fdc209e86c4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029586 kubelet[2612]: I0325 01:37:00.029499 2612 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029586 kubelet[2612]: I0325 01:37:00.029508 2612 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029586 kubelet[2612]: I0325 01:37:00.029518 2612 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029586 kubelet[2612]: I0325 01:37:00.029526 2612 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029586 kubelet[2612]: I0325 01:37:00.029534 2612 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ff86e4e-2980-4c86-b328-6fdc209e86c4-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029586 kubelet[2612]: I0325 01:37:00.029541 2612 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029586 kubelet[2612]: I0325 01:37:00.029550 2612 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ff86e4e-2980-4c86-b328-6fdc209e86c4-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029773 kubelet[2612]: I0325 01:37:00.029558 2612 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m7fjp\" (UniqueName: \"kubernetes.io/projected/dd04a783-1f09-497a-8a63-eba9a7e3f810-kube-api-access-m7fjp\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.029773 kubelet[2612]: I0325 01:37:00.029566 2612 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd04a783-1f09-497a-8a63-eba9a7e3f810-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 25 01:37:00.455986 kubelet[2612]: I0325 01:37:00.455954 2612 scope.go:117] "RemoveContainer" containerID="7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f"
Mar 25 01:37:00.457675 containerd[1501]: time="2025-03-25T01:37:00.457644357Z" level=info msg="RemoveContainer for \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\""
Mar 25 01:37:00.462894 systemd[1]: Removed slice kubepods-besteffort-poddd04a783_1f09_497a_8a63_eba9a7e3f810.slice - libcontainer container kubepods-besteffort-poddd04a783_1f09_497a_8a63_eba9a7e3f810.slice.
Mar 25 01:37:00.467490 systemd[1]: Removed slice kubepods-burstable-pod5ff86e4e_2980_4c86_b328_6fdc209e86c4.slice - libcontainer container kubepods-burstable-pod5ff86e4e_2980_4c86_b328_6fdc209e86c4.slice.
Mar 25 01:37:00.467592 systemd[1]: kubepods-burstable-pod5ff86e4e_2980_4c86_b328_6fdc209e86c4.slice: Consumed 6.694s CPU time, 127.4M memory peak, 204K read from disk, 13.3M written to disk.
Mar 25 01:37:00.493340 containerd[1501]: time="2025-03-25T01:37:00.493293938Z" level=info msg="RemoveContainer for \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" returns successfully"
Mar 25 01:37:00.493629 kubelet[2612]: I0325 01:37:00.493596 2612 scope.go:117] "RemoveContainer" containerID="7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f"
Mar 25 01:37:00.493856 containerd[1501]: time="2025-03-25T01:37:00.493814541Z" level=error msg="ContainerStatus for \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\": not found"
Mar 25 01:37:00.494005 kubelet[2612]: E0325 01:37:00.493975 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\": not found" containerID="7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f"
Mar 25 01:37:00.494099 kubelet[2612]: I0325 01:37:00.494005 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f"} err="failed to get container status \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a9ca3c5812b3d87dcb464e68d04cba96b1414b2e13e4a1e6c35320f2cc76b0f\": not found"
Mar 25 01:37:00.494099 kubelet[2612]: I0325 01:37:00.494057 2612 scope.go:117] "RemoveContainer" containerID="fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7"
Mar 25 01:37:00.495731 containerd[1501]: time="2025-03-25T01:37:00.495689335Z" level=info msg="RemoveContainer for \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\""
Mar 25 01:37:00.500558 containerd[1501]: time="2025-03-25T01:37:00.500528522Z" level=info msg="RemoveContainer for \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" returns successfully"
Mar 25 01:37:00.500738 kubelet[2612]: I0325 01:37:00.500689 2612 scope.go:117] "RemoveContainer" containerID="b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe"
Mar 25 01:37:00.501987 containerd[1501]: time="2025-03-25T01:37:00.501951905Z" level=info msg="RemoveContainer for \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\""
Mar 25 01:37:00.506463 containerd[1501]: time="2025-03-25T01:37:00.506433533Z" level=info msg="RemoveContainer for \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\" returns successfully"
Mar 25 01:37:00.506614 kubelet[2612]: I0325 01:37:00.506590 2612 scope.go:117] "RemoveContainer" containerID="01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55"
Mar 25 01:37:00.508446 containerd[1501]: time="2025-03-25T01:37:00.508422676Z" level=info msg="RemoveContainer for \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\""
Mar 25 01:37:00.512605 containerd[1501]: time="2025-03-25T01:37:00.512576603Z" level=info msg="RemoveContainer for \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\" returns successfully"
Mar 25 01:37:00.512715 kubelet[2612]: I0325 01:37:00.512696 2612 scope.go:117] "RemoveContainer" containerID="5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4"
Mar 25 01:37:00.513926 containerd[1501]: time="2025-03-25T01:37:00.513905805Z" level=info msg="RemoveContainer for \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\""
Mar 25 01:37:00.517250 containerd[1501]: time="2025-03-25T01:37:00.517224260Z" level=info msg="RemoveContainer for \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\" returns successfully"
Mar 25 01:37:00.517428 kubelet[2612]: I0325 01:37:00.517361 2612 scope.go:117] "RemoveContainer" containerID="27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a"
Mar 25 01:37:00.518525 containerd[1501]: time="2025-03-25T01:37:00.518500641Z" level=info msg="RemoveContainer for \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\""
Mar 25 01:37:00.528115 containerd[1501]: time="2025-03-25T01:37:00.528084352Z" level=info msg="RemoveContainer for \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\" returns successfully"
Mar 25 01:37:00.528284 kubelet[2612]: I0325 01:37:00.528248 2612 scope.go:117] "RemoveContainer" containerID="fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7"
Mar 25 01:37:00.528474 containerd[1501]: time="2025-03-25T01:37:00.528440519Z" level=error msg="ContainerStatus for \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\": not found"
Mar 25 01:37:00.528597 kubelet[2612]: E0325 01:37:00.528568 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\": not found" containerID="fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7"
Mar 25 01:37:00.528627 kubelet[2612]: I0325 01:37:00.528601 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7"} err="failed to get container status \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb8ab3938684b3cfa6ac002647381ac3d81f225b0d3020df0245c2012ae779e7\": not found"
Mar 25 01:37:00.528627 kubelet[2612]: I0325 01:37:00.528626 2612 scope.go:117] "RemoveContainer" containerID="b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe"
Mar 25 01:37:00.528792 containerd[1501]: time="2025-03-25T01:37:00.528759864Z" level=error msg="ContainerStatus for \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\": not found"
Mar 25 01:37:00.528920 kubelet[2612]: E0325 01:37:00.528891 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\": not found" containerID="b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe"
Mar 25 01:37:00.528978 kubelet[2612]: I0325 01:37:00.528919 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe"} err="failed to get container status \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"b559cb3a8465f2b728ff9d21e7aca6298f683f41e552d74e56b5a5c570b2c8fe\": not found"
Mar 25 01:37:00.528978 kubelet[2612]: I0325 01:37:00.528940 2612 scope.go:117] "RemoveContainer" containerID="01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55"
Mar 25 01:37:00.529121 containerd[1501]: time="2025-03-25T01:37:00.529091995Z" level=error msg="ContainerStatus for \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\": not found"
Mar 25 01:37:00.529212 kubelet[2612]: E0325 01:37:00.529194 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\": not found" containerID="01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55"
Mar 25 01:37:00.529246 kubelet[2612]: I0325 01:37:00.529212 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55"} err="failed to get container status \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\": rpc error: code = NotFound desc = an error occurred when try to find container \"01426ccc86fb4f7d335d0b0675280b8c13b8ba98baeabe66db7cafb963a8dd55\": not found"
Mar 25 01:37:00.529246 kubelet[2612]: I0325 01:37:00.529224 2612 scope.go:117] "RemoveContainer" containerID="5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4"
Mar 25 01:37:00.529388 containerd[1501]: time="2025-03-25T01:37:00.529362446Z" level=error msg="ContainerStatus for \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\": not found"
Mar 25 01:37:00.529480 kubelet[2612]: E0325 01:37:00.529456 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\": not found" containerID="5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4"
Mar 25 01:37:00.529558 kubelet[2612]: I0325 01:37:00.529477 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4"} err="failed to get container status \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c9fde8432bd101b14c7588a2d028b07545008f9c5653b875da242e689e8a4b4\": not found"
Mar 25 01:37:00.529558 kubelet[2612]: I0325 01:37:00.529495 2612 scope.go:117] "RemoveContainer" containerID="27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a"
Mar 25 01:37:00.529723 containerd[1501]: time="2025-03-25T01:37:00.529685388Z" level=error msg="ContainerStatus for \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\": not found"
Mar 25 01:37:00.529834 kubelet[2612]: E0325 01:37:00.529811 2612 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\": not found" containerID="27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a"
Mar 25 01:37:00.529860 kubelet[2612]: I0325 01:37:00.529842 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a"} err="failed to get container status \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\": rpc error: code = NotFound desc = an error occurred when try to find container \"27f219bd2d807bfd17d7ee94421574fb5d85856a0850a2b34adf5f32368cf19a\": not found"
Mar 25 01:37:00.693249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c129fe87a2840d337f00406c231ea0cfa411dcba1bb768a1ce86757b227cbd6b-shm.mount: Deactivated successfully.
Mar 25 01:37:00.693383 systemd[1]: var-lib-kubelet-pods-dd04a783\x2d1f09\x2d497a\x2d8a63\x2deba9a7e3f810-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm7fjp.mount: Deactivated successfully.
Mar 25 01:37:00.693474 systemd[1]: var-lib-kubelet-pods-5ff86e4e\x2d2980\x2d4c86\x2db328\x2d6fdc209e86c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsxt44.mount: Deactivated successfully.
Mar 25 01:37:00.693553 systemd[1]: var-lib-kubelet-pods-5ff86e4e\x2d2980\x2d4c86\x2db328\x2d6fdc209e86c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 25 01:37:00.693639 systemd[1]: var-lib-kubelet-pods-5ff86e4e\x2d2980\x2d4c86\x2db328\x2d6fdc209e86c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 25 01:37:01.311351 kubelet[2612]: I0325 01:37:01.311308 2612 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff86e4e-2980-4c86-b328-6fdc209e86c4" path="/var/lib/kubelet/pods/5ff86e4e-2980-4c86-b328-6fdc209e86c4/volumes"
Mar 25 01:37:01.312148 kubelet[2612]: I0325 01:37:01.312122 2612 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd04a783-1f09-497a-8a63-eba9a7e3f810" path="/var/lib/kubelet/pods/dd04a783-1f09-497a-8a63-eba9a7e3f810/volumes"
Mar 25 01:37:01.587433 sshd[4201]: Connection closed by 10.0.0.1 port 45538
Mar 25 01:37:01.587899 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
Mar 25 01:37:01.600814 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:45538.service: Deactivated successfully.
Mar 25 01:37:01.602814 systemd[1]: session-24.scope: Deactivated successfully.
Mar 25 01:37:01.604414 systemd-logind[1486]: Session 24 logged out. Waiting for processes to exit. Mar 25 01:37:01.605778 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:45548.service - OpenSSH per-connection server daemon (10.0.0.1:45548). Mar 25 01:37:01.606603 systemd-logind[1486]: Removed session 24. Mar 25 01:37:01.657679 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 45548 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:37:01.659294 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:01.663640 systemd-logind[1486]: New session 25 of user core. Mar 25 01:37:01.673003 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 25 01:37:02.353607 kubelet[2612]: E0325 01:37:02.353568 2612 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 01:37:02.446433 sshd[4355]: Connection closed by 10.0.0.1 port 45548 Mar 25 01:37:02.448200 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Mar 25 01:37:02.460899 kubelet[2612]: I0325 01:37:02.458507 2612 memory_manager.go:355] "RemoveStaleState removing state" podUID="dd04a783-1f09-497a-8a63-eba9a7e3f810" containerName="cilium-operator" Mar 25 01:37:02.460899 kubelet[2612]: I0325 01:37:02.458533 2612 memory_manager.go:355] "RemoveStaleState removing state" podUID="5ff86e4e-2980-4c86-b328-6fdc209e86c4" containerName="cilium-agent" Mar 25 01:37:02.460283 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:45548.service: Deactivated successfully. Mar 25 01:37:02.462420 systemd[1]: session-25.scope: Deactivated successfully. Mar 25 01:37:02.465852 systemd-logind[1486]: Session 25 logged out. Waiting for processes to exit. Mar 25 01:37:02.473107 systemd[1]: Started sshd@25-10.0.0.139:22-10.0.0.1:45556.service - OpenSSH per-connection server daemon (10.0.0.1:45556). 
Mar 25 01:37:02.474281 systemd-logind[1486]: Removed session 25. Mar 25 01:37:02.491696 systemd[1]: Created slice kubepods-burstable-pod9413d5c8_339f_4799_8599_25477f4be700.slice - libcontainer container kubepods-burstable-pod9413d5c8_339f_4799_8599_25477f4be700.slice. Mar 25 01:37:02.522174 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 45556 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:37:02.523549 sshd-session[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:02.527469 systemd-logind[1486]: New session 26 of user core. Mar 25 01:37:02.537996 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 25 01:37:02.542692 kubelet[2612]: I0325 01:37:02.542650 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9413d5c8-339f-4799-8599-25477f4be700-cilium-ipsec-secrets\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.542692 kubelet[2612]: I0325 01:37:02.542684 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-bpf-maps\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.542760 kubelet[2612]: I0325 01:37:02.542699 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-host-proc-sys-net\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.542760 kubelet[2612]: I0325 01:37:02.542714 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-hostproc\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.542760 kubelet[2612]: I0325 01:37:02.542729 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9413d5c8-339f-4799-8599-25477f4be700-clustermesh-secrets\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.542836 kubelet[2612]: I0325 01:37:02.542764 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-host-proc-sys-kernel\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.542836 kubelet[2612]: I0325 01:37:02.542780 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-cilium-run\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.542836 kubelet[2612]: I0325 01:37:02.542794 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-cilium-cgroup\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.542922 kubelet[2612]: I0325 01:37:02.542897 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-lib-modules\") pod \"cilium-ct85n\" 
(UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.542945 kubelet[2612]: I0325 01:37:02.542928 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-xtables-lock\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.543001 kubelet[2612]: I0325 01:37:02.542950 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9413d5c8-339f-4799-8599-25477f4be700-cilium-config-path\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.543001 kubelet[2612]: I0325 01:37:02.542968 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-cni-path\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.543001 kubelet[2612]: I0325 01:37:02.542986 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9413d5c8-339f-4799-8599-25477f4be700-etc-cni-netd\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.543084 kubelet[2612]: I0325 01:37:02.543054 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pnd6\" (UniqueName: \"kubernetes.io/projected/9413d5c8-339f-4799-8599-25477f4be700-kube-api-access-6pnd6\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.543108 kubelet[2612]: 
I0325 01:37:02.543090 2612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9413d5c8-339f-4799-8599-25477f4be700-hubble-tls\") pod \"cilium-ct85n\" (UID: \"9413d5c8-339f-4799-8599-25477f4be700\") " pod="kube-system/cilium-ct85n" Mar 25 01:37:02.589548 sshd[4369]: Connection closed by 10.0.0.1 port 45556 Mar 25 01:37:02.589938 sshd-session[4366]: pam_unix(sshd:session): session closed for user core Mar 25 01:37:02.601714 systemd[1]: sshd@25-10.0.0.139:22-10.0.0.1:45556.service: Deactivated successfully. Mar 25 01:37:02.603771 systemd[1]: session-26.scope: Deactivated successfully. Mar 25 01:37:02.605290 systemd-logind[1486]: Session 26 logged out. Waiting for processes to exit. Mar 25 01:37:02.606731 systemd[1]: Started sshd@26-10.0.0.139:22-10.0.0.1:45570.service - OpenSSH per-connection server daemon (10.0.0.1:45570). Mar 25 01:37:02.607713 systemd-logind[1486]: Removed session 26. Mar 25 01:37:02.657364 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 45570 ssh2: RSA SHA256:VOILGeYr0VxGzMkfROHRL0tX2jep8W2qO25Wzz4I2B0 Mar 25 01:37:02.659031 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:37:02.665066 systemd-logind[1486]: New session 27 of user core. Mar 25 01:37:02.674992 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 25 01:37:02.795683 containerd[1501]: time="2025-03-25T01:37:02.795628312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ct85n,Uid:9413d5c8-339f-4799-8599-25477f4be700,Namespace:kube-system,Attempt:0,}" Mar 25 01:37:02.812571 containerd[1501]: time="2025-03-25T01:37:02.812522885Z" level=info msg="connecting to shim 5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199" address="unix:///run/containerd/s/7925b0cb2971ff43595924543263733a6e833eb890444358990f36c49abbe067" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:37:02.835009 systemd[1]: Started cri-containerd-5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199.scope - libcontainer container 5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199. Mar 25 01:37:02.858421 containerd[1501]: time="2025-03-25T01:37:02.858000961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ct85n,Uid:9413d5c8-339f-4799-8599-25477f4be700,Namespace:kube-system,Attempt:0,} returns sandbox id \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\"" Mar 25 01:37:02.873177 containerd[1501]: time="2025-03-25T01:37:02.873128635Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 01:37:02.879424 containerd[1501]: time="2025-03-25T01:37:02.879388453Z" level=info msg="Container 91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:37:02.887531 containerd[1501]: time="2025-03-25T01:37:02.887488261Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2\"" Mar 25 01:37:02.887917 containerd[1501]: time="2025-03-25T01:37:02.887892559Z" level=info 
msg="StartContainer for \"91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2\"" Mar 25 01:37:02.888640 containerd[1501]: time="2025-03-25T01:37:02.888597677Z" level=info msg="connecting to shim 91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2" address="unix:///run/containerd/s/7925b0cb2971ff43595924543263733a6e833eb890444358990f36c49abbe067" protocol=ttrpc version=3 Mar 25 01:37:02.907005 systemd[1]: Started cri-containerd-91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2.scope - libcontainer container 91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2. Mar 25 01:37:02.936409 containerd[1501]: time="2025-03-25T01:37:02.936371931Z" level=info msg="StartContainer for \"91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2\" returns successfully" Mar 25 01:37:02.943978 systemd[1]: cri-containerd-91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2.scope: Deactivated successfully. Mar 25 01:37:02.945492 containerd[1501]: time="2025-03-25T01:37:02.945447848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2\" id:\"91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2\" pid:4448 exited_at:{seconds:1742866622 nanos:944909282}" Mar 25 01:37:02.945492 containerd[1501]: time="2025-03-25T01:37:02.945454671Z" level=info msg="received exit event container_id:\"91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2\" id:\"91ab084e0cc41c178597c4869acee4972a1c546912d583988c50814ae5ec1ac2\" pid:4448 exited_at:{seconds:1742866622 nanos:944909282}" Mar 25 01:37:03.478412 containerd[1501]: time="2025-03-25T01:37:03.478111138Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 01:37:03.484865 containerd[1501]: time="2025-03-25T01:37:03.484817555Z" 
level=info msg="Container ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:37:03.492325 containerd[1501]: time="2025-03-25T01:37:03.492285338Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654\"" Mar 25 01:37:03.492688 containerd[1501]: time="2025-03-25T01:37:03.492650941Z" level=info msg="StartContainer for \"ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654\"" Mar 25 01:37:03.493509 containerd[1501]: time="2025-03-25T01:37:03.493477992Z" level=info msg="connecting to shim ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654" address="unix:///run/containerd/s/7925b0cb2971ff43595924543263733a6e833eb890444358990f36c49abbe067" protocol=ttrpc version=3 Mar 25 01:37:03.515002 systemd[1]: Started cri-containerd-ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654.scope - libcontainer container ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654. Mar 25 01:37:03.542055 containerd[1501]: time="2025-03-25T01:37:03.542019667Z" level=info msg="StartContainer for \"ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654\" returns successfully" Mar 25 01:37:03.548118 systemd[1]: cri-containerd-ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654.scope: Deactivated successfully. 
Mar 25 01:37:03.548462 containerd[1501]: time="2025-03-25T01:37:03.548423403Z" level=info msg="received exit event container_id:\"ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654\" id:\"ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654\" pid:4493 exited_at:{seconds:1742866623 nanos:548249639}" Mar 25 01:37:03.548648 containerd[1501]: time="2025-03-25T01:37:03.548471386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654\" id:\"ad163260114633fa933d4979f5b51283f4f0a6789f90668f90b558d8e2206654\" pid:4493 exited_at:{seconds:1742866623 nanos:548249639}" Mar 25 01:37:04.482386 containerd[1501]: time="2025-03-25T01:37:04.482167602Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:37:04.492455 containerd[1501]: time="2025-03-25T01:37:04.491046246Z" level=info msg="Container 1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:37:04.499777 containerd[1501]: time="2025-03-25T01:37:04.499743361Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0\"" Mar 25 01:37:04.500178 containerd[1501]: time="2025-03-25T01:37:04.500142207Z" level=info msg="StartContainer for \"1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0\"" Mar 25 01:37:04.501450 containerd[1501]: time="2025-03-25T01:37:04.501422468Z" level=info msg="connecting to shim 1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0" address="unix:///run/containerd/s/7925b0cb2971ff43595924543263733a6e833eb890444358990f36c49abbe067" protocol=ttrpc version=3 Mar 25 01:37:04.522021 
systemd[1]: Started cri-containerd-1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0.scope - libcontainer container 1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0. Mar 25 01:37:04.558616 systemd[1]: cri-containerd-1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0.scope: Deactivated successfully. Mar 25 01:37:04.560153 containerd[1501]: time="2025-03-25T01:37:04.560120460Z" level=info msg="received exit event container_id:\"1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0\" id:\"1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0\" pid:4538 exited_at:{seconds:1742866624 nanos:559946296}" Mar 25 01:37:04.560676 containerd[1501]: time="2025-03-25T01:37:04.560654857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0\" id:\"1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0\" pid:4538 exited_at:{seconds:1742866624 nanos:559946296}" Mar 25 01:37:04.562775 containerd[1501]: time="2025-03-25T01:37:04.562739123Z" level=info msg="StartContainer for \"1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0\" returns successfully" Mar 25 01:37:04.583352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1986d009042ef84182527958a5237147183ac30bc9719c90b0b8a3b1bb5e87d0-rootfs.mount: Deactivated successfully. 
Mar 25 01:37:05.487079 containerd[1501]: time="2025-03-25T01:37:05.486797386Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:37:05.502496 containerd[1501]: time="2025-03-25T01:37:05.502453145Z" level=info msg="Container c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:37:05.506359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451605212.mount: Deactivated successfully. Mar 25 01:37:05.509194 containerd[1501]: time="2025-03-25T01:37:05.509153584Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1\"" Mar 25 01:37:05.509798 containerd[1501]: time="2025-03-25T01:37:05.509560385Z" level=info msg="StartContainer for \"c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1\"" Mar 25 01:37:05.510335 containerd[1501]: time="2025-03-25T01:37:05.510313833Z" level=info msg="connecting to shim c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1" address="unix:///run/containerd/s/7925b0cb2971ff43595924543263733a6e833eb890444358990f36c49abbe067" protocol=ttrpc version=3 Mar 25 01:37:05.534069 systemd[1]: Started cri-containerd-c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1.scope - libcontainer container c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1. Mar 25 01:37:05.558108 systemd[1]: cri-containerd-c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1.scope: Deactivated successfully. 
Mar 25 01:37:05.559697 containerd[1501]: time="2025-03-25T01:37:05.559340205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1\" id:\"c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1\" pid:4577 exited_at:{seconds:1742866625 nanos:558989962}" Mar 25 01:37:05.559999 containerd[1501]: time="2025-03-25T01:37:05.559967681Z" level=info msg="received exit event container_id:\"c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1\" id:\"c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1\" pid:4577 exited_at:{seconds:1742866625 nanos:558989962}" Mar 25 01:37:05.567159 containerd[1501]: time="2025-03-25T01:37:05.567123374Z" level=info msg="StartContainer for \"c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1\" returns successfully" Mar 25 01:37:05.578589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c537f626e8eb732369e0b2e5daaec351dedfd576fc73659187f70e7b48ff96b1-rootfs.mount: Deactivated successfully. Mar 25 01:37:06.492426 containerd[1501]: time="2025-03-25T01:37:06.492378362Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:37:06.502485 containerd[1501]: time="2025-03-25T01:37:06.502443972Z" level=info msg="Container dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:37:06.506104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592130539.mount: Deactivated successfully. 
Mar 25 01:37:06.510477 containerd[1501]: time="2025-03-25T01:37:06.510439799Z" level=info msg="CreateContainer within sandbox \"5137abd8e43760518102e350815983531bb1976d31c50bca5fe8766b79872199\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c\"" Mar 25 01:37:06.510821 containerd[1501]: time="2025-03-25T01:37:06.510798739Z" level=info msg="StartContainer for \"dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c\"" Mar 25 01:37:06.511747 containerd[1501]: time="2025-03-25T01:37:06.511665582Z" level=info msg="connecting to shim dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c" address="unix:///run/containerd/s/7925b0cb2971ff43595924543263733a6e833eb890444358990f36c49abbe067" protocol=ttrpc version=3 Mar 25 01:37:06.535994 systemd[1]: Started cri-containerd-dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c.scope - libcontainer container dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c. 
Mar 25 01:37:06.569122 containerd[1501]: time="2025-03-25T01:37:06.569082560Z" level=info msg="StartContainer for \"dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c\" returns successfully" Mar 25 01:37:06.634797 containerd[1501]: time="2025-03-25T01:37:06.634754862Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c\" id:\"2db2b8e21f0dd0b7416b39044848eae527a079832cff9f7c94c5f3b570303e80\" pid:4644 exited_at:{seconds:1742866626 nanos:634522937}" Mar 25 01:37:06.963918 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 25 01:37:07.508987 kubelet[2612]: I0325 01:37:07.508932 2612 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ct85n" podStartSLOduration=5.508916225 podStartE2EDuration="5.508916225s" podCreationTimestamp="2025-03-25 01:37:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:37:07.508023222 +0000 UTC m=+80.301774977" watchObservedRunningTime="2025-03-25 01:37:07.508916225 +0000 UTC m=+80.302667980" Mar 25 01:37:08.959414 containerd[1501]: time="2025-03-25T01:37:08.959362882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c\" id:\"e54849c00d40997cfd743824671ffcfad3d8bb0bb553ff8a621d198091384608\" pid:4858 exit_status:1 exited_at:{seconds:1742866628 nanos:959093155}" Mar 25 01:37:09.966963 systemd-networkd[1425]: lxc_health: Link UP Mar 25 01:37:09.980969 systemd-networkd[1425]: lxc_health: Gained carrier Mar 25 01:37:11.066205 containerd[1501]: time="2025-03-25T01:37:11.066159801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c\" id:\"41f716c7dd1b823e85ec3d71d3e09709a8f1462427adbbbf9cf0cff33e3a28e2\" pid:5211 
exited_at:{seconds:1742866631 nanos:65617995}" Mar 25 01:37:11.585133 systemd-networkd[1425]: lxc_health: Gained IPv6LL Mar 25 01:37:13.190339 containerd[1501]: time="2025-03-25T01:37:13.190291346Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c\" id:\"b41a6c75afc07e641a3d130999f670cb0081684c218e24bd41924a3677351b4a\" pid:5245 exited_at:{seconds:1742866633 nanos:190035035}" Mar 25 01:37:15.291864 containerd[1501]: time="2025-03-25T01:37:15.291812612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c\" id:\"8e39353da5d315a8a2d9f48d135011e626a386bed20390aea6222846e8c44a2b\" pid:5271 exited_at:{seconds:1742866635 nanos:291509934}" Mar 25 01:37:17.375120 containerd[1501]: time="2025-03-25T01:37:17.375067370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbaf59177ab9ae5cf23b45077f9978cf7c694c23db4137da8ed09db9c6fbdc6c\" id:\"f7192125bc971e747c4bd995ce783d03a750539e06b5aba692fcef2ad372797f\" pid:5296 exited_at:{seconds:1742866637 nanos:374735056}" Mar 25 01:37:17.381611 sshd[4382]: Connection closed by 10.0.0.1 port 45570 Mar 25 01:37:17.382093 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Mar 25 01:37:17.386406 systemd[1]: sshd@26-10.0.0.139:22-10.0.0.1:45570.service: Deactivated successfully. Mar 25 01:37:17.388549 systemd[1]: session-27.scope: Deactivated successfully. Mar 25 01:37:17.389238 systemd-logind[1486]: Session 27 logged out. Waiting for processes to exit. Mar 25 01:37:17.390088 systemd-logind[1486]: Removed session 27.