Mar 17 19:06:50.058740 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 19:06:50.058764 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 19:06:50.058774 kernel: BIOS-provided physical RAM map:
Mar 17 19:06:50.058782 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 19:06:50.058789 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 19:06:50.058798 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 19:06:50.058807 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Mar 17 19:06:50.058815 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Mar 17 19:06:50.058822 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 19:06:50.058830 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 19:06:50.058838 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Mar 17 19:06:50.058845 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 19:06:50.058853 kernel: NX (Execute Disable) protection: active
Mar 17 19:06:50.058860 kernel: APIC: Static calls initialized
Mar 17 19:06:50.058871 kernel: SMBIOS 3.0.0 present.
Mar 17 19:06:50.058880 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Mar 17 19:06:50.058888 kernel: Hypervisor detected: KVM
Mar 17 19:06:50.058895 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 19:06:50.058903 kernel: kvm-clock: using sched offset of 3377326902 cycles
Mar 17 19:06:50.058913 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 19:06:50.058922 kernel: tsc: Detected 1996.249 MHz processor
Mar 17 19:06:50.058930 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 19:06:50.058939 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 19:06:50.058947 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Mar 17 19:06:50.058956 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 19:06:50.058964 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 19:06:50.058972 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Mar 17 19:06:50.058981 kernel: ACPI: Early table checksum verification disabled
Mar 17 19:06:50.058991 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Mar 17 19:06:50.058999 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:06:50.059008 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:06:50.059016 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:06:50.061751 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Mar 17 19:06:50.061761 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:06:50.061770 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 19:06:50.061779 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Mar 17 19:06:50.061788 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Mar 17 19:06:50.061801 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Mar 17 19:06:50.061810 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Mar 17 19:06:50.061819 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Mar 17 19:06:50.061831 kernel: No NUMA configuration found
Mar 17 19:06:50.061841 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Mar 17 19:06:50.061850 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Mar 17 19:06:50.061861 kernel: Zone ranges:
Mar 17 19:06:50.061871 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 19:06:50.061880 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 17 19:06:50.061889 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Mar 17 19:06:50.061898 kernel: Movable zone start for each node
Mar 17 19:06:50.061908 kernel: Early memory node ranges
Mar 17 19:06:50.061917 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 19:06:50.061926 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Mar 17 19:06:50.061937 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Mar 17 19:06:50.061946 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Mar 17 19:06:50.061956 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 19:06:50.061965 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 19:06:50.061974 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 17 19:06:50.061983 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 19:06:50.061993 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 19:06:50.062002 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 19:06:50.062011 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 19:06:50.062035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 19:06:50.062045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 19:06:50.062054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 19:06:50.062064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 19:06:50.062073 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 19:06:50.062082 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 19:06:50.062091 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 19:06:50.062100 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Mar 17 19:06:50.062109 kernel: Booting paravirtualized kernel on KVM
Mar 17 19:06:50.062121 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 19:06:50.062131 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 19:06:50.062140 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 19:06:50.062150 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 19:06:50.062158 kernel: pcpu-alloc: [0] 0 1
Mar 17 19:06:50.062168 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 19:06:50.062178 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 19:06:50.062188 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 19:06:50.062200 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 19:06:50.062209 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 19:06:50.062219 kernel: Fallback order for Node 0: 0
Mar 17 19:06:50.062228 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 17 19:06:50.062237 kernel: Policy zone: Normal
Mar 17 19:06:50.062246 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 19:06:50.062255 kernel: software IO TLB: area num 2.
Mar 17 19:06:50.062265 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 229356K reserved, 0K cma-reserved)
Mar 17 19:06:50.062274 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 19:06:50.062286 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 19:06:50.062295 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 19:06:50.062304 kernel: Dynamic Preempt: voluntary
Mar 17 19:06:50.062313 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 19:06:50.062324 kernel: rcu: RCU event tracing is enabled.
Mar 17 19:06:50.062333 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 19:06:50.062343 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 19:06:50.062352 kernel: Rude variant of Tasks RCU enabled.
Mar 17 19:06:50.062361 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 19:06:50.062370 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 19:06:50.062382 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 19:06:50.062391 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 19:06:50.062400 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 19:06:50.062410 kernel: Console: colour VGA+ 80x25
Mar 17 19:06:50.062419 kernel: printk: console [tty0] enabled
Mar 17 19:06:50.062428 kernel: printk: console [ttyS0] enabled
Mar 17 19:06:50.062437 kernel: ACPI: Core revision 20230628
Mar 17 19:06:50.062446 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 19:06:50.062455 kernel: x2apic enabled
Mar 17 19:06:50.062467 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 19:06:50.062476 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 19:06:50.062485 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 19:06:50.062495 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Mar 17 19:06:50.062504 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 19:06:50.062513 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 19:06:50.062522 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 19:06:50.062531 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 19:06:50.062541 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 19:06:50.062552 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 19:06:50.062561 kernel: Speculative Store Bypass: Vulnerable
Mar 17 19:06:50.062570 kernel: x86/fpu: x87 FPU will use FXSAVE
Mar 17 19:06:50.062580 kernel: Freeing SMP alternatives memory: 32K
Mar 17 19:06:50.062596 kernel: pid_max: default: 32768 minimum: 301
Mar 17 19:06:50.062607 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 19:06:50.062617 kernel: landlock: Up and running.
Mar 17 19:06:50.062627 kernel: SELinux: Initializing.
Mar 17 19:06:50.062636 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 19:06:50.062646 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 19:06:50.062656 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Mar 17 19:06:50.062668 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 19:06:50.062678 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 19:06:50.062687 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 19:06:50.062697 kernel: Performance Events: AMD PMU driver.
Mar 17 19:06:50.062707 kernel: ... version: 0
Mar 17 19:06:50.062718 kernel: ... bit width: 48
Mar 17 19:06:50.062727 kernel: ... generic registers: 4
Mar 17 19:06:50.062737 kernel: ... value mask: 0000ffffffffffff
Mar 17 19:06:50.062747 kernel: ... max period: 00007fffffffffff
Mar 17 19:06:50.062756 kernel: ... fixed-purpose events: 0
Mar 17 19:06:50.062766 kernel: ... event mask: 000000000000000f
Mar 17 19:06:50.062775 kernel: signal: max sigframe size: 1440
Mar 17 19:06:50.062785 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 19:06:50.062794 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 19:06:50.062806 kernel: smp: Bringing up secondary CPUs ...
Mar 17 19:06:50.062816 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 19:06:50.062825 kernel: .... node #0, CPUs: #1
Mar 17 19:06:50.062835 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 19:06:50.062844 kernel: smpboot: Max logical packages: 2
Mar 17 19:06:50.062854 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Mar 17 19:06:50.062863 kernel: devtmpfs: initialized
Mar 17 19:06:50.062873 kernel: x86/mm: Memory block size: 128MB
Mar 17 19:06:50.062883 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 19:06:50.062892 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 19:06:50.062904 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 19:06:50.062914 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 19:06:50.062924 kernel: audit: initializing netlink subsys (disabled)
Mar 17 19:06:50.062934 kernel: audit: type=2000 audit(1742238408.605:1): state=initialized audit_enabled=0 res=1
Mar 17 19:06:50.062943 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 19:06:50.062953 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 19:06:50.062962 kernel: cpuidle: using governor menu
Mar 17 19:06:50.062972 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 19:06:50.062982 kernel: dca service started, version 1.12.1
Mar 17 19:06:50.062993 kernel: PCI: Using configuration type 1 for base access
Mar 17 19:06:50.063003 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 19:06:50.063013 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 19:06:50.063036 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 19:06:50.063046 kernel: ACPI: Added _OSI(Module Device)
Mar 17 19:06:50.063055 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 19:06:50.063065 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 19:06:50.063075 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 19:06:50.063085 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 19:06:50.063113 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 19:06:50.063126 kernel: ACPI: Interpreter enabled
Mar 17 19:06:50.063139 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 19:06:50.063154 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 19:06:50.063171 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 19:06:50.063188 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 19:06:50.063204 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 19:06:50.063220 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 19:06:50.063469 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 19:06:50.063583 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 17 19:06:50.063684 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 17 19:06:50.063699 kernel: acpiphp: Slot [3] registered
Mar 17 19:06:50.063709 kernel: acpiphp: Slot [4] registered
Mar 17 19:06:50.063719 kernel: acpiphp: Slot [5] registered
Mar 17 19:06:50.063728 kernel: acpiphp: Slot [6] registered
Mar 17 19:06:50.063738 kernel: acpiphp: Slot [7] registered
Mar 17 19:06:50.063751 kernel: acpiphp: Slot [8] registered
Mar 17 19:06:50.063760 kernel: acpiphp: Slot [9] registered
Mar 17 19:06:50.063770 kernel: acpiphp: Slot [10] registered
Mar 17 19:06:50.063779 kernel: acpiphp: Slot [11] registered
Mar 17 19:06:50.063789 kernel: acpiphp: Slot [12] registered
Mar 17 19:06:50.063798 kernel: acpiphp: Slot [13] registered
Mar 17 19:06:50.063808 kernel: acpiphp: Slot [14] registered
Mar 17 19:06:50.063818 kernel: acpiphp: Slot [15] registered
Mar 17 19:06:50.063827 kernel: acpiphp: Slot [16] registered
Mar 17 19:06:50.063838 kernel: acpiphp: Slot [17] registered
Mar 17 19:06:50.063848 kernel: acpiphp: Slot [18] registered
Mar 17 19:06:50.063857 kernel: acpiphp: Slot [19] registered
Mar 17 19:06:50.063867 kernel: acpiphp: Slot [20] registered
Mar 17 19:06:50.063876 kernel: acpiphp: Slot [21] registered
Mar 17 19:06:50.063886 kernel: acpiphp: Slot [22] registered
Mar 17 19:06:50.063895 kernel: acpiphp: Slot [23] registered
Mar 17 19:06:50.063905 kernel: acpiphp: Slot [24] registered
Mar 17 19:06:50.063914 kernel: acpiphp: Slot [25] registered
Mar 17 19:06:50.063924 kernel: acpiphp: Slot [26] registered
Mar 17 19:06:50.063935 kernel: acpiphp: Slot [27] registered
Mar 17 19:06:50.063945 kernel: acpiphp: Slot [28] registered
Mar 17 19:06:50.063954 kernel: acpiphp: Slot [29] registered
Mar 17 19:06:50.063964 kernel: acpiphp: Slot [30] registered
Mar 17 19:06:50.063974 kernel: acpiphp: Slot [31] registered
Mar 17 19:06:50.063983 kernel: PCI host bridge to bus 0000:00
Mar 17 19:06:50.067190 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 19:06:50.067334 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 19:06:50.067430 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 19:06:50.067518 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 19:06:50.067605 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Mar 17 19:06:50.067689 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 19:06:50.067807 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 19:06:50.067913 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 19:06:50.068045 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 19:06:50.068146 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Mar 17 19:06:50.068238 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 19:06:50.068330 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 19:06:50.068422 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 19:06:50.068515 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 19:06:50.068617 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 19:06:50.068717 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 19:06:50.068811 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 19:06:50.068914 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 19:06:50.069009 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 19:06:50.073144 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 17 19:06:50.073244 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Mar 17 19:06:50.073342 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Mar 17 19:06:50.073444 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 19:06:50.073549 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 19:06:50.073647 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Mar 17 19:06:50.073743 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Mar 17 19:06:50.073838 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Mar 17 19:06:50.073932 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Mar 17 19:06:50.076063 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 17 19:06:50.076184 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 19:06:50.076287 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Mar 17 19:06:50.076387 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Mar 17 19:06:50.076497 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 19:06:50.076598 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Mar 17 19:06:50.076697 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Mar 17 19:06:50.076804 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 19:06:50.076907 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Mar 17 19:06:50.076999 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Mar 17 19:06:50.077169 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Mar 17 19:06:50.077185 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 19:06:50.077195 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 19:06:50.077204 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 19:06:50.077213 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 19:06:50.077223 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 19:06:50.077236 kernel: iommu: Default domain type: Translated
Mar 17 19:06:50.077245 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 19:06:50.077255 kernel: PCI: Using ACPI for IRQ routing
Mar 17 19:06:50.077264 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 19:06:50.077273 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 19:06:50.077282 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Mar 17 19:06:50.077374 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 19:06:50.077466 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 19:06:50.077562 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 19:06:50.077576 kernel: vgaarb: loaded
Mar 17 19:06:50.077586 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 19:06:50.077595 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 19:06:50.077604 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 19:06:50.077613 kernel: pnp: PnP ACPI init
Mar 17 19:06:50.077708 kernel: pnp 00:03: [dma 2]
Mar 17 19:06:50.077723 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 19:06:50.077732 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 19:06:50.077745 kernel: NET: Registered PF_INET protocol family
Mar 17 19:06:50.077754 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 19:06:50.077763 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 19:06:50.077772 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 19:06:50.077782 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 19:06:50.077791 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 19:06:50.077800 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 19:06:50.077809 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 19:06:50.077820 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 19:06:50.077829 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 19:06:50.077839 kernel: NET: Registered PF_XDP protocol family
Mar 17 19:06:50.077923 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 19:06:50.078007 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 19:06:50.078110 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 19:06:50.078193 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Mar 17 19:06:50.078276 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Mar 17 19:06:50.078372 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 19:06:50.078474 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 19:06:50.078488 kernel: PCI: CLS 0 bytes, default 64
Mar 17 19:06:50.078497 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 17 19:06:50.078507 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Mar 17 19:06:50.078516 kernel: Initialise system trusted keyrings
Mar 17 19:06:50.078525 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 19:06:50.078534 kernel: Key type asymmetric registered
Mar 17 19:06:50.078544 kernel: Asymmetric key parser 'x509' registered
Mar 17 19:06:50.078556 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 19:06:50.078565 kernel: io scheduler mq-deadline registered
Mar 17 19:06:50.078575 kernel: io scheduler kyber registered
Mar 17 19:06:50.078583 kernel: io scheduler bfq registered
Mar 17 19:06:50.078593 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 19:06:50.078602 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 19:06:50.078612 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 19:06:50.078621 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 19:06:50.078631 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 19:06:50.078641 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 19:06:50.078651 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 19:06:50.078660 kernel: random: crng init done
Mar 17 19:06:50.078669 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 19:06:50.078678 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 19:06:50.078687 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 19:06:50.078780 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 19:06:50.078868 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 19:06:50.078885 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 19:06:50.078968 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T19:06:49 UTC (1742238409)
Mar 17 19:06:50.081151 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 17 19:06:50.081169 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 17 19:06:50.081179 kernel: NET: Registered PF_INET6 protocol family
Mar 17 19:06:50.081189 kernel: Segment Routing with IPv6
Mar 17 19:06:50.081199 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 19:06:50.081208 kernel: NET: Registered PF_PACKET protocol family
Mar 17 19:06:50.081217 kernel: Key type dns_resolver registered
Mar 17 19:06:50.081230 kernel: IPI shorthand broadcast: enabled
Mar 17 19:06:50.081240 kernel: sched_clock: Marking stable (942007878, 173174107)->(1150431404, -35249419)
Mar 17 19:06:50.081249 kernel: registered taskstats version 1
Mar 17 19:06:50.081258 kernel: Loading compiled-in X.509 certificates
Mar 17 19:06:50.081268 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 19:06:50.081277 kernel: Key type .fscrypt registered
Mar 17 19:06:50.081286 kernel: Key type fscrypt-provisioning registered
Mar 17 19:06:50.081295 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 19:06:50.081304 kernel: ima: Allocated hash algorithm: sha1
Mar 17 19:06:50.081315 kernel: ima: No architecture policies found
Mar 17 19:06:50.081324 kernel: clk: Disabling unused clocks
Mar 17 19:06:50.081333 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 19:06:50.081342 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 19:06:50.081351 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 19:06:50.081360 kernel: Run /init as init process
Mar 17 19:06:50.081369 kernel: with arguments:
Mar 17 19:06:50.081378 kernel: /init
Mar 17 19:06:50.081387 kernel: with environment:
Mar 17 19:06:50.081398 kernel: HOME=/
Mar 17 19:06:50.081407 kernel: TERM=linux
Mar 17 19:06:50.081416 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 19:06:50.081426 systemd[1]: Successfully made /usr/ read-only.
Mar 17 19:06:50.081439 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 19:06:50.081449 systemd[1]: Detected virtualization kvm.
Mar 17 19:06:50.081459 systemd[1]: Detected architecture x86-64.
Mar 17 19:06:50.081471 systemd[1]: Running in initrd.
Mar 17 19:06:50.081480 systemd[1]: No hostname configured, using default hostname.
Mar 17 19:06:50.081490 systemd[1]: Hostname set to .
Mar 17 19:06:50.081500 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 19:06:50.081510 systemd[1]: Queued start job for default target initrd.target.
Mar 17 19:06:50.081519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 19:06:50.081530 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 19:06:50.081548 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 19:06:50.081560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 19:06:50.081571 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 19:06:50.081582 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 19:06:50.081593 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 19:06:50.081605 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 19:06:50.081615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 19:06:50.081625 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 19:06:50.081635 systemd[1]: Reached target paths.target - Path Units.
Mar 17 19:06:50.081645 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 19:06:50.081655 systemd[1]: Reached target swap.target - Swaps.
Mar 17 19:06:50.081665 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 19:06:50.081675 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 19:06:50.081685 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 19:06:50.081697 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 19:06:50.081707 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 19:06:50.081717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 19:06:50.081727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 19:06:50.081737 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 19:06:50.081747 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 19:06:50.081757 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 19:06:50.081768 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 19:06:50.081778 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 19:06:50.081789 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 19:06:50.081800 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 19:06:50.081810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 19:06:50.081820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 19:06:50.081830 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 19:06:50.081861 systemd-journald[185]: Collecting audit messages is disabled.
Mar 17 19:06:50.081888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 19:06:50.081902 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 19:06:50.081914 systemd-journald[185]: Journal started
Mar 17 19:06:50.081937 systemd-journald[185]: Runtime Journal (/run/log/journal/7238b9f7f36b4cce95bcf64be65091ba) is 8M, max 78.3M, 70.3M free.
Mar 17 19:06:50.090050 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 19:06:50.064537 systemd-modules-load[186]: Inserted module 'overlay'
Mar 17 19:06:50.142149 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 19:06:50.142191 kernel: Bridge firewalling registered
Mar 17 19:06:50.142213 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 19:06:50.103630 systemd-modules-load[186]: Inserted module 'br_netfilter'
Mar 17 19:06:50.142712 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 19:06:50.144152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 19:06:50.145833 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 19:06:50.153139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 19:06:50.156130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 19:06:50.157377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 19:06:50.162734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 19:06:50.183156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 19:06:50.186224 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 19:06:50.187584 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 19:06:50.195256 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 19:06:50.197436 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 19:06:50.218252 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 19:06:50.226724 systemd-resolved[220]: Positive Trust Anchors:
Mar 17 19:06:50.226740 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 19:06:50.226781 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 19:06:50.229431 systemd-resolved[220]: Defaulting to hostname 'linux'.
Mar 17 19:06:50.231695 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 19:06:50.234360 dracut-cmdline[222]: dracut-dracut-053
Mar 17 19:06:50.234360 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 19:06:50.233826 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 19:06:50.323068 kernel: SCSI subsystem initialized
Mar 17 19:06:50.335082 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 19:06:50.347516 kernel: iscsi: registered transport (tcp)
Mar 17 19:06:50.370644 kernel: iscsi: registered transport (qla4xxx)
Mar 17 19:06:50.370707 kernel: QLogic iSCSI HBA Driver
Mar 17 19:06:50.425039 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 19:06:50.432340 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 19:06:50.482215 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 19:06:50.482330 kernel: device-mapper: uevent: version 1.0.3
Mar 17 19:06:50.484325 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 19:06:50.564152 kernel: raid6: sse2x4 gen() 5157 MB/s
Mar 17 19:06:50.564265 kernel: raid6: sse2x2 gen() 5985 MB/s
Mar 17 19:06:50.580463 kernel: raid6: sse2x1 gen() 9597 MB/s
Mar 17 19:06:50.580524 kernel: raid6: using algorithm sse2x1 gen() 9597 MB/s
Mar 17 19:06:50.599509 kernel: raid6: .... xor() 7400 MB/s, rmw enabled
Mar 17 19:06:50.599577 kernel: raid6: using ssse3x2 recovery algorithm
Mar 17 19:06:50.621124 kernel: xor: measuring software checksum speed
Mar 17 19:06:50.621192 kernel: prefetch64-sse : 16819 MB/sec
Mar 17 19:06:50.623525 kernel: generic_sse : 16841 MB/sec
Mar 17 19:06:50.623598 kernel: xor: using function: generic_sse (16841 MB/sec)
Mar 17 19:06:50.800126 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 19:06:50.815865 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 19:06:50.826292 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 19:06:50.839575 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Mar 17 19:06:50.844670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 19:06:50.854291 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 19:06:50.879474 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Mar 17 19:06:50.920983 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 19:06:50.930267 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 19:06:50.975842 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 19:06:50.987321 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 19:06:51.012167 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 19:06:51.027976 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 19:06:51.032093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 19:06:51.032993 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 19:06:51.040146 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 19:06:51.059035 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Mar 17 19:06:51.114318 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Mar 17 19:06:51.114443 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 19:06:51.114458 kernel: GPT:17805311 != 20971519
Mar 17 19:06:51.114470 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 19:06:51.114482 kernel: GPT:17805311 != 20971519
Mar 17 19:06:51.114493 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 19:06:51.114504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 19:06:51.114516 kernel: libata version 3.00 loaded.
Mar 17 19:06:51.114527 kernel: ata_piix 0000:00:01.1: version 2.13
Mar 17 19:06:51.114661 kernel: scsi host0: ata_piix
Mar 17 19:06:51.114778 kernel: scsi host1: ata_piix
Mar 17 19:06:51.114895 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Mar 17 19:06:51.114909 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Mar 17 19:06:51.061595 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 19:06:51.117173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 19:06:51.117316 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 19:06:51.119419 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 19:06:51.120213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 19:06:51.120352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 19:06:51.121962 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 19:06:51.128292 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 19:06:51.180683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 19:06:51.187224 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 19:06:51.199600 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 19:06:51.320061 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467)
Mar 17 19:06:51.330075 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (457)
Mar 17 19:06:51.364630 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 19:06:51.377288 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 19:06:51.386876 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 19:06:51.387437 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 19:06:51.400188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 19:06:51.407158 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 19:06:51.417977 disk-uuid[514]: Primary Header is updated.
Mar 17 19:06:51.417977 disk-uuid[514]: Secondary Entries is updated.
Mar 17 19:06:51.417977 disk-uuid[514]: Secondary Header is updated.
Mar 17 19:06:51.427403 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 19:06:52.442141 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 19:06:52.442607 disk-uuid[515]: The operation has completed successfully.
Mar 17 19:06:52.525777 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 19:06:52.525964 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 19:06:52.579187 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 19:06:52.584713 sh[526]: Success
Mar 17 19:06:52.594056 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Mar 17 19:06:52.682262 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 19:06:52.684496 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 19:06:52.689199 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 19:06:52.708204 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 19:06:52.708277 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 19:06:52.708309 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 19:06:52.710470 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 19:06:52.712101 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 19:06:52.728787 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 19:06:52.730916 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 19:06:52.738339 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 19:06:52.751326 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 19:06:52.778205 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 19:06:52.778287 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 19:06:52.784359 kernel: BTRFS info (device vda6): using free space tree
Mar 17 19:06:52.797110 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 19:06:52.825997 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 19:06:52.825114 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 19:06:52.839977 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 19:06:52.846187 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 19:06:52.876737 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 19:06:52.882136 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 19:06:52.912728 systemd-networkd[709]: lo: Link UP
Mar 17 19:06:52.913383 systemd-networkd[709]: lo: Gained carrier
Mar 17 19:06:52.915190 systemd-networkd[709]: Enumeration completed
Mar 17 19:06:52.915840 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 19:06:52.916721 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 19:06:52.916725 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 19:06:52.917412 systemd[1]: Reached target network.target - Network.
Mar 17 19:06:52.919324 systemd-networkd[709]: eth0: Link UP
Mar 17 19:06:52.919328 systemd-networkd[709]: eth0: Gained carrier
Mar 17 19:06:52.919336 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 19:06:52.932115 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.57/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 17 19:06:52.994957 ignition[655]: Ignition 2.20.0
Mar 17 19:06:52.995775 ignition[655]: Stage: fetch-offline
Mar 17 19:06:52.996323 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Mar 17 19:06:52.996852 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:06:52.996951 ignition[655]: parsed url from cmdline: ""
Mar 17 19:06:52.996955 ignition[655]: no config URL provided
Mar 17 19:06:52.996961 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 19:06:52.996969 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Mar 17 19:06:52.999614 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 19:06:52.996974 ignition[655]: failed to fetch config: resource requires networking
Mar 17 19:06:52.997172 ignition[655]: Ignition finished successfully
Mar 17 19:06:53.006203 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 19:06:53.019679 ignition[721]: Ignition 2.20.0
Mar 17 19:06:53.019691 ignition[721]: Stage: fetch
Mar 17 19:06:53.019885 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Mar 17 19:06:53.019897 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:06:53.020013 ignition[721]: parsed url from cmdline: ""
Mar 17 19:06:53.020017 ignition[721]: no config URL provided
Mar 17 19:06:53.020045 ignition[721]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 19:06:53.020055 ignition[721]: no config at "/usr/lib/ignition/user.ign"
Mar 17 19:06:53.020236 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 17 19:06:53.020244 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 17 19:06:53.020251 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 17 19:06:53.287712 systemd-resolved[220]: Detected conflict on linux IN A 172.24.4.57
Mar 17 19:06:53.287739 systemd-resolved[220]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Mar 17 19:06:53.478311 ignition[721]: GET result: OK
Mar 17 19:06:53.478485 ignition[721]: parsing config with SHA512: 18ad3e7bf11961b78443679d98506e867166e6e1a71067869a6107ed871bc4abac279c2e6b0b59e875704e8012cb2fdd6f28e2666d8298f0df85c6ff0129351a
Mar 17 19:06:53.490175 unknown[721]: fetched base config from "system"
Mar 17 19:06:53.490200 unknown[721]: fetched base config from "system"
Mar 17 19:06:53.491272 ignition[721]: fetch: fetch complete
Mar 17 19:06:53.490215 unknown[721]: fetched user config from "openstack"
Mar 17 19:06:53.491285 ignition[721]: fetch: fetch passed
Mar 17 19:06:53.494546 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 19:06:53.491378 ignition[721]: Ignition finished successfully
Mar 17 19:06:53.507466 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 19:06:53.538868 ignition[727]: Ignition 2.20.0
Mar 17 19:06:53.538900 ignition[727]: Stage: kargs
Mar 17 19:06:53.539403 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Mar 17 19:06:53.539430 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:06:53.541792 ignition[727]: kargs: kargs passed
Mar 17 19:06:53.544110 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 19:06:53.541922 ignition[727]: Ignition finished successfully
Mar 17 19:06:53.556451 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 19:06:53.577119 ignition[734]: Ignition 2.20.0
Mar 17 19:06:53.577131 ignition[734]: Stage: disks
Mar 17 19:06:53.577312 ignition[734]: no configs at "/usr/lib/ignition/base.d"
Mar 17 19:06:53.579228 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 19:06:53.577324 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:06:53.580774 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 19:06:53.578277 ignition[734]: disks: disks passed
Mar 17 19:06:53.582162 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 19:06:53.578318 ignition[734]: Ignition finished successfully
Mar 17 19:06:53.583919 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 19:06:53.586010 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 19:06:53.588133 systemd[1]: Reached target basic.target - Basic System.
Mar 17 19:06:53.597332 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 19:06:53.617642 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 19:06:53.627270 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 19:06:53.710181 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 19:06:53.886546 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none.
Mar 17 19:06:53.886910 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 19:06:53.887885 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 19:06:53.901098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 19:06:53.905278 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 19:06:53.907712 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 19:06:53.909192 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 17 19:06:53.924888 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (750)
Mar 17 19:06:53.924937 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 19:06:53.924967 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 19:06:53.924997 kernel: BTRFS info (device vda6): using free space tree
Mar 17 19:06:53.921940 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 19:06:53.941953 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 19:06:53.921973 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 19:06:53.926490 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 19:06:53.943566 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 19:06:53.958269 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 19:06:54.093301 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 19:06:54.099906 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Mar 17 19:06:54.107996 initrd-setup-root[793]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 19:06:54.113769 initrd-setup-root[800]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 19:06:54.229957 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 19:06:54.239216 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 19:06:54.243431 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 19:06:54.259081 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 19:06:54.282353 ignition[868]: INFO : Ignition 2.20.0
Mar 17 19:06:54.283343 ignition[868]: INFO : Stage: mount
Mar 17 19:06:54.283931 ignition[868]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 19:06:54.283931 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:06:54.286296 ignition[868]: INFO : mount: mount passed
Mar 17 19:06:54.286296 ignition[868]: INFO : Ignition finished successfully
Mar 17 19:06:54.284787 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 19:06:54.287267 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 19:06:54.706638 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 19:06:54.884332 systemd-networkd[709]: eth0: Gained IPv6LL
Mar 17 19:07:01.141646 coreos-metadata[752]: Mar 17 19:07:01.141 WARN failed to locate config-drive, using the metadata service API instead
Mar 17 19:07:01.181842 coreos-metadata[752]: Mar 17 19:07:01.181 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 17 19:07:01.196701 coreos-metadata[752]: Mar 17 19:07:01.196 INFO Fetch successful
Mar 17 19:07:01.198209 coreos-metadata[752]: Mar 17 19:07:01.197 INFO wrote hostname ci-4230-1-0-c-fc9f5e1ee2.novalocal to /sysroot/etc/hostname
Mar 17 19:07:01.200510 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 17 19:07:01.200762 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 17 19:07:01.213247 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 19:07:01.234341 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 19:07:01.265093 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (885)
Mar 17 19:07:01.274306 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 19:07:01.274362 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 19:07:01.278438 kernel: BTRFS info (device vda6): using free space tree
Mar 17 19:07:01.289091 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 19:07:01.294487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 19:07:01.339930 ignition[903]: INFO : Ignition 2.20.0
Mar 17 19:07:01.339930 ignition[903]: INFO : Stage: files
Mar 17 19:07:01.342457 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 19:07:01.342457 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:07:01.346218 ignition[903]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 19:07:01.346218 ignition[903]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 19:07:01.346218 ignition[903]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 19:07:01.351914 ignition[903]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 19:07:01.353601 ignition[903]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 19:07:01.355443 ignition[903]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 19:07:01.353942 unknown[903]: wrote ssh authorized keys file for user: core
Mar 17 19:07:01.358962 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 19:07:01.361274 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 19:07:02.914521 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 19:07:08.049996 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 19:07:08.049996 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 19:07:08.049996 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 19:07:08.692231 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 19:07:09.113806 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 19:07:09.113806 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 19:07:09.118604 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 19:07:09.581380 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 19:07:11.257443 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 19:07:11.257443 ignition[903]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 19:07:11.262892 ignition[903]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 19:07:11.262892 ignition[903]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 19:07:11.262892 ignition[903]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 19:07:11.262892 ignition[903]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 19:07:11.262892 ignition[903]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 19:07:11.262892 ignition[903]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 19:07:11.262892 ignition[903]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 19:07:11.262892 ignition[903]: INFO : files: files passed
Mar 17 19:07:11.262892 ignition[903]: INFO : Ignition finished successfully
Mar 17 19:07:11.262301 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 19:07:11.276472 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 19:07:11.280148 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 19:07:11.282214 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 19:07:11.282295 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 19:07:11.297359 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 19:07:11.297359 initrd-setup-root-after-ignition[932]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 19:07:11.303205 initrd-setup-root-after-ignition[936]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 19:07:11.315932 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 19:07:11.317396 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 19:07:11.327392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 19:07:11.379406 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 19:07:11.379633 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 19:07:11.383217 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 19:07:11.385572 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 19:07:11.388481 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 19:07:11.395354 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 19:07:11.433670 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 19:07:11.442299 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 19:07:11.477547 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 19:07:11.479312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 19:07:11.482634 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 19:07:11.485492 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 19:07:11.485780 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 19:07:11.488708 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 19:07:11.490516 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 19:07:11.493359 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 19:07:11.495926 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 19:07:11.498477 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 19:07:11.501415 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 19:07:11.504301 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 19:07:11.507307 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 19:07:11.510153 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 19:07:11.513187 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 19:07:11.515885 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 19:07:11.516223 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 19:07:11.519371 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 19:07:11.521355 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 19:07:11.523670 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 19:07:11.524430 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 19:07:11.526697 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 19:07:11.526980 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 19:07:11.530880 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 19:07:11.531271 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 19:07:11.534194 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 19:07:11.534469 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 19:07:11.545553 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 19:07:11.547547 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 19:07:11.547974 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 19:07:11.558539 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 19:07:11.559807 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 19:07:11.561336 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 19:07:11.563343 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 19:07:11.563734 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 19:07:11.578279 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 19:07:11.579036 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 19:07:11.587688 ignition[956]: INFO : Ignition 2.20.0
Mar 17 19:07:11.589815 ignition[956]: INFO : Stage: umount
Mar 17 19:07:11.589815 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 19:07:11.589815 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 19:07:11.589815 ignition[956]: INFO : umount: umount passed
Mar 17 19:07:11.589815 ignition[956]: INFO : Ignition finished successfully
Mar 17 19:07:11.590978 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 19:07:11.591137 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 19:07:11.592661 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 19:07:11.592731 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 19:07:11.593711 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 19:07:11.593755 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 19:07:11.594670 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 19:07:11.594712 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 19:07:11.595663 systemd[1]: Stopped target network.target - Network.
Mar 17 19:07:11.596554 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 19:07:11.596602 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 19:07:11.597594 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 19:07:11.598483 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 19:07:11.604079 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 19:07:11.604738 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 19:07:11.608318 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 19:07:11.609335 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 19:07:11.609370 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 19:07:11.610301 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 19:07:11.610335 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 19:07:11.611270 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 19:07:11.611313 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 19:07:11.613435 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 19:07:11.613481 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 19:07:11.614531 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 19:07:11.615833 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 19:07:11.618774 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 19:07:11.623696 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 19:07:11.623819 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 19:07:11.625834 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 19:07:11.625923 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 19:07:11.629093 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 19:07:11.629280 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 19:07:11.629376 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 19:07:11.635092 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 19:07:11.636368 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 19:07:11.636426 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 19:07:11.637515 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 19:07:11.637564 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 19:07:11.643124 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 19:07:11.643621 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 19:07:11.643676 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 19:07:11.644243 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 19:07:11.644285 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 19:07:11.645108 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 19:07:11.645150 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 19:07:11.645958 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 19:07:11.646000 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 19:07:11.647605 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 19:07:11.649427 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 19:07:11.649493 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 19:07:11.657424 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 19:07:11.657595 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 19:07:11.658649 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 19:07:11.658734 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 19:07:11.660329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 19:07:11.660375 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 19:07:11.661329 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 19:07:11.661362 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 19:07:11.662334 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 19:07:11.662381 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 19:07:11.664072 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 19:07:11.664116 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 19:07:11.665270 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 19:07:11.665315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 19:07:11.672203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 19:07:11.673341 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 19:07:11.673399 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 19:07:11.675450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 19:07:11.675494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 19:07:11.679641 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 19:07:11.679713 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 19:07:11.680104 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 19:07:11.680206 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 19:07:11.681549 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 19:07:11.691232 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 19:07:11.698078 systemd[1]: Switching root.
Mar 17 19:07:11.731203 systemd-journald[185]: Journal stopped
Mar 17 19:07:13.548299 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Mar 17 19:07:13.548420 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 19:07:13.548454 kernel: SELinux: policy capability open_perms=1
Mar 17 19:07:13.548477 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 19:07:13.548498 kernel: SELinux: policy capability always_check_network=0
Mar 17 19:07:13.548526 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 19:07:13.548550 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 19:07:13.548571 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 19:07:13.548593 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 19:07:13.548617 systemd[1]: Successfully loaded SELinux policy in 46.205ms.
Mar 17 19:07:13.548655 kernel: audit: type=1403 audit(1742238432.412:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 19:07:13.548679 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.443ms.
Mar 17 19:07:13.548705 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 19:07:13.548729 systemd[1]: Detected virtualization kvm.
Mar 17 19:07:13.548757 systemd[1]: Detected architecture x86-64.
Mar 17 19:07:13.548780 systemd[1]: Detected first boot.
Mar 17 19:07:13.548803 systemd[1]: Hostname set to .
Mar 17 19:07:13.548827 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 19:07:13.548850 zram_generator::config[1000]: No configuration found.
Mar 17 19:07:13.548874 kernel: Guest personality initialized and is inactive
Mar 17 19:07:13.548896 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 19:07:13.548918 kernel: Initialized host personality
Mar 17 19:07:13.548949 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 19:07:13.548971 systemd[1]: Populated /etc with preset unit settings.
Mar 17 19:07:13.548998 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 19:07:13.549053 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 19:07:13.549080 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 19:07:13.549103 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 19:07:13.549128 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 19:07:13.549152 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 19:07:13.549180 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 19:07:13.549204 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 19:07:13.549228 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 19:07:13.549252 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 19:07:13.549276 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 19:07:13.549299 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 19:07:13.549322 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 19:07:13.549345 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 19:07:13.549369 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 19:07:13.549395 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 19:07:13.549420 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 19:07:13.549444 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 19:07:13.549467 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 19:07:13.549490 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 19:07:13.549513 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 19:07:13.549539 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 19:07:13.549563 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 19:07:13.549586 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 19:07:13.549610 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 19:07:13.549634 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 19:07:13.549657 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 19:07:13.549679 systemd[1]: Reached target swap.target - Swaps.
Mar 17 19:07:13.549702 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 19:07:13.549725 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 19:07:13.549751 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 19:07:13.549779 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 19:07:13.549802 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 19:07:13.549824 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 19:07:13.549847 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 19:07:13.549871 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 19:07:13.549894 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 19:07:13.549917 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 19:07:13.549941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 19:07:13.549967 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 19:07:13.549990 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 19:07:13.550015 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 19:07:13.550093 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 19:07:13.550119 systemd[1]: Reached target machines.target - Containers.
Mar 17 19:07:13.550143 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 19:07:13.550166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 19:07:13.550190 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 19:07:13.550213 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 19:07:13.550241 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 19:07:13.550264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 19:07:13.550287 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 19:07:13.550313 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 19:07:13.550336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 19:07:13.550360 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 19:07:13.550383 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 19:07:13.550406 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 19:07:13.550433 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 19:07:13.550456 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 19:07:13.550480 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 19:07:13.550504 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 19:07:13.550526 kernel: fuse: init (API version 7.39)
Mar 17 19:07:13.550547 kernel: loop: module loaded
Mar 17 19:07:13.550570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 19:07:13.550593 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 19:07:13.550617 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 19:07:13.550644 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 19:07:13.550667 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 19:07:13.550691 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 19:07:13.550714 systemd[1]: Stopped verity-setup.service.
Mar 17 19:07:13.550742 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 19:07:13.550765 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 19:07:13.550789 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 19:07:13.550815 kernel: ACPI: bus type drm_connector registered
Mar 17 19:07:13.550837 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 19:07:13.550879 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 19:07:13.550906 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 19:07:13.550929 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 19:07:13.550953 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 19:07:13.550976 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 19:07:13.550999 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 19:07:13.551067 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 19:07:13.551095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 19:07:13.551118 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 19:07:13.551147 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 19:07:13.551171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 19:07:13.551195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 19:07:13.551219 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 19:07:13.551243 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 19:07:13.551267 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 19:07:13.551290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 19:07:13.551313 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 19:07:13.551337 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 19:07:13.551363 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 19:07:13.551387 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 19:07:13.551444 systemd-journald[1090]: Collecting audit messages is disabled.
Mar 17 19:07:13.551493 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 19:07:13.551518 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 19:07:13.551543 systemd-journald[1090]: Journal started
Mar 17 19:07:13.554061 systemd-journald[1090]: Runtime Journal (/run/log/journal/7238b9f7f36b4cce95bcf64be65091ba) is 8M, max 78.3M, 70.3M free.
Mar 17 19:07:13.554120 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 19:07:13.083093 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 19:07:13.091275 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 19:07:13.091769 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 19:07:13.562076 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 19:07:13.567066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 19:07:13.594055 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 19:07:13.600050 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 19:07:13.613440 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 19:07:13.613511 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 19:07:13.622054 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 19:07:13.626046 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 19:07:13.626588 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 19:07:13.627366 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 19:07:13.628184 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 19:07:13.628931 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 19:07:13.629586 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 19:07:13.630290 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 19:07:13.631005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 19:07:13.643823 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 19:07:13.649616 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 19:07:13.655195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 19:07:13.661186 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 19:07:13.662415 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 19:07:13.667224 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 19:07:13.679079 udevadm[1146]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 19:07:13.707731 systemd-journald[1090]: Time spent on flushing to /var/log/journal/7238b9f7f36b4cce95bcf64be65091ba is 109.645ms for 967 entries.
Mar 17 19:07:13.707731 systemd-journald[1090]: System Journal (/var/log/journal/7238b9f7f36b4cce95bcf64be65091ba) is 8M, max 584.8M, 576.8M free.
Mar 17 19:07:13.956186 systemd-journald[1090]: Received client request to flush runtime journal.
Mar 17 19:07:13.956237 kernel: loop0: detected capacity change from 0 to 138176
Mar 17 19:07:13.776528 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 19:07:13.779236 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 19:07:13.788322 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 19:07:13.802848 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 19:07:13.930440 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 19:07:13.941785 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 19:07:13.966849 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 19:07:13.964986 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 19:07:13.967800 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 19:07:13.977295 systemd-tmpfiles[1153]: ACLs are not supported, ignoring.
Mar 17 19:07:13.977313 systemd-tmpfiles[1153]: ACLs are not supported, ignoring.
Mar 17 19:07:13.982163 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 19:07:13.989512 kernel: loop1: detected capacity change from 0 to 8
Mar 17 19:07:14.011063 kernel: loop2: detected capacity change from 0 to 147912
Mar 17 19:07:14.076254 kernel: loop3: detected capacity change from 0 to 210664
Mar 17 19:07:14.094235 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 19:07:14.146172 kernel: loop4: detected capacity change from 0 to 138176
Mar 17 19:07:14.220065 kernel: loop5: detected capacity change from 0 to 8
Mar 17 19:07:14.224249 kernel: loop6: detected capacity change from 0 to 147912
Mar 17 19:07:14.273088 kernel: loop7: detected capacity change from 0 to 210664
Mar 17 19:07:14.328704 (sd-merge)[1167]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 17 19:07:14.329213 (sd-merge)[1167]: Merged extensions into '/usr'.
Mar 17 19:07:14.337138 systemd[1]: Reload requested from client PID 1119 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 19:07:14.337151 systemd[1]: Reloading...
Mar 17 19:07:14.408052 zram_generator::config[1192]: No configuration found.
Mar 17 19:07:14.602718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 19:07:14.684905 systemd[1]: Reloading finished in 347 ms.
Mar 17 19:07:14.696443 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 19:07:14.697572 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 19:07:14.706177 systemd[1]: Starting ensure-sysext.service...
Mar 17 19:07:14.709159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 19:07:14.713105 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 19:07:14.738093 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)...
Mar 17 19:07:14.738110 systemd[1]: Reloading...
Mar 17 19:07:14.744513 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 19:07:14.745605 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 19:07:14.747568 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 19:07:14.747877 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Mar 17 19:07:14.747945 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Mar 17 19:07:14.754922 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 19:07:14.754931 systemd-tmpfiles[1252]: Skipping /boot
Mar 17 19:07:14.774572 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 19:07:14.774583 systemd-tmpfiles[1252]: Skipping /boot
Mar 17 19:07:14.794578 systemd-udevd[1253]: Using default interface naming scheme 'v255'.
Mar 17 19:07:14.819123 ldconfig[1116]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 19:07:14.833055 zram_generator::config[1282]: No configuration found.
Mar 17 19:07:14.954073 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1307)
Mar 17 19:07:15.028410 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 19:07:15.057099 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 19:07:15.074054 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 17 19:07:15.083868 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 19:07:15.120704 kernel: ACPI: button: Power Button [PWRF]
Mar 17 19:07:15.132056 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 19:07:15.158335 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 19:07:15.158389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 19:07:15.159215 systemd[1]: Reloading finished in 420 ms.
Mar 17 19:07:15.175050 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Mar 17 19:07:15.178053 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Mar 17 19:07:15.178815 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 19:07:15.181884 kernel: Console: switching to colour dummy device 80x25
Mar 17 19:07:15.183183 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 19:07:15.183218 kernel: [drm] features: -context_init
Mar 17 19:07:15.184111 kernel: [drm] number of scanouts: 1
Mar 17 19:07:15.184145 kernel: [drm] number of cap sets: 0
Mar 17 19:07:15.186270 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 19:07:15.188053 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Mar 17 19:07:15.192461 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 17 19:07:15.192531 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 19:07:15.211718 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 19:07:15.209630 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 19:07:15.241782 systemd[1]: Finished ensure-sysext.service.
Mar 17 19:07:15.251241 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 19:07:15.265006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 19:07:15.271170 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 19:07:15.276184 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 19:07:15.276397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 19:07:15.280178 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 19:07:15.282220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 19:07:15.284803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 19:07:15.290513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 19:07:15.293252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 19:07:15.293447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 19:07:15.297120 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 19:07:15.297207 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 19:07:15.304193 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 19:07:15.312850 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 19:07:15.321213 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 19:07:15.326136 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 19:07:15.331681 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 19:07:15.334189 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 19:07:15.341251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 19:07:15.341359 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 19:07:15.342242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 19:07:15.342411 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 19:07:15.342704 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 19:07:15.342842 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 19:07:15.343351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 19:07:15.343505 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 19:07:15.345884 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 19:07:15.346142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 19:07:15.357597 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 19:07:15.357662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 19:07:15.366295 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 19:07:15.384067 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 19:07:15.388682 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 19:07:15.390632 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 19:07:15.396855 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 19:07:15.403190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 19:07:15.415270 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 19:07:15.419198 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 19:07:15.428370 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 19:07:15.453211 augenrules[1420]: No rules
Mar 17 19:07:15.452874 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 19:07:15.453634 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 19:07:15.456923 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 19:07:15.465067 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 19:07:15.466719 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 19:07:15.513418 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 19:07:15.516739 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 19:07:15.540573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 19:07:15.571480 systemd-networkd[1386]: lo: Link UP
Mar 17 19:07:15.571491 systemd-networkd[1386]: lo: Gained carrier
Mar 17 19:07:15.572720 systemd-networkd[1386]: Enumeration completed
Mar 17 19:07:15.572810 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 19:07:15.578158 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 19:07:15.578171 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 19:07:15.578700 systemd-networkd[1386]: eth0: Link UP
Mar 17 19:07:15.578708 systemd-networkd[1386]: eth0: Gained carrier
Mar 17 19:07:15.578723 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 19:07:15.583214 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 19:07:15.590202 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 19:07:15.594102 systemd-networkd[1386]: eth0: DHCPv4 address 172.24.4.57/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 17 19:07:15.611690 systemd-resolved[1389]: Positive Trust Anchors:
Mar 17 19:07:15.611705 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 19:07:15.611746 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 19:07:15.617417 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 19:07:15.619254 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 19:07:15.621796 systemd-resolved[1389]: Using system hostname 'ci-4230-1-0-c-fc9f5e1ee2.novalocal'.
Mar 17 19:07:15.622661 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 19:07:15.625530 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 19:07:15.628228 systemd[1]: Reached target network.target - Network.
Mar 17 19:07:15.630527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 19:07:15.632834 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 19:07:15.635086 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 19:07:15.637484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 19:07:15.639926 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 19:07:15.642276 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 19:07:15.643598 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 19:07:15.644792 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 19:07:15.644890 systemd[1]: Reached target paths.target - Path Units.
Mar 17 19:07:15.646214 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 19:07:15.650162 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 19:07:15.653989 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 19:07:15.662905 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 19:07:15.666271 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 19:07:15.668036 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 19:07:15.677705 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 19:07:15.682670 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 19:07:15.684839 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 19:07:15.688195 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 19:07:15.688840 systemd[1]: Reached target basic.target - Basic System.
Mar 17 19:07:15.690359 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 19:07:15.690393 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 19:07:15.701172 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 19:07:15.706714 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 19:07:15.717253 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 19:07:15.721603 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 19:07:15.730480 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 19:07:15.734831 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 19:07:15.736584 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 19:07:15.741297 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 19:07:15.743944 jq[1451]: false
Mar 17 19:07:15.751210 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 19:07:15.758294 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found loop4
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found loop5
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found loop6
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found loop7
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found vda
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found vda1
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found vda2
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found vda3
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found usr
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found vda4
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found vda6
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found vda7
Mar 17 19:07:15.770059 extend-filesystems[1452]: Found vda9
Mar 17 19:07:15.770059 extend-filesystems[1452]: Checking size of /dev/vda9
Mar 17 19:07:15.918862 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Mar 17 19:07:15.918908 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Mar 17 19:07:15.918961 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1302)
Mar 17 19:07:15.779670 dbus-daemon[1450]: [system] SELinux support is enabled
Mar 17 19:07:15.771743 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 19:07:15.927204 extend-filesystems[1452]: Resized partition /dev/vda9
Mar 17 19:07:15.779935 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 19:07:15.933765 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024)
Mar 17 19:07:15.933765 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 19:07:15.933765 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 19:07:15.933765 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Mar 17 19:07:15.782182 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 19:07:15.963258 extend-filesystems[1452]: Resized filesystem in /dev/vda9
Mar 17 19:07:15.965882 update_engine[1467]: I20250317 19:07:15.819460 1467 main.cc:92] Flatcar Update Engine starting
Mar 17 19:07:15.965882 update_engine[1467]: I20250317 19:07:15.837126 1467 update_check_scheduler.cc:74] Next update check in 6m47s
Mar 17 19:07:15.791402 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 19:07:15.966356 jq[1470]: true
Mar 17 19:07:15.812131 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 19:07:15.825788 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 19:07:15.966728 jq[1479]: true
Mar 17 19:07:15.834964 systemd-timesyncd[1390]: Contacted time server 23.149.208.4:123 (0.flatcar.pool.ntp.org).
Mar 17 19:07:15.835015 systemd-timesyncd[1390]: Initial clock synchronization to Mon 2025-03-17 19:07:15.803656 UTC.
Mar 17 19:07:15.840383 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 19:07:15.840700 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 19:07:15.840967 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 19:07:15.841408 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 19:07:15.878112 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 19:07:15.878321 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 19:07:15.898645 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 19:07:15.899099 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 19:07:15.919096 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 19:07:15.980548 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 19:07:15.981052 systemd-logind[1460]: New seat seat0.
Mar 17 19:07:15.993728 tar[1478]: linux-amd64/helm
Mar 17 19:07:15.993084 systemd-logind[1460]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 19:07:15.993102 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 19:07:15.993379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 19:07:15.998128 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 19:07:15.998164 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 19:07:15.998666 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 19:07:15.998682 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 19:07:16.011203 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 19:07:16.011920 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 19:07:16.039090 bash[1506]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 19:07:16.040790 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 19:07:16.051281 systemd[1]: Starting sshkeys.service...
Mar 17 19:07:16.095113 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 19:07:16.119327 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 19:07:16.147999 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 19:07:16.169393 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 19:07:16.183708 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 19:07:16.192348 systemd[1]: Started sshd@0-172.24.4.57:22-172.24.4.1:51558.service - OpenSSH per-connection server daemon (172.24.4.1:51558).
Mar 17 19:07:16.207360 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 19:07:16.215327 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 19:07:16.215552 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 19:07:16.231219 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 19:07:16.255811 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 19:07:16.271698 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 19:07:16.283498 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 19:07:16.286076 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 19:07:16.410477 containerd[1480]: time="2025-03-17T19:07:16.410401161Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 19:07:16.441293 containerd[1480]: time="2025-03-17T19:07:16.441198925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 19:07:16.442794 containerd[1480]: time="2025-03-17T19:07:16.442760898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 19:07:16.442794 containerd[1480]: time="2025-03-17T19:07:16.442791695Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 19:07:16.442856 containerd[1480]: time="2025-03-17T19:07:16.442811614Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 19:07:16.443014 containerd[1480]: time="2025-03-17T19:07:16.442989251Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 19:07:16.443066 containerd[1480]: time="2025-03-17T19:07:16.443015559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443128 containerd[1480]: time="2025-03-17T19:07:16.443101252Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443156 containerd[1480]: time="2025-03-17T19:07:16.443125290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443352 containerd[1480]: time="2025-03-17T19:07:16.443325425Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443352 containerd[1480]: time="2025-03-17T19:07:16.443348894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443415 containerd[1480]: time="2025-03-17T19:07:16.443364533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443415 containerd[1480]: time="2025-03-17T19:07:16.443376532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443480 containerd[1480]: time="2025-03-17T19:07:16.443456785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443696 containerd[1480]: time="2025-03-17T19:07:16.443671480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443824 containerd[1480]: time="2025-03-17T19:07:16.443799970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 19:07:16.443859 containerd[1480]: time="2025-03-17T19:07:16.443821549Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 19:07:16.444016 containerd[1480]: time="2025-03-17T19:07:16.443909992Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 19:07:16.444016 containerd[1480]: time="2025-03-17T19:07:16.443968968Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 19:07:16.451865 containerd[1480]: time="2025-03-17T19:07:16.451446445Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 19:07:16.451865 containerd[1480]: time="2025-03-17T19:07:16.451494351Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 19:07:16.451865 containerd[1480]: time="2025-03-17T19:07:16.451511330Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 19:07:16.451865 containerd[1480]: time="2025-03-17T19:07:16.451528529Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 19:07:16.451865 containerd[1480]: time="2025-03-17T19:07:16.451545818Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 19:07:16.451865 containerd[1480]: time="2025-03-17T19:07:16.451665949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 19:07:16.452014 containerd[1480]: time="2025-03-17T19:07:16.451889861Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 19:07:16.452014 containerd[1480]: time="2025-03-17T19:07:16.451984564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 19:07:16.452014 containerd[1480]: time="2025-03-17T19:07:16.452003753Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 19:07:16.452232 containerd[1480]: time="2025-03-17T19:07:16.452189429Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 19:07:16.452232 containerd[1480]: time="2025-03-17T19:07:16.452220607Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 19:07:16.452295 containerd[1480]: time="2025-03-17T19:07:16.452236086Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 19:07:16.452295 containerd[1480]: time="2025-03-17T19:07:16.452251054Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 19:07:16.452295 containerd[1480]: time="2025-03-17T19:07:16.452267334Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 19:07:16.452295 containerd[1480]: time="2025-03-17T19:07:16.452288072Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 19:07:16.452377 containerd[1480]: time="2025-03-17T19:07:16.452307450Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 19:07:16.452377 containerd[1480]: time="2025-03-17T19:07:16.452322399Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 19:07:16.452377 containerd[1480]: time="2025-03-17T19:07:16.452335048Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 19:07:16.452377 containerd[1480]: time="2025-03-17T19:07:16.452357187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452377 containerd[1480]: time="2025-03-17T19:07:16.452374055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452485 containerd[1480]: time="2025-03-17T19:07:16.452390564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452485 containerd[1480]: time="2025-03-17T19:07:16.452406563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452485 containerd[1480]: time="2025-03-17T19:07:16.452419732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452485 containerd[1480]: time="2025-03-17T19:07:16.452433921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452485 containerd[1480]: time="2025-03-17T19:07:16.452447730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452485 containerd[1480]: time="2025-03-17T19:07:16.452461739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452485 containerd[1480]: time="2025-03-17T19:07:16.452476088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452493927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452507366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452520175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452535743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452551582Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452572801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452587110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452600218Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452643915Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 19:07:16.452665 containerd[1480]: time="2025-03-17T19:07:16.452662724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 19:07:16.453662 containerd[1480]: time="2025-03-17T19:07:16.452676203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 19:07:16.453662 containerd[1480]: time="2025-03-17T19:07:16.452690611Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 19:07:16.453662 containerd[1480]: time="2025-03-17T19:07:16.452700981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.453662 containerd[1480]: time="2025-03-17T19:07:16.452713810Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 19:07:16.453662 containerd[1480]: time="2025-03-17T19:07:16.452724769Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 19:07:16.453662 containerd[1480]: time="2025-03-17T19:07:16.452736528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 19:07:16.453800 containerd[1480]: time="2025-03-17T19:07:16.453054924Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 19:07:16.453800 containerd[1480]: time="2025-03-17T19:07:16.453115040Z" level=info msg="Connect containerd service"
Mar 17 19:07:16.453800 containerd[1480]: time="2025-03-17T19:07:16.453152587Z" level=info msg="using legacy CRI server"
Mar 17 19:07:16.453800 containerd[1480]: time="2025-03-17T19:07:16.453160426Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 19:07:16.453800 containerd[1480]: time="2025-03-17T19:07:16.453269688Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 19:07:16.454014 containerd[1480]: time="2025-03-17T19:07:16.453888481Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 19:07:16.455356 containerd[1480]: time="2025-03-17T19:07:16.454209118Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 19:07:16.455356 containerd[1480]: time="2025-03-17T19:07:16.454254564Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 19:07:16.455356 containerd[1480]: time="2025-03-17T19:07:16.454286712Z" level=info msg="Start subscribing containerd event"
Mar 17 19:07:16.455356 containerd[1480]: time="2025-03-17T19:07:16.454320709Z" level=info msg="Start recovering state"
Mar 17 19:07:16.455356 containerd[1480]: time="2025-03-17T19:07:16.454371725Z" level=info msg="Start event monitor"
Mar 17 19:07:16.455356 containerd[1480]: time="2025-03-17T19:07:16.454383504Z" level=info msg="Start snapshots syncer"
Mar 17 19:07:16.455356 containerd[1480]: time="2025-03-17T19:07:16.454391864Z" level=info msg="Start cni network conf syncer for default"
Mar 17 19:07:16.455356 containerd[1480]: time="2025-03-17T19:07:16.454400443Z" level=info msg="Start streaming server"
Mar 17 19:07:16.454546 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 19:07:16.462740 containerd[1480]: time="2025-03-17T19:07:16.462700559Z" level=info msg="containerd successfully booted in 0.053724s"
Mar 17 19:07:16.621389 tar[1478]: linux-amd64/LICENSE
Mar 17 19:07:16.621545 tar[1478]: linux-amd64/README.md
Mar 17 19:07:16.633961 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 19:07:17.160742 sshd[1525]: Accepted publickey for core from 172.24.4.1 port 51558 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg Mar 17 19:07:17.165344 sshd-session[1525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 19:07:17.198899 systemd-logind[1460]: New session 1 of user core. Mar 17 19:07:17.203823 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 19:07:17.215722 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 19:07:17.220429 systemd-networkd[1386]: eth0: Gained IPv6LL Mar 17 19:07:17.229614 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 19:07:17.239405 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 19:07:17.259327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 19:07:17.263373 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 19:07:17.274936 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 19:07:17.294431 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 19:07:17.311772 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 19:07:17.316916 systemd-logind[1460]: New session c1 of user core. Mar 17 19:07:17.318215 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 19:07:17.487559 systemd[1557]: Queued start job for default target default.target. Mar 17 19:07:17.495889 systemd[1557]: Created slice app.slice - User Application Slice. Mar 17 19:07:17.495911 systemd[1557]: Reached target paths.target - Paths. Mar 17 19:07:17.496043 systemd[1557]: Reached target timers.target - Timers. Mar 17 19:07:17.497288 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 17 19:07:17.516291 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 19:07:17.516402 systemd[1557]: Reached target sockets.target - Sockets. Mar 17 19:07:17.516446 systemd[1557]: Reached target basic.target - Basic System. Mar 17 19:07:17.516486 systemd[1557]: Reached target default.target - Main User Target. Mar 17 19:07:17.516511 systemd[1557]: Startup finished in 183ms. Mar 17 19:07:17.517189 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 19:07:17.527332 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 19:07:18.014702 systemd[1]: Started sshd@1-172.24.4.57:22-172.24.4.1:51568.service - OpenSSH per-connection server daemon (172.24.4.1:51568). Mar 17 19:07:18.752116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 19:07:18.763221 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 19:07:19.695439 sshd[1574]: Accepted publickey for core from 172.24.4.1 port 51568 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg Mar 17 19:07:19.697780 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 19:07:19.709016 systemd-logind[1460]: New session 2 of user core. Mar 17 19:07:19.718595 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 19:07:20.088452 kubelet[1581]: E0317 19:07:20.087895 1581 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 19:07:20.093193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 19:07:20.093521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 19:07:20.094649 systemd[1]: kubelet.service: Consumed 1.809s CPU time, 245.6M memory peak. Mar 17 19:07:20.423469 sshd[1590]: Connection closed by 172.24.4.1 port 51568 Mar 17 19:07:20.425622 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Mar 17 19:07:20.442844 systemd[1]: sshd@1-172.24.4.57:22-172.24.4.1:51568.service: Deactivated successfully. Mar 17 19:07:20.446412 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 19:07:20.450392 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Mar 17 19:07:20.455798 systemd[1]: Started sshd@2-172.24.4.57:22-172.24.4.1:51576.service - OpenSSH per-connection server daemon (172.24.4.1:51576). Mar 17 19:07:20.464624 systemd-logind[1460]: Removed session 2. Mar 17 19:07:21.351256 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 19:07:21.364202 login[1538]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 19:07:21.369939 systemd-logind[1460]: New session 3 of user core. Mar 17 19:07:21.378575 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 19:07:21.388392 systemd-logind[1460]: New session 4 of user core. Mar 17 19:07:21.397483 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 17 19:07:21.630324 sshd[1596]: Accepted publickey for core from 172.24.4.1 port 51576 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg Mar 17 19:07:21.633951 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 19:07:21.647242 systemd-logind[1460]: New session 5 of user core. Mar 17 19:07:21.653508 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 19:07:22.235166 sshd[1621]: Connection closed by 172.24.4.1 port 51576 Mar 17 19:07:22.236230 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Mar 17 19:07:22.242774 systemd[1]: sshd@2-172.24.4.57:22-172.24.4.1:51576.service: Deactivated successfully. Mar 17 19:07:22.246766 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 19:07:22.250444 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Mar 17 19:07:22.252908 systemd-logind[1460]: Removed session 5. Mar 17 19:07:22.779605 coreos-metadata[1447]: Mar 17 19:07:22.779 WARN failed to locate config-drive, using the metadata service API instead Mar 17 19:07:22.826885 coreos-metadata[1447]: Mar 17 19:07:22.826 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 17 19:07:22.988302 coreos-metadata[1447]: Mar 17 19:07:22.988 INFO Fetch successful Mar 17 19:07:22.988302 coreos-metadata[1447]: Mar 17 19:07:22.988 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 19:07:23.004581 coreos-metadata[1447]: Mar 17 19:07:23.004 INFO Fetch successful Mar 17 19:07:23.004767 coreos-metadata[1447]: Mar 17 19:07:23.004 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 17 19:07:23.020011 coreos-metadata[1447]: Mar 17 19:07:23.019 INFO Fetch successful Mar 17 19:07:23.020249 coreos-metadata[1447]: Mar 17 19:07:23.019 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 17 19:07:23.034182 coreos-metadata[1447]: Mar 17 
19:07:23.033 INFO Fetch successful Mar 17 19:07:23.034182 coreos-metadata[1447]: Mar 17 19:07:23.034 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 17 19:07:23.048236 coreos-metadata[1447]: Mar 17 19:07:23.048 INFO Fetch successful Mar 17 19:07:23.048236 coreos-metadata[1447]: Mar 17 19:07:23.048 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 17 19:07:23.062820 coreos-metadata[1447]: Mar 17 19:07:23.062 INFO Fetch successful Mar 17 19:07:23.107223 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 19:07:23.109474 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 19:07:23.182286 coreos-metadata[1510]: Mar 17 19:07:23.182 WARN failed to locate config-drive, using the metadata service API instead Mar 17 19:07:23.223961 coreos-metadata[1510]: Mar 17 19:07:23.223 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 17 19:07:23.239767 coreos-metadata[1510]: Mar 17 19:07:23.239 INFO Fetch successful Mar 17 19:07:23.239922 coreos-metadata[1510]: Mar 17 19:07:23.239 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 19:07:23.252893 coreos-metadata[1510]: Mar 17 19:07:23.252 INFO Fetch successful Mar 17 19:07:23.258605 unknown[1510]: wrote ssh authorized keys file for user: core Mar 17 19:07:23.301358 update-ssh-keys[1639]: Updated "/home/core/.ssh/authorized_keys" Mar 17 19:07:23.302511 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 19:07:23.308171 systemd[1]: Finished sshkeys.service. Mar 17 19:07:23.311301 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 19:07:23.314157 systemd[1]: Startup finished in 1.161s (kernel) + 22.622s (initrd) + 10.947s (userspace) = 34.732s. 
Mar 17 19:07:30.281238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 19:07:30.288409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 19:07:30.610071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 19:07:30.613583 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 19:07:30.681240 kubelet[1650]: E0317 19:07:30.681089 1650 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 19:07:30.688223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 19:07:30.688560 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 19:07:30.689242 systemd[1]: kubelet.service: Consumed 261ms CPU time, 95.7M memory peak. Mar 17 19:07:32.257636 systemd[1]: Started sshd@3-172.24.4.57:22-172.24.4.1:45856.service - OpenSSH per-connection server daemon (172.24.4.1:45856). Mar 17 19:07:33.589356 sshd[1660]: Accepted publickey for core from 172.24.4.1 port 45856 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg Mar 17 19:07:33.592000 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 19:07:33.604140 systemd-logind[1460]: New session 6 of user core. Mar 17 19:07:33.613335 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 19:07:34.326175 sshd[1662]: Connection closed by 172.24.4.1 port 45856 Mar 17 19:07:34.327147 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Mar 17 19:07:34.344824 systemd[1]: sshd@3-172.24.4.57:22-172.24.4.1:45856.service: Deactivated successfully. 
Mar 17 19:07:34.348296 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 19:07:34.350272 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Mar 17 19:07:34.362613 systemd[1]: Started sshd@4-172.24.4.57:22-172.24.4.1:49816.service - OpenSSH per-connection server daemon (172.24.4.1:49816). Mar 17 19:07:34.367234 systemd-logind[1460]: Removed session 6. Mar 17 19:07:35.664662 sshd[1667]: Accepted publickey for core from 172.24.4.1 port 49816 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg Mar 17 19:07:35.667437 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 19:07:35.678312 systemd-logind[1460]: New session 7 of user core. Mar 17 19:07:35.690357 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 19:07:36.299066 sshd[1670]: Connection closed by 172.24.4.1 port 49816 Mar 17 19:07:36.299989 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Mar 17 19:07:36.322704 systemd[1]: sshd@4-172.24.4.57:22-172.24.4.1:49816.service: Deactivated successfully. Mar 17 19:07:36.326544 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 19:07:36.328525 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Mar 17 19:07:36.343638 systemd[1]: Started sshd@5-172.24.4.57:22-172.24.4.1:49832.service - OpenSSH per-connection server daemon (172.24.4.1:49832). Mar 17 19:07:36.347175 systemd-logind[1460]: Removed session 7. Mar 17 19:07:37.645155 sshd[1675]: Accepted publickey for core from 172.24.4.1 port 49832 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg Mar 17 19:07:37.647900 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 19:07:37.663121 systemd-logind[1460]: New session 8 of user core. Mar 17 19:07:37.668399 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 17 19:07:38.415491 sshd[1678]: Connection closed by 172.24.4.1 port 49832 Mar 17 19:07:38.416576 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Mar 17 19:07:38.433407 systemd[1]: sshd@5-172.24.4.57:22-172.24.4.1:49832.service: Deactivated successfully. Mar 17 19:07:38.437113 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 19:07:38.441351 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. Mar 17 19:07:38.450638 systemd[1]: Started sshd@6-172.24.4.57:22-172.24.4.1:49834.service - OpenSSH per-connection server daemon (172.24.4.1:49834). Mar 17 19:07:38.454006 systemd-logind[1460]: Removed session 8. Mar 17 19:07:39.659990 sshd[1683]: Accepted publickey for core from 172.24.4.1 port 49834 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg Mar 17 19:07:39.662996 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 19:07:39.674726 systemd-logind[1460]: New session 9 of user core. Mar 17 19:07:39.685341 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 19:07:40.161844 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 19:07:40.162592 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 19:07:40.180292 sudo[1687]: pam_unix(sudo:session): session closed for user root Mar 17 19:07:40.392082 sshd[1686]: Connection closed by 172.24.4.1 port 49834 Mar 17 19:07:40.392539 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Mar 17 19:07:40.410842 systemd[1]: sshd@6-172.24.4.57:22-172.24.4.1:49834.service: Deactivated successfully. Mar 17 19:07:40.414848 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 19:07:40.417159 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. 
Mar 17 19:07:40.430685 systemd[1]: Started sshd@7-172.24.4.57:22-172.24.4.1:49838.service - OpenSSH per-connection server daemon (172.24.4.1:49838). Mar 17 19:07:40.432494 systemd-logind[1460]: Removed session 9. Mar 17 19:07:40.781748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 19:07:40.792402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 19:07:41.107304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 19:07:41.122620 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 19:07:41.210104 kubelet[1703]: E0317 19:07:41.209979 1703 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 19:07:41.215341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 19:07:41.215890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 19:07:41.216704 systemd[1]: kubelet.service: Consumed 281ms CPU time, 97.9M memory peak. Mar 17 19:07:41.614281 sshd[1692]: Accepted publickey for core from 172.24.4.1 port 49838 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg Mar 17 19:07:41.617104 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 19:07:41.630145 systemd-logind[1460]: New session 10 of user core. Mar 17 19:07:41.638343 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 17 19:07:42.074311 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 19:07:42.074934 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 19:07:42.083919 sudo[1713]: pam_unix(sudo:session): session closed for user root Mar 17 19:07:42.096228 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 19:07:42.096861 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 19:07:42.125691 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 19:07:42.185848 augenrules[1735]: No rules Mar 17 19:07:42.187125 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 19:07:42.187566 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 19:07:42.189940 sudo[1712]: pam_unix(sudo:session): session closed for user root Mar 17 19:07:42.384380 sshd[1711]: Connection closed by 172.24.4.1 port 49838 Mar 17 19:07:42.384418 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Mar 17 19:07:42.403554 systemd[1]: sshd@7-172.24.4.57:22-172.24.4.1:49838.service: Deactivated successfully. Mar 17 19:07:42.407639 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 19:07:42.409975 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Mar 17 19:07:42.426107 systemd[1]: Started sshd@8-172.24.4.57:22-172.24.4.1:49850.service - OpenSSH per-connection server daemon (172.24.4.1:49850). Mar 17 19:07:42.428795 systemd-logind[1460]: Removed session 10. 
Mar 17 19:07:43.583145 sshd[1743]: Accepted publickey for core from 172.24.4.1 port 49850 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg Mar 17 19:07:43.585889 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 19:07:43.596279 systemd-logind[1460]: New session 11 of user core. Mar 17 19:07:43.605331 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 19:07:44.021469 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 19:07:44.022221 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 19:07:44.728403 (dockerd)[1764]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 19:07:44.729421 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 19:07:45.508548 dockerd[1764]: time="2025-03-17T19:07:45.508448145Z" level=info msg="Starting up" Mar 17 19:07:45.771165 dockerd[1764]: time="2025-03-17T19:07:45.770806268Z" level=info msg="Loading containers: start." Mar 17 19:07:45.959240 kernel: Initializing XFRM netlink socket Mar 17 19:07:46.119612 systemd-networkd[1386]: docker0: Link UP Mar 17 19:07:46.158196 dockerd[1764]: time="2025-03-17T19:07:46.158105450Z" level=info msg="Loading containers: done." Mar 17 19:07:46.190163 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2595543978-merged.mount: Deactivated successfully. 
Mar 17 19:07:46.193125 dockerd[1764]: time="2025-03-17T19:07:46.192749322Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 19:07:46.193125 dockerd[1764]: time="2025-03-17T19:07:46.192914897Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 19:07:46.193400 dockerd[1764]: time="2025-03-17T19:07:46.193137371Z" level=info msg="Daemon has completed initialization" Mar 17 19:07:46.252516 dockerd[1764]: time="2025-03-17T19:07:46.250617788Z" level=info msg="API listen on /run/docker.sock" Mar 17 19:07:46.253223 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 19:07:48.610592 containerd[1480]: time="2025-03-17T19:07:48.610165659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 19:07:49.326546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121833532.mount: Deactivated successfully. Mar 17 19:07:51.280570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 19:07:51.286356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 19:07:51.452425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 19:07:51.456306 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 19:07:52.130381 kubelet[2020]: E0317 19:07:52.130260 2020 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 19:07:52.135255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 19:07:52.135682 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 19:07:52.136780 systemd[1]: kubelet.service: Consumed 205ms CPU time, 98M memory peak. Mar 17 19:07:52.189266 containerd[1480]: time="2025-03-17T19:07:52.189008164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 19:07:52.192474 containerd[1480]: time="2025-03-17T19:07:52.192014769Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674581" Mar 17 19:07:52.194007 containerd[1480]: time="2025-03-17T19:07:52.193798194Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 19:07:52.204174 containerd[1480]: time="2025-03-17T19:07:52.203915880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 19:07:52.208071 containerd[1480]: time="2025-03-17T19:07:52.207377683Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id 
\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 3.597128997s" Mar 17 19:07:52.208071 containerd[1480]: time="2025-03-17T19:07:52.207483111Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 19:07:52.260993 containerd[1480]: time="2025-03-17T19:07:52.260908550Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 19:07:54.627102 containerd[1480]: time="2025-03-17T19:07:54.627049389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 19:07:54.628799 containerd[1480]: time="2025-03-17T19:07:54.628740397Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619780" Mar 17 19:07:54.634052 containerd[1480]: time="2025-03-17T19:07:54.633919121Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 19:07:54.639226 containerd[1480]: time="2025-03-17T19:07:54.639185967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 19:07:54.640657 containerd[1480]: time="2025-03-17T19:07:54.640505731Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 2.379093021s" Mar 17 19:07:54.640657 containerd[1480]: time="2025-03-17T19:07:54.640541411Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 19:07:54.672302 containerd[1480]: time="2025-03-17T19:07:54.672249445Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 19:07:56.202112 containerd[1480]: time="2025-03-17T19:07:56.201834704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 19:07:56.204303 containerd[1480]: time="2025-03-17T19:07:56.204250710Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903317" Mar 17 19:07:56.208044 containerd[1480]: time="2025-03-17T19:07:56.207382122Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 19:07:56.211336 containerd[1480]: time="2025-03-17T19:07:56.211313816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 19:07:56.212594 containerd[1480]: time="2025-03-17T19:07:56.212553859Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.540071807s" Mar 17 19:07:56.212650 
containerd[1480]: time="2025-03-17T19:07:56.212595321Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 19:07:56.237191 containerd[1480]: time="2025-03-17T19:07:56.237125985Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 19:07:57.739899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250253782.mount: Deactivated successfully.
Mar 17 19:07:58.741766 containerd[1480]: time="2025-03-17T19:07:58.741560545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:07:58.743927 containerd[1480]: time="2025-03-17T19:07:58.743809392Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185380"
Mar 17 19:07:58.746160 containerd[1480]: time="2025-03-17T19:07:58.746003723Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:07:58.751507 containerd[1480]: time="2025-03-17T19:07:58.751402791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:07:58.754339 containerd[1480]: time="2025-03-17T19:07:58.753197043Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 2.516013379s"
Mar 17 19:07:58.754339 containerd[1480]: time="2025-03-17T19:07:58.753280218Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 19:07:58.804593 containerd[1480]: time="2025-03-17T19:07:58.804467002Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 19:07:59.437491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313476987.mount: Deactivated successfully.
Mar 17 19:08:00.618078 containerd[1480]: time="2025-03-17T19:08:00.617289680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:00.619414 containerd[1480]: time="2025-03-17T19:08:00.619373631Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Mar 17 19:08:00.621310 containerd[1480]: time="2025-03-17T19:08:00.621271836Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:00.624943 containerd[1480]: time="2025-03-17T19:08:00.624908432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:00.626770 containerd[1480]: time="2025-03-17T19:08:00.626737895Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.822193078s"
Mar 17 19:08:00.627359 containerd[1480]: time="2025-03-17T19:08:00.627340226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 19:08:00.653932 containerd[1480]: time="2025-03-17T19:08:00.653888406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 19:08:01.213275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796751544.mount: Deactivated successfully.
Mar 17 19:08:01.223543 containerd[1480]: time="2025-03-17T19:08:01.223285199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:01.225587 containerd[1480]: time="2025-03-17T19:08:01.225503102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Mar 17 19:08:01.226774 containerd[1480]: time="2025-03-17T19:08:01.226644191Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:01.232482 containerd[1480]: time="2025-03-17T19:08:01.232346998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:01.235016 containerd[1480]: time="2025-03-17T19:08:01.234772839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 580.57678ms"
Mar 17 19:08:01.235016 containerd[1480]: time="2025-03-17T19:08:01.234842542Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 19:08:01.289126 containerd[1480]: time="2025-03-17T19:08:01.288837079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 19:08:01.587264 update_engine[1467]: I20250317 19:08:01.587126 1467 update_attempter.cc:509] Updating boot flags...
Mar 17 19:08:01.664197 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2124)
Mar 17 19:08:01.726188 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2123)
Mar 17 19:08:01.804078 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2123)
Mar 17 19:08:01.940525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308337365.mount: Deactivated successfully.
Mar 17 19:08:02.280096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 19:08:02.289329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 19:08:04.271316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 19:08:04.282804 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 19:08:04.372462 kubelet[2152]: E0317 19:08:04.372342 2152 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 19:08:04.377425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 19:08:04.377754 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 19:08:04.378768 systemd[1]: kubelet.service: Consumed 279ms CPU time, 95.8M memory peak.
Mar 17 19:08:05.923107 containerd[1480]: time="2025-03-17T19:08:05.922348773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:05.924718 containerd[1480]: time="2025-03-17T19:08:05.924406393Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579"
Mar 17 19:08:05.926181 containerd[1480]: time="2025-03-17T19:08:05.926150020Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:05.930495 containerd[1480]: time="2025-03-17T19:08:05.930461961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:05.934963 containerd[1480]: time="2025-03-17T19:08:05.934918442Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.646034459s"
Mar 17 19:08:05.935093 containerd[1480]: time="2025-03-17T19:08:05.935074231Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 19:08:10.185721 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 19:08:10.186229 systemd[1]: kubelet.service: Consumed 279ms CPU time, 95.8M memory peak.
Mar 17 19:08:10.197551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 19:08:10.227688 systemd[1]: Reload requested from client PID 2256 ('systemctl') (unit session-11.scope)...
Mar 17 19:08:10.227702 systemd[1]: Reloading...
Mar 17 19:08:10.327952 zram_generator::config[2302]: No configuration found.
Mar 17 19:08:10.505601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 19:08:10.629985 systemd[1]: Reloading finished in 401 ms.
Mar 17 19:08:10.704631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 19:08:10.715356 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 19:08:10.729869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 19:08:10.732876 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 19:08:10.733495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 19:08:10.733601 systemd[1]: kubelet.service: Consumed 113ms CPU time, 87.3M memory peak.
Mar 17 19:08:10.739645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 19:08:10.846270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 19:08:10.866611 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 19:08:11.102015 kubelet[2375]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 19:08:11.102015 kubelet[2375]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 19:08:11.102015 kubelet[2375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 19:08:11.103305 kubelet[2375]: I0317 19:08:11.102013 2375 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 19:08:11.850586 kubelet[2375]: I0317 19:08:11.850488 2375 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 19:08:11.850586 kubelet[2375]: I0317 19:08:11.850531 2375 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 19:08:11.850878 kubelet[2375]: I0317 19:08:11.850809 2375 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 19:08:11.884994 kubelet[2375]: I0317 19:08:11.884941 2375 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 19:08:11.885833 kubelet[2375]: E0317 19:08:11.885420 2375 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:11.905619 kubelet[2375]: I0317 19:08:11.903715 2375 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 19:08:11.905619 kubelet[2375]: I0317 19:08:11.904155 2375 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 19:08:11.905619 kubelet[2375]: I0317 19:08:11.904195 2375 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-c-fc9f5e1ee2.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 19:08:11.907923 kubelet[2375]: I0317 19:08:11.907237 2375 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 19:08:11.907923 kubelet[2375]: I0317 19:08:11.907297 2375 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 19:08:11.907923 kubelet[2375]: I0317 19:08:11.907596 2375 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 19:08:11.909728 kubelet[2375]: I0317 19:08:11.909680 2375 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 19:08:11.909930 kubelet[2375]: I0317 19:08:11.909903 2375 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 19:08:11.910183 kubelet[2375]: I0317 19:08:11.910158 2375 kubelet.go:312] "Adding apiserver pod source"
Mar 17 19:08:11.911430 kubelet[2375]: I0317 19:08:11.911384 2375 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 19:08:11.921546 kubelet[2375]: W0317 19:08:11.920977 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-c-fc9f5e1ee2.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:11.921546 kubelet[2375]: E0317 19:08:11.921136 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-c-fc9f5e1ee2.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:11.921546 kubelet[2375]: W0317 19:08:11.921243 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:11.921546 kubelet[2375]: E0317 19:08:11.921298 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:11.922179 kubelet[2375]: I0317 19:08:11.921925 2375 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 19:08:11.925769 kubelet[2375]: I0317 19:08:11.925698 2375 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 19:08:11.925893 kubelet[2375]: W0317 19:08:11.925787 2375 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 19:08:11.927557 kubelet[2375]: I0317 19:08:11.927303 2375 server.go:1264] "Started kubelet"
Mar 17 19:08:11.946445 kubelet[2375]: I0317 19:08:11.946291 2375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 19:08:11.951443 kubelet[2375]: E0317 19:08:11.951205 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.57:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.57:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-0-c-fc9f5e1ee2.novalocal.182dacad795647d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-0-c-fc9f5e1ee2.novalocal,UID:ci-4230-1-0-c-fc9f5e1ee2.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-0-c-fc9f5e1ee2.novalocal,},FirstTimestamp:2025-03-17 19:08:11.927267282 +0000 UTC m=+1.052959862,LastTimestamp:2025-03-17 19:08:11.927267282 +0000 UTC m=+1.052959862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-0-c-fc9f5e1ee2.novalocal,}"
Mar 17 19:08:11.952077 kubelet[2375]: I0317 19:08:11.951842 2375 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 19:08:11.953082 kubelet[2375]: I0317 19:08:11.953008 2375 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 19:08:11.954036 kubelet[2375]: I0317 19:08:11.953978 2375 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 19:08:11.954260 kubelet[2375]: I0317 19:08:11.954246 2375 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 19:08:11.955497 kubelet[2375]: I0317 19:08:11.954685 2375 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 19:08:11.955497 kubelet[2375]: I0317 19:08:11.954873 2375 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 19:08:11.955497 kubelet[2375]: I0317 19:08:11.955005 2375 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 19:08:11.955719 kubelet[2375]: W0317 19:08:11.955637 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:11.955765 kubelet[2375]: E0317 19:08:11.955744 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:11.957427 kubelet[2375]: E0317 19:08:11.956956 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-c-fc9f5e1ee2.novalocal?timeout=10s\": dial tcp 172.24.4.57:6443: connect: connection refused" interval="200ms"
Mar 17 19:08:11.958985 kubelet[2375]: I0317 19:08:11.958940 2375 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 19:08:11.960313 kubelet[2375]: E0317 19:08:11.960271 2375 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 19:08:11.960795 kubelet[2375]: I0317 19:08:11.960757 2375 factory.go:221] Registration of the containerd container factory successfully
Mar 17 19:08:11.960795 kubelet[2375]: I0317 19:08:11.960794 2375 factory.go:221] Registration of the systemd container factory successfully
Mar 17 19:08:11.976358 kubelet[2375]: I0317 19:08:11.976223 2375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 19:08:11.979880 kubelet[2375]: I0317 19:08:11.979513 2375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 19:08:11.979880 kubelet[2375]: I0317 19:08:11.979559 2375 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 19:08:11.979880 kubelet[2375]: I0317 19:08:11.979579 2375 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 19:08:11.979880 kubelet[2375]: E0317 19:08:11.979620 2375 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 19:08:11.986292 kubelet[2375]: W0317 19:08:11.986240 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:11.986577 kubelet[2375]: E0317 19:08:11.986562 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:11.989519 kubelet[2375]: I0317 19:08:11.989493 2375 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 19:08:11.989673 kubelet[2375]: I0317 19:08:11.989649 2375 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 19:08:11.989963 kubelet[2375]: I0317 19:08:11.989746 2375 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 19:08:11.993732 kubelet[2375]: I0317 19:08:11.993719 2375 policy_none.go:49] "None policy: Start"
Mar 17 19:08:11.994404 kubelet[2375]: I0317 19:08:11.994390 2375 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 19:08:11.994493 kubelet[2375]: I0317 19:08:11.994484 2375 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 19:08:12.002903 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 19:08:12.011437 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 19:08:12.016754 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 19:08:12.029249 kubelet[2375]: I0317 19:08:12.028772 2375 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 19:08:12.029249 kubelet[2375]: I0317 19:08:12.028993 2375 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 19:08:12.029249 kubelet[2375]: I0317 19:08:12.029133 2375 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 19:08:12.031919 kubelet[2375]: E0317 19:08:12.031898 2375 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" not found"
Mar 17 19:08:12.057418 kubelet[2375]: I0317 19:08:12.057339 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.058103 kubelet[2375]: E0317 19:08:12.058001 2375 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.57:6443/api/v1/nodes\": dial tcp 172.24.4.57:6443: connect: connection refused" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.080949 kubelet[2375]: I0317 19:08:12.080811 2375 topology_manager.go:215] "Topology Admit Handler" podUID="a79bfc0bc82587ff8ce62c29258d94df" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.086107 kubelet[2375]: I0317 19:08:12.085656 2375 topology_manager.go:215] "Topology Admit Handler" podUID="ce5820534d390545fa5b4dafd45c5861" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.090557 kubelet[2375]: I0317 19:08:12.090168 2375 topology_manager.go:215] "Topology Admit Handler" podUID="a5c78f151e5e46d433109a5f724f0b12" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.108666 systemd[1]: Created slice kubepods-burstable-poda79bfc0bc82587ff8ce62c29258d94df.slice - libcontainer container kubepods-burstable-poda79bfc0bc82587ff8ce62c29258d94df.slice.
Mar 17 19:08:12.133626 systemd[1]: Created slice kubepods-burstable-podce5820534d390545fa5b4dafd45c5861.slice - libcontainer container kubepods-burstable-podce5820534d390545fa5b4dafd45c5861.slice.
Mar 17 19:08:12.141480 systemd[1]: Created slice kubepods-burstable-poda5c78f151e5e46d433109a5f724f0b12.slice - libcontainer container kubepods-burstable-poda5c78f151e5e46d433109a5f724f0b12.slice.
Mar 17 19:08:12.157681 kubelet[2375]: E0317 19:08:12.157642 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-c-fc9f5e1ee2.novalocal?timeout=10s\": dial tcp 172.24.4.57:6443: connect: connection refused" interval="400ms"
Mar 17 19:08:12.257702 kubelet[2375]: I0317 19:08:12.257368 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a79bfc0bc82587ff8ce62c29258d94df-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"a79bfc0bc82587ff8ce62c29258d94df\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.257702 kubelet[2375]: I0317 19:08:12.257468 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a79bfc0bc82587ff8ce62c29258d94df-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"a79bfc0bc82587ff8ce62c29258d94df\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.257702 kubelet[2375]: I0317 19:08:12.257520 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a79bfc0bc82587ff8ce62c29258d94df-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"a79bfc0bc82587ff8ce62c29258d94df\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.257702 kubelet[2375]: I0317 19:08:12.257570 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.257702 kubelet[2375]: I0317 19:08:12.257738 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.260601 kubelet[2375]: I0317 19:08:12.257805 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.260601 kubelet[2375]: I0317 19:08:12.257859 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.260601 kubelet[2375]: I0317 19:08:12.257917 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.260601 kubelet[2375]: I0317 19:08:12.257960 2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5c78f151e5e46d433109a5f724f0b12-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"a5c78f151e5e46d433109a5f724f0b12\") " pod="kube-system/kube-scheduler-ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.263819 kubelet[2375]: I0317 19:08:12.263166 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.263819 kubelet[2375]: E0317 19:08:12.263725 2375 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.57:6443/api/v1/nodes\": dial tcp 172.24.4.57:6443: connect: connection refused" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.426359 containerd[1480]: time="2025-03-17T19:08:12.426163888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal,Uid:a79bfc0bc82587ff8ce62c29258d94df,Namespace:kube-system,Attempt:0,}"
Mar 17 19:08:12.440262 containerd[1480]: time="2025-03-17T19:08:12.440177665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal,Uid:ce5820534d390545fa5b4dafd45c5861,Namespace:kube-system,Attempt:0,}"
Mar 17 19:08:12.445922 containerd[1480]: time="2025-03-17T19:08:12.445856609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-c-fc9f5e1ee2.novalocal,Uid:a5c78f151e5e46d433109a5f724f0b12,Namespace:kube-system,Attempt:0,}"
Mar 17 19:08:12.559352 kubelet[2375]: E0317 19:08:12.559251 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-c-fc9f5e1ee2.novalocal?timeout=10s\": dial tcp 172.24.4.57:6443: connect: connection refused" interval="800ms"
Mar 17 19:08:12.668015 kubelet[2375]: I0317 19:08:12.667842 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.668861 kubelet[2375]: E0317 19:08:12.668673 2375 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.57:6443/api/v1/nodes\": dial tcp 172.24.4.57:6443: connect: connection refused" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal"
Mar 17 19:08:12.806450 kubelet[2375]: W0317 19:08:12.806268 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:12.806450 kubelet[2375]: E0317 19:08:12.806406 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:13.135371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151113220.mount: Deactivated successfully.
Mar 17 19:08:13.147305 containerd[1480]: time="2025-03-17T19:08:13.147083969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 19:08:13.152126 containerd[1480]: time="2025-03-17T19:08:13.151980782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Mar 17 19:08:13.155011 containerd[1480]: time="2025-03-17T19:08:13.154858878Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 19:08:13.159431 containerd[1480]: time="2025-03-17T19:08:13.158994862Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 19:08:13.163147 containerd[1480]: time="2025-03-17T19:08:13.162347956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 19:08:13.163147 containerd[1480]: time="2025-03-17T19:08:13.162798078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 19:08:13.163147 containerd[1480]: time="2025-03-17T19:08:13.162964953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 19:08:13.171566 containerd[1480]: time="2025-03-17T19:08:13.171489101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 19:08:13.174174 containerd[1480]: time="2025-03-17T19:08:13.174105800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 747.729125ms"
Mar 17 19:08:13.184125 containerd[1480]: time="2025-03-17T19:08:13.184006261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 737.978941ms"
Mar 17 19:08:13.193123 kubelet[2375]: W0317 19:08:13.192938 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-c-fc9f5e1ee2.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:13.194012 kubelet[2375]: E0317 19:08:13.193943 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-c-fc9f5e1ee2.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused
Mar 17 19:08:13.221941 containerd[1480]: time="2025-03-17T19:08:13.221485721Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 781.133107ms"
Mar 17 19:08:13.360109 kubelet[2375]: E0317 19:08:13.360045 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-c-fc9f5e1ee2.novalocal?timeout=10s\": dial tcp 172.24.4.57:6443: connect: connection refused" interval="1.6s"
Mar 17 19:08:13.383788 containerd[1480]: time="2025-03-17T19:08:13.383380253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 19:08:13.383788 containerd[1480]: time="2025-03-17T19:08:13.383443228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 19:08:13.383788 containerd[1480]: time="2025-03-17T19:08:13.383456151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:13.383788 containerd[1480]: time="2025-03-17T19:08:13.383539814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:13.385274 containerd[1480]: time="2025-03-17T19:08:13.385203453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 19:08:13.385411 containerd[1480]: time="2025-03-17T19:08:13.385327479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 19:08:13.385882 containerd[1480]: time="2025-03-17T19:08:13.385792328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:13.386448 containerd[1480]: time="2025-03-17T19:08:13.386071418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:08:13.386448 containerd[1480]: time="2025-03-17T19:08:13.386251097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 19:08:13.386448 containerd[1480]: time="2025-03-17T19:08:13.386303772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 19:08:13.386448 containerd[1480]: time="2025-03-17T19:08:13.386323108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:08:13.386626 containerd[1480]: time="2025-03-17T19:08:13.386398956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 19:08:13.409608 systemd[1]: Started cri-containerd-5e9b3d3a07ebc2c00db5f62e382222ab5dc16798875222aafbb86b37c5827f36.scope - libcontainer container 5e9b3d3a07ebc2c00db5f62e382222ab5dc16798875222aafbb86b37c5827f36. Mar 17 19:08:13.421197 systemd[1]: Started cri-containerd-d22f95eaea9c61bfbb9f4dbddb22f49c3a4c860a332ed2749228d1df58dce64f.scope - libcontainer container d22f95eaea9c61bfbb9f4dbddb22f49c3a4c860a332ed2749228d1df58dce64f. Mar 17 19:08:13.425505 systemd[1]: Started cri-containerd-2389c42ecdef8b4c879c0e2b3da3f8be85b0a4794ab70be8e5c83ba0ee786e71.scope - libcontainer container 2389c42ecdef8b4c879c0e2b3da3f8be85b0a4794ab70be8e5c83ba0ee786e71. 
Mar 17 19:08:13.465590 kubelet[2375]: W0317 19:08:13.465524 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused Mar 17 19:08:13.465838 kubelet[2375]: E0317 19:08:13.465826 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused Mar 17 19:08:13.471309 kubelet[2375]: I0317 19:08:13.471278 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:13.473513 kubelet[2375]: E0317 19:08:13.473370 2375 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.57:6443/api/v1/nodes\": dial tcp 172.24.4.57:6443: connect: connection refused" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:13.480093 kubelet[2375]: W0317 19:08:13.479943 2375 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused Mar 17 19:08:13.480093 kubelet[2375]: E0317 19:08:13.480065 2375 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused Mar 17 19:08:13.493185 containerd[1480]: time="2025-03-17T19:08:13.492927692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-c-fc9f5e1ee2.novalocal,Uid:a5c78f151e5e46d433109a5f724f0b12,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"5e9b3d3a07ebc2c00db5f62e382222ab5dc16798875222aafbb86b37c5827f36\"" Mar 17 19:08:13.495811 containerd[1480]: time="2025-03-17T19:08:13.495510410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal,Uid:a79bfc0bc82587ff8ce62c29258d94df,Namespace:kube-system,Attempt:0,} returns sandbox id \"2389c42ecdef8b4c879c0e2b3da3f8be85b0a4794ab70be8e5c83ba0ee786e71\"" Mar 17 19:08:13.498910 containerd[1480]: time="2025-03-17T19:08:13.498814634Z" level=info msg="CreateContainer within sandbox \"5e9b3d3a07ebc2c00db5f62e382222ab5dc16798875222aafbb86b37c5827f36\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 19:08:13.500882 containerd[1480]: time="2025-03-17T19:08:13.500842157Z" level=info msg="CreateContainer within sandbox \"2389c42ecdef8b4c879c0e2b3da3f8be85b0a4794ab70be8e5c83ba0ee786e71\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 19:08:13.503709 containerd[1480]: time="2025-03-17T19:08:13.503671955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal,Uid:ce5820534d390545fa5b4dafd45c5861,Namespace:kube-system,Attempt:0,} returns sandbox id \"d22f95eaea9c61bfbb9f4dbddb22f49c3a4c860a332ed2749228d1df58dce64f\"" Mar 17 19:08:13.507570 containerd[1480]: time="2025-03-17T19:08:13.507463149Z" level=info msg="CreateContainer within sandbox \"d22f95eaea9c61bfbb9f4dbddb22f49c3a4c860a332ed2749228d1df58dce64f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 19:08:13.546152 containerd[1480]: time="2025-03-17T19:08:13.546104060Z" level=info msg="CreateContainer within sandbox \"2389c42ecdef8b4c879c0e2b3da3f8be85b0a4794ab70be8e5c83ba0ee786e71\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4dc21104fcb3e53b5d0a0477e4c963e6e3ee0c679321e84afd40209d47bb8a58\"" Mar 17 19:08:13.547010 containerd[1480]: time="2025-03-17T19:08:13.546706591Z" level=info 
msg="StartContainer for \"4dc21104fcb3e53b5d0a0477e4c963e6e3ee0c679321e84afd40209d47bb8a58\"" Mar 17 19:08:13.548534 containerd[1480]: time="2025-03-17T19:08:13.548457588Z" level=info msg="CreateContainer within sandbox \"5e9b3d3a07ebc2c00db5f62e382222ab5dc16798875222aafbb86b37c5827f36\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bebaa1a0fc8c1eb2e56f6adcda93568b853382e8713b0d1e06e43549c5ef3ab3\"" Mar 17 19:08:13.548931 containerd[1480]: time="2025-03-17T19:08:13.548874199Z" level=info msg="StartContainer for \"bebaa1a0fc8c1eb2e56f6adcda93568b853382e8713b0d1e06e43549c5ef3ab3\"" Mar 17 19:08:13.557309 containerd[1480]: time="2025-03-17T19:08:13.557248032Z" level=info msg="CreateContainer within sandbox \"d22f95eaea9c61bfbb9f4dbddb22f49c3a4c860a332ed2749228d1df58dce64f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b6c1394154f56e0ca7aeb1eeaea71a2e759046a50c38273e120fa34387188b5\"" Mar 17 19:08:13.557754 containerd[1480]: time="2025-03-17T19:08:13.557726316Z" level=info msg="StartContainer for \"3b6c1394154f56e0ca7aeb1eeaea71a2e759046a50c38273e120fa34387188b5\"" Mar 17 19:08:13.586655 systemd[1]: Started cri-containerd-4dc21104fcb3e53b5d0a0477e4c963e6e3ee0c679321e84afd40209d47bb8a58.scope - libcontainer container 4dc21104fcb3e53b5d0a0477e4c963e6e3ee0c679321e84afd40209d47bb8a58. Mar 17 19:08:13.596627 systemd[1]: Started cri-containerd-bebaa1a0fc8c1eb2e56f6adcda93568b853382e8713b0d1e06e43549c5ef3ab3.scope - libcontainer container bebaa1a0fc8c1eb2e56f6adcda93568b853382e8713b0d1e06e43549c5ef3ab3. Mar 17 19:08:13.610665 systemd[1]: Started cri-containerd-3b6c1394154f56e0ca7aeb1eeaea71a2e759046a50c38273e120fa34387188b5.scope - libcontainer container 3b6c1394154f56e0ca7aeb1eeaea71a2e759046a50c38273e120fa34387188b5. 
Mar 17 19:08:13.666148 containerd[1480]: time="2025-03-17T19:08:13.665106918Z" level=info msg="StartContainer for \"4dc21104fcb3e53b5d0a0477e4c963e6e3ee0c679321e84afd40209d47bb8a58\" returns successfully" Mar 17 19:08:13.679170 containerd[1480]: time="2025-03-17T19:08:13.679124629Z" level=info msg="StartContainer for \"3b6c1394154f56e0ca7aeb1eeaea71a2e759046a50c38273e120fa34387188b5\" returns successfully" Mar 17 19:08:13.711367 containerd[1480]: time="2025-03-17T19:08:13.711317083Z" level=info msg="StartContainer for \"bebaa1a0fc8c1eb2e56f6adcda93568b853382e8713b0d1e06e43549c5ef3ab3\" returns successfully" Mar 17 19:08:15.078171 kubelet[2375]: I0317 19:08:15.078136 2375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:15.750581 kubelet[2375]: E0317 19:08:15.750497 2375 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" not found" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:15.818481 kubelet[2375]: I0317 19:08:15.818435 2375 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:15.918889 kubelet[2375]: I0317 19:08:15.918809 2375 apiserver.go:52] "Watching apiserver" Mar 17 19:08:15.956044 kubelet[2375]: I0317 19:08:15.955988 2375 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 19:08:18.753179 kubelet[2375]: W0317 19:08:18.752631 2375 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 19:08:19.311467 systemd[1]: Reload requested from client PID 2652 ('systemctl') (unit session-11.scope)... Mar 17 19:08:19.312017 systemd[1]: Reloading... Mar 17 19:08:19.428060 zram_generator::config[2701]: No configuration found. 
Mar 17 19:08:19.610881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 19:08:19.755058 systemd[1]: Reloading finished in 441 ms. Mar 17 19:08:19.781259 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 19:08:19.798365 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 19:08:19.798644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 19:08:19.798741 systemd[1]: kubelet.service: Consumed 1.534s CPU time, 115.5M memory peak. Mar 17 19:08:19.804490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 19:08:19.936559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 19:08:19.952795 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 19:08:20.003051 kubelet[2761]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 19:08:20.003051 kubelet[2761]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 19:08:20.003051 kubelet[2761]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 19:08:20.003051 kubelet[2761]: I0317 19:08:20.002703 2761 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 19:08:20.011165 kubelet[2761]: I0317 19:08:20.009526 2761 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 19:08:20.011165 kubelet[2761]: I0317 19:08:20.009550 2761 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 19:08:20.011165 kubelet[2761]: I0317 19:08:20.009776 2761 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 19:08:20.011490 kubelet[2761]: I0317 19:08:20.011476 2761 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 19:08:20.012836 kubelet[2761]: I0317 19:08:20.012806 2761 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 19:08:20.018678 kubelet[2761]: I0317 19:08:20.018658 2761 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 19:08:20.019012 kubelet[2761]: I0317 19:08:20.018987 2761 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 19:08:20.019259 kubelet[2761]: I0317 19:08:20.019085 2761 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-c-fc9f5e1ee2.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 19:08:20.019408 kubelet[2761]: I0317 19:08:20.019396 2761 topology_manager.go:138] "Creating topology manager with none 
policy" Mar 17 19:08:20.019477 kubelet[2761]: I0317 19:08:20.019468 2761 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 19:08:20.019569 kubelet[2761]: I0317 19:08:20.019558 2761 state_mem.go:36] "Initialized new in-memory state store" Mar 17 19:08:20.019710 kubelet[2761]: I0317 19:08:20.019698 2761 kubelet.go:400] "Attempting to sync node with API server" Mar 17 19:08:20.019777 kubelet[2761]: I0317 19:08:20.019768 2761 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 19:08:20.019854 kubelet[2761]: I0317 19:08:20.019839 2761 kubelet.go:312] "Adding apiserver pod source" Mar 17 19:08:20.019934 kubelet[2761]: I0317 19:08:20.019924 2761 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 19:08:20.023716 kubelet[2761]: I0317 19:08:20.023691 2761 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 19:08:20.023878 kubelet[2761]: I0317 19:08:20.023859 2761 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 19:08:20.027993 kubelet[2761]: I0317 19:08:20.027969 2761 server.go:1264] "Started kubelet" Mar 17 19:08:20.039128 kubelet[2761]: I0317 19:08:20.039079 2761 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 19:08:20.041628 kubelet[2761]: I0317 19:08:20.040321 2761 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 19:08:20.041905 kubelet[2761]: I0317 19:08:20.041881 2761 server.go:455] "Adding debug handlers to kubelet server" Mar 17 19:08:20.042193 kubelet[2761]: I0317 19:08:20.042165 2761 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 19:08:20.043830 kubelet[2761]: I0317 19:08:20.043813 2761 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 19:08:20.051339 kubelet[2761]: I0317 
19:08:20.051320 2761 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 19:08:20.051574 kubelet[2761]: I0317 19:08:20.051560 2761 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 19:08:20.051758 kubelet[2761]: I0317 19:08:20.051746 2761 reconciler.go:26] "Reconciler: start to sync state" Mar 17 19:08:20.052165 kubelet[2761]: E0317 19:08:20.052148 2761 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 19:08:20.053483 kubelet[2761]: I0317 19:08:20.053468 2761 factory.go:221] Registration of the systemd container factory successfully Mar 17 19:08:20.053892 kubelet[2761]: I0317 19:08:20.053660 2761 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 19:08:20.056470 kubelet[2761]: I0317 19:08:20.056451 2761 factory.go:221] Registration of the containerd container factory successfully Mar 17 19:08:20.057336 kubelet[2761]: I0317 19:08:20.057300 2761 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 19:08:20.060103 kubelet[2761]: I0317 19:08:20.060079 2761 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 19:08:20.060179 kubelet[2761]: I0317 19:08:20.060150 2761 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 19:08:20.060179 kubelet[2761]: I0317 19:08:20.060177 2761 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 19:08:20.060236 kubelet[2761]: E0317 19:08:20.060218 2761 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 19:08:20.136762 kubelet[2761]: I0317 19:08:20.136721 2761 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 19:08:20.137013 kubelet[2761]: I0317 19:08:20.136984 2761 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 19:08:20.137186 kubelet[2761]: I0317 19:08:20.137118 2761 state_mem.go:36] "Initialized new in-memory state store" Mar 17 19:08:20.137471 kubelet[2761]: I0317 19:08:20.137457 2761 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 19:08:20.137887 kubelet[2761]: I0317 19:08:20.137549 2761 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 19:08:20.138059 kubelet[2761]: I0317 19:08:20.137961 2761 policy_none.go:49] "None policy: Start" Mar 17 19:08:20.141167 kubelet[2761]: I0317 19:08:20.141139 2761 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 19:08:20.141167 kubelet[2761]: I0317 19:08:20.141169 2761 state_mem.go:35] "Initializing new in-memory state store" Mar 17 19:08:20.141537 kubelet[2761]: I0317 19:08:20.141404 2761 state_mem.go:75] "Updated machine memory state" Mar 17 19:08:20.146431 kubelet[2761]: I0317 19:08:20.146410 2761 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 19:08:20.147206 kubelet[2761]: I0317 19:08:20.146915 2761 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 19:08:20.147206 kubelet[2761]: I0317 19:08:20.147080 2761 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 19:08:20.157611 kubelet[2761]: I0317 19:08:20.157575 2761 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.161084 kubelet[2761]: I0317 19:08:20.160382 2761 topology_manager.go:215] "Topology Admit Handler" podUID="a79bfc0bc82587ff8ce62c29258d94df" podNamespace="kube-system" podName="kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.161084 kubelet[2761]: I0317 19:08:20.160462 2761 topology_manager.go:215] "Topology Admit Handler" podUID="ce5820534d390545fa5b4dafd45c5861" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.161084 kubelet[2761]: I0317 19:08:20.160516 2761 topology_manager.go:215] "Topology Admit Handler" podUID="a5c78f151e5e46d433109a5f724f0b12" podNamespace="kube-system" podName="kube-scheduler-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.256091 kubelet[2761]: I0317 19:08:20.252786 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a79bfc0bc82587ff8ce62c29258d94df-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"a79bfc0bc82587ff8ce62c29258d94df\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.256091 kubelet[2761]: I0317 19:08:20.252875 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.256091 kubelet[2761]: I0317 19:08:20.252933 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.256091 kubelet[2761]: I0317 19:08:20.252980 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.256597 kubelet[2761]: I0317 19:08:20.253074 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.256597 kubelet[2761]: I0317 19:08:20.253129 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a79bfc0bc82587ff8ce62c29258d94df-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"a79bfc0bc82587ff8ce62c29258d94df\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.256597 kubelet[2761]: I0317 19:08:20.253175 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a79bfc0bc82587ff8ce62c29258d94df-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" 
(UID: \"a79bfc0bc82587ff8ce62c29258d94df\") " pod="kube-system/kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.256597 kubelet[2761]: I0317 19:08:20.253222 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce5820534d390545fa5b4dafd45c5861-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"ce5820534d390545fa5b4dafd45c5861\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.256867 kubelet[2761]: I0317 19:08:20.253266 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5c78f151e5e46d433109a5f724f0b12-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" (UID: \"a5c78f151e5e46d433109a5f724f0b12\") " pod="kube-system/kube-scheduler-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.264061 kubelet[2761]: W0317 19:08:20.258978 2761 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 19:08:20.264061 kubelet[2761]: W0317 19:08:20.259287 2761 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 19:08:20.264061 kubelet[2761]: W0317 19:08:20.259450 2761 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 19:08:20.264061 kubelet[2761]: E0317 19:08:20.260106 2761 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.401699 
kubelet[2761]: I0317 19:08:20.401603 2761 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.402312 kubelet[2761]: I0317 19:08:20.401788 2761 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal" Mar 17 19:08:20.583947 sudo[2793]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 19:08:20.584726 sudo[2793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 19:08:21.021919 kubelet[2761]: I0317 19:08:21.021100 2761 apiserver.go:52] "Watching apiserver" Mar 17 19:08:21.052585 kubelet[2761]: I0317 19:08:21.052539 2761 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 19:08:21.156078 kubelet[2761]: I0317 19:08:21.155415 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-0-c-fc9f5e1ee2.novalocal" podStartSLOduration=3.155396785 podStartE2EDuration="3.155396785s" podCreationTimestamp="2025-03-17 19:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 19:08:21.155330592 +0000 UTC m=+1.198474138" watchObservedRunningTime="2025-03-17 19:08:21.155396785 +0000 UTC m=+1.198540321" Mar 17 19:08:21.182564 kubelet[2761]: I0317 19:08:21.182283 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-0-c-fc9f5e1ee2.novalocal" podStartSLOduration=1.182258675 podStartE2EDuration="1.182258675s" podCreationTimestamp="2025-03-17 19:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 19:08:21.167891239 +0000 UTC m=+1.211034785" watchObservedRunningTime="2025-03-17 19:08:21.182258675 +0000 UTC m=+1.225402231" Mar 17 
19:08:21.197079 kubelet[2761]: I0317 19:08:21.196994 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-0-c-fc9f5e1ee2.novalocal" podStartSLOduration=1.196975253 podStartE2EDuration="1.196975253s" podCreationTimestamp="2025-03-17 19:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 19:08:21.182863683 +0000 UTC m=+1.226007209" watchObservedRunningTime="2025-03-17 19:08:21.196975253 +0000 UTC m=+1.240118789" Mar 17 19:08:21.210349 sudo[2793]: pam_unix(sudo:session): session closed for user root Mar 17 19:08:24.496654 sudo[1747]: pam_unix(sudo:session): session closed for user root Mar 17 19:08:24.748605 sshd[1746]: Connection closed by 172.24.4.1 port 49850 Mar 17 19:08:24.750117 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Mar 17 19:08:24.760845 systemd[1]: sshd@8-172.24.4.57:22-172.24.4.1:49850.service: Deactivated successfully. Mar 17 19:08:24.766748 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 19:08:24.767598 systemd[1]: session-11.scope: Consumed 7.814s CPU time, 296.5M memory peak. Mar 17 19:08:24.772456 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. Mar 17 19:08:24.775100 systemd-logind[1460]: Removed session 11. Mar 17 19:08:32.757285 kubelet[2761]: I0317 19:08:32.757237 2761 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 19:08:32.761436 kubelet[2761]: I0317 19:08:32.759433 2761 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 19:08:32.761613 containerd[1480]: time="2025-03-17T19:08:32.758055820Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 19:08:33.330691 kubelet[2761]: I0317 19:08:33.330531 2761 topology_manager.go:215] "Topology Admit Handler" podUID="5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3" podNamespace="kube-system" podName="kube-proxy-5jcvg"
Mar 17 19:08:33.345073 kubelet[2761]: I0317 19:08:33.344284 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3-kube-proxy\") pod \"kube-proxy-5jcvg\" (UID: \"5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3\") " pod="kube-system/kube-proxy-5jcvg"
Mar 17 19:08:33.345073 kubelet[2761]: I0317 19:08:33.344356 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3-lib-modules\") pod \"kube-proxy-5jcvg\" (UID: \"5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3\") " pod="kube-system/kube-proxy-5jcvg"
Mar 17 19:08:33.345073 kubelet[2761]: I0317 19:08:33.344409 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h76jm\" (UniqueName: \"kubernetes.io/projected/5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3-kube-api-access-h76jm\") pod \"kube-proxy-5jcvg\" (UID: \"5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3\") " pod="kube-system/kube-proxy-5jcvg"
Mar 17 19:08:33.345073 kubelet[2761]: I0317 19:08:33.344462 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3-xtables-lock\") pod \"kube-proxy-5jcvg\" (UID: \"5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3\") " pod="kube-system/kube-proxy-5jcvg"
Mar 17 19:08:33.359680 systemd[1]: Created slice kubepods-besteffort-pod5ac4e0df_73ad_4d8b_921c_68ec04ed5aa3.slice - libcontainer container kubepods-besteffort-pod5ac4e0df_73ad_4d8b_921c_68ec04ed5aa3.slice.
Mar 17 19:08:33.375423 kubelet[2761]: I0317 19:08:33.375199 2761 topology_manager.go:215] "Topology Admit Handler" podUID="c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" podNamespace="kube-system" podName="cilium-6k8z8"
Mar 17 19:08:33.384278 systemd[1]: Created slice kubepods-burstable-podc0e4e2f4_a265_4c49_a5e9_b8b792b9317e.slice - libcontainer container kubepods-burstable-podc0e4e2f4_a265_4c49_a5e9_b8b792b9317e.slice.
Mar 17 19:08:33.446259 kubelet[2761]: I0317 19:08:33.445199 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-hostproc\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446259 kubelet[2761]: I0317 19:08:33.445246 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-config-path\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446259 kubelet[2761]: I0317 19:08:33.445266 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-hubble-tls\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446259 kubelet[2761]: I0317 19:08:33.445299 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-bpf-maps\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446259 kubelet[2761]: I0317 19:08:33.445320 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-clustermesh-secrets\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446259 kubelet[2761]: I0317 19:08:33.445337 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-xtables-lock\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446574 kubelet[2761]: I0317 19:08:33.445356 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-host-proc-sys-kernel\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446574 kubelet[2761]: I0317 19:08:33.445384 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-lib-modules\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446574 kubelet[2761]: I0317 19:08:33.445402 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cni-path\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446574 kubelet[2761]: I0317 19:08:33.445419 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-etc-cni-netd\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446574 kubelet[2761]: I0317 19:08:33.445437 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-host-proc-sys-net\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446574 kubelet[2761]: I0317 19:08:33.445465 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r7jm\" (UniqueName: \"kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-kube-api-access-4r7jm\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446772 kubelet[2761]: I0317 19:08:33.445486 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-run\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.446772 kubelet[2761]: I0317 19:08:33.445526 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-cgroup\") pod \"cilium-6k8z8\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " pod="kube-system/cilium-6k8z8"
Mar 17 19:08:33.451646 kubelet[2761]: E0317 19:08:33.451412 2761 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 17 19:08:33.451646 kubelet[2761]: E0317 19:08:33.451438 2761 projected.go:200] Error preparing data for projected volume kube-api-access-h76jm for pod kube-system/kube-proxy-5jcvg: configmap "kube-root-ca.crt" not found
Mar 17 19:08:33.451646 kubelet[2761]: E0317 19:08:33.451487 2761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3-kube-api-access-h76jm podName:5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3 nodeName:}" failed. No retries permitted until 2025-03-17 19:08:33.951468337 +0000 UTC m=+13.994611863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h76jm" (UniqueName: "kubernetes.io/projected/5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3-kube-api-access-h76jm") pod "kube-proxy-5jcvg" (UID: "5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3") : configmap "kube-root-ca.crt" not found
Mar 17 19:08:33.565005 kubelet[2761]: E0317 19:08:33.564901 2761 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 17 19:08:33.568125 kubelet[2761]: E0317 19:08:33.565459 2761 projected.go:200] Error preparing data for projected volume kube-api-access-4r7jm for pod kube-system/cilium-6k8z8: configmap "kube-root-ca.crt" not found
Mar 17 19:08:33.569080 kubelet[2761]: E0317 19:08:33.568764 2761 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-kube-api-access-4r7jm podName:c0e4e2f4-a265-4c49-a5e9-b8b792b9317e nodeName:}" failed. No retries permitted until 2025-03-17 19:08:34.068083063 +0000 UTC m=+14.111226680 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4r7jm" (UniqueName: "kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-kube-api-access-4r7jm") pod "cilium-6k8z8" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e") : configmap "kube-root-ca.crt" not found
Mar 17 19:08:33.872366 kubelet[2761]: I0317 19:08:33.871544 2761 topology_manager.go:215] "Topology Admit Handler" podUID="812fc024-a846-49c3-b983-24c834ccdc65" podNamespace="kube-system" podName="cilium-operator-599987898-rb6rn"
Mar 17 19:08:33.890924 systemd[1]: Created slice kubepods-besteffort-pod812fc024_a846_49c3_b983_24c834ccdc65.slice - libcontainer container kubepods-besteffort-pod812fc024_a846_49c3_b983_24c834ccdc65.slice.
Mar 17 19:08:33.949699 kubelet[2761]: I0317 19:08:33.949666 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/812fc024-a846-49c3-b983-24c834ccdc65-cilium-config-path\") pod \"cilium-operator-599987898-rb6rn\" (UID: \"812fc024-a846-49c3-b983-24c834ccdc65\") " pod="kube-system/cilium-operator-599987898-rb6rn"
Mar 17 19:08:33.949978 kubelet[2761]: I0317 19:08:33.949932 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j9w6\" (UniqueName: \"kubernetes.io/projected/812fc024-a846-49c3-b983-24c834ccdc65-kube-api-access-8j9w6\") pod \"cilium-operator-599987898-rb6rn\" (UID: \"812fc024-a846-49c3-b983-24c834ccdc65\") " pod="kube-system/cilium-operator-599987898-rb6rn"
Mar 17 19:08:34.199185 containerd[1480]: time="2025-03-17T19:08:34.199016476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rb6rn,Uid:812fc024-a846-49c3-b983-24c834ccdc65,Namespace:kube-system,Attempt:0,}"
Mar 17 19:08:34.243977 containerd[1480]: time="2025-03-17T19:08:34.243840514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 19:08:34.244416 containerd[1480]: time="2025-03-17T19:08:34.243937335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 19:08:34.244416 containerd[1480]: time="2025-03-17T19:08:34.244045368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:34.244416 containerd[1480]: time="2025-03-17T19:08:34.244276980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:34.274353 systemd[1]: Started cri-containerd-b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284.scope - libcontainer container b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284.
Mar 17 19:08:34.281628 containerd[1480]: time="2025-03-17T19:08:34.279334397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jcvg,Uid:5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3,Namespace:kube-system,Attempt:0,}"
Mar 17 19:08:34.288707 containerd[1480]: time="2025-03-17T19:08:34.288655806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6k8z8,Uid:c0e4e2f4-a265-4c49-a5e9-b8b792b9317e,Namespace:kube-system,Attempt:0,}"
Mar 17 19:08:34.336904 containerd[1480]: time="2025-03-17T19:08:34.336813881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 19:08:34.337140 containerd[1480]: time="2025-03-17T19:08:34.336917555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 19:08:34.337140 containerd[1480]: time="2025-03-17T19:08:34.336963381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:34.337293 containerd[1480]: time="2025-03-17T19:08:34.337110805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:34.359991 containerd[1480]: time="2025-03-17T19:08:34.359889990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-rb6rn,Uid:812fc024-a846-49c3-b983-24c834ccdc65,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\""
Mar 17 19:08:34.362220 systemd[1]: Started cri-containerd-ad8fb04de509f5d61b7fe47d785b9298849d70d52562c1eca0e8cf6e9dcde43f.scope - libcontainer container ad8fb04de509f5d61b7fe47d785b9298849d70d52562c1eca0e8cf6e9dcde43f.
Mar 17 19:08:34.364759 containerd[1480]: time="2025-03-17T19:08:34.363122307Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 19:08:34.371572 containerd[1480]: time="2025-03-17T19:08:34.371260627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 19:08:34.371908 containerd[1480]: time="2025-03-17T19:08:34.371420185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 19:08:34.371908 containerd[1480]: time="2025-03-17T19:08:34.371442146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:34.371908 containerd[1480]: time="2025-03-17T19:08:34.371771481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:34.398343 systemd[1]: Started cri-containerd-e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd.scope - libcontainer container e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd.
Mar 17 19:08:34.399987 containerd[1480]: time="2025-03-17T19:08:34.399728587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jcvg,Uid:5ac4e0df-73ad-4d8b-921c-68ec04ed5aa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad8fb04de509f5d61b7fe47d785b9298849d70d52562c1eca0e8cf6e9dcde43f\""
Mar 17 19:08:34.405919 containerd[1480]: time="2025-03-17T19:08:34.405884955Z" level=info msg="CreateContainer within sandbox \"ad8fb04de509f5d61b7fe47d785b9298849d70d52562c1eca0e8cf6e9dcde43f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 19:08:34.425566 containerd[1480]: time="2025-03-17T19:08:34.425515699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6k8z8,Uid:c0e4e2f4-a265-4c49-a5e9-b8b792b9317e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\""
Mar 17 19:08:34.446983 containerd[1480]: time="2025-03-17T19:08:34.446915297Z" level=info msg="CreateContainer within sandbox \"ad8fb04de509f5d61b7fe47d785b9298849d70d52562c1eca0e8cf6e9dcde43f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e57299fa585c64171cf1117dc5a134d3395e6beb85c89757d8eceee74e71b55\""
Mar 17 19:08:34.447992 containerd[1480]: time="2025-03-17T19:08:34.447962794Z" level=info msg="StartContainer for \"0e57299fa585c64171cf1117dc5a134d3395e6beb85c89757d8eceee74e71b55\""
Mar 17 19:08:34.476233 systemd[1]: Started cri-containerd-0e57299fa585c64171cf1117dc5a134d3395e6beb85c89757d8eceee74e71b55.scope - libcontainer container 0e57299fa585c64171cf1117dc5a134d3395e6beb85c89757d8eceee74e71b55.
Mar 17 19:08:34.519910 containerd[1480]: time="2025-03-17T19:08:34.518403977Z" level=info msg="StartContainer for \"0e57299fa585c64171cf1117dc5a134d3395e6beb85c89757d8eceee74e71b55\" returns successfully"
Mar 17 19:08:36.094929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140231867.mount: Deactivated successfully.
Mar 17 19:08:36.770298 containerd[1480]: time="2025-03-17T19:08:36.770236759Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:36.772046 containerd[1480]: time="2025-03-17T19:08:36.771973984Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 17 19:08:36.772516 containerd[1480]: time="2025-03-17T19:08:36.772471063Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:36.776066 containerd[1480]: time="2025-03-17T19:08:36.774173834Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.411018134s"
Mar 17 19:08:36.776066 containerd[1480]: time="2025-03-17T19:08:36.774216734Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 19:08:36.779113 containerd[1480]: time="2025-03-17T19:08:36.779071774Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 19:08:36.784588 containerd[1480]: time="2025-03-17T19:08:36.784546441Z" level=info msg="CreateContainer within sandbox \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 19:08:36.807978 containerd[1480]: time="2025-03-17T19:08:36.807935845Z" level=info msg="CreateContainer within sandbox \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\""
Mar 17 19:08:36.808660 containerd[1480]: time="2025-03-17T19:08:36.808616978Z" level=info msg="StartContainer for \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\""
Mar 17 19:08:36.840226 systemd[1]: Started cri-containerd-baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1.scope - libcontainer container baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1.
Mar 17 19:08:36.869070 containerd[1480]: time="2025-03-17T19:08:36.868439202Z" level=info msg="StartContainer for \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\" returns successfully"
Mar 17 19:08:37.167562 kubelet[2761]: I0317 19:08:37.167254 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-rb6rn" podStartSLOduration=1.751549282 podStartE2EDuration="4.167221831s" podCreationTimestamp="2025-03-17 19:08:33 +0000 UTC" firstStartedPulling="2025-03-17 19:08:34.361969133 +0000 UTC m=+14.405112669" lastFinishedPulling="2025-03-17 19:08:36.777641692 +0000 UTC m=+16.820785218" observedRunningTime="2025-03-17 19:08:37.166063426 +0000 UTC m=+17.209206972" watchObservedRunningTime="2025-03-17 19:08:37.167221831 +0000 UTC m=+17.210365377"
Mar 17 19:08:37.167562 kubelet[2761]: I0317 19:08:37.167509 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jcvg" podStartSLOduration=4.167500361 podStartE2EDuration="4.167500361s" podCreationTimestamp="2025-03-17 19:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 19:08:35.168156223 +0000 UTC m=+15.211299799" watchObservedRunningTime="2025-03-17 19:08:37.167500361 +0000 UTC m=+17.210643897"
Mar 17 19:08:41.899271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527684196.mount: Deactivated successfully.
Mar 17 19:08:44.486290 containerd[1480]: time="2025-03-17T19:08:44.486176719Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:44.487937 containerd[1480]: time="2025-03-17T19:08:44.487706409Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 17 19:08:44.489774 containerd[1480]: time="2025-03-17T19:08:44.489500252Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 19:08:44.491262 containerd[1480]: time="2025-03-17T19:08:44.491233011Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.71212039s"
Mar 17 19:08:44.491346 containerd[1480]: time="2025-03-17T19:08:44.491329512Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 19:08:44.493775 containerd[1480]: time="2025-03-17T19:08:44.493736822Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 19:08:44.506434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708024245.mount: Deactivated successfully.
Mar 17 19:08:44.511685 containerd[1480]: time="2025-03-17T19:08:44.511608704Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\""
Mar 17 19:08:44.513242 containerd[1480]: time="2025-03-17T19:08:44.512299094Z" level=info msg="StartContainer for \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\""
Mar 17 19:08:44.548172 systemd[1]: Started cri-containerd-12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19.scope - libcontainer container 12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19.
Mar 17 19:08:44.575223 containerd[1480]: time="2025-03-17T19:08:44.575182764Z" level=info msg="StartContainer for \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\" returns successfully"
Mar 17 19:08:44.588711 systemd[1]: cri-containerd-12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19.scope: Deactivated successfully.
Mar 17 19:08:45.496410 containerd[1480]: time="2025-03-17T19:08:45.496295774Z" level=info msg="shim disconnected" id=12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19 namespace=k8s.io
Mar 17 19:08:45.496410 containerd[1480]: time="2025-03-17T19:08:45.496388687Z" level=warning msg="cleaning up after shim disconnected" id=12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19 namespace=k8s.io
Mar 17 19:08:45.496410 containerd[1480]: time="2025-03-17T19:08:45.496414656Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:08:45.510967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19-rootfs.mount: Deactivated successfully.
Mar 17 19:08:46.263738 containerd[1480]: time="2025-03-17T19:08:46.263573140Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 19:08:46.319508 containerd[1480]: time="2025-03-17T19:08:46.315338960Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\""
Mar 17 19:08:46.329169 containerd[1480]: time="2025-03-17T19:08:46.319883117Z" level=info msg="StartContainer for \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\""
Mar 17 19:08:46.331147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589265156.mount: Deactivated successfully.
Mar 17 19:08:46.368163 systemd[1]: Started cri-containerd-e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf.scope - libcontainer container e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf.
Mar 17 19:08:46.403292 containerd[1480]: time="2025-03-17T19:08:46.403165065Z" level=info msg="StartContainer for \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\" returns successfully"
Mar 17 19:08:46.404503 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 19:08:46.405118 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 19:08:46.405765 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 17 19:08:46.412656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 19:08:46.413469 systemd[1]: cri-containerd-e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf.scope: Deactivated successfully.
Mar 17 19:08:46.431049 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 19:08:46.445914 containerd[1480]: time="2025-03-17T19:08:46.445817136Z" level=info msg="shim disconnected" id=e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf namespace=k8s.io
Mar 17 19:08:46.446280 containerd[1480]: time="2025-03-17T19:08:46.446095415Z" level=warning msg="cleaning up after shim disconnected" id=e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf namespace=k8s.io
Mar 17 19:08:46.446280 containerd[1480]: time="2025-03-17T19:08:46.446112808Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:08:46.507147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf-rootfs.mount: Deactivated successfully.
Mar 17 19:08:47.275592 containerd[1480]: time="2025-03-17T19:08:47.275073149Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 19:08:47.321694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626572778.mount: Deactivated successfully.
Mar 17 19:08:47.326687 containerd[1480]: time="2025-03-17T19:08:47.326607453Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\""
Mar 17 19:08:47.329489 containerd[1480]: time="2025-03-17T19:08:47.329436053Z" level=info msg="StartContainer for \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\""
Mar 17 19:08:47.373170 systemd[1]: Started cri-containerd-50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f.scope - libcontainer container 50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f.
Mar 17 19:08:47.411186 systemd[1]: cri-containerd-50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f.scope: Deactivated successfully.
Mar 17 19:08:47.414778 containerd[1480]: time="2025-03-17T19:08:47.414729044Z" level=info msg="StartContainer for \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\" returns successfully"
Mar 17 19:08:47.443420 containerd[1480]: time="2025-03-17T19:08:47.443345945Z" level=info msg="shim disconnected" id=50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f namespace=k8s.io
Mar 17 19:08:47.443420 containerd[1480]: time="2025-03-17T19:08:47.443406688Z" level=warning msg="cleaning up after shim disconnected" id=50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f namespace=k8s.io
Mar 17 19:08:47.443420 containerd[1480]: time="2025-03-17T19:08:47.443418019Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:08:47.456522 containerd[1480]: time="2025-03-17T19:08:47.456472180Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:08:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 19:08:47.505279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f-rootfs.mount: Deactivated successfully.
Mar 17 19:08:48.280417 containerd[1480]: time="2025-03-17T19:08:48.279785356Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 19:08:48.318572 containerd[1480]: time="2025-03-17T19:08:48.317822418Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\""
Mar 17 19:08:48.323446 containerd[1480]: time="2025-03-17T19:08:48.320353090Z" level=info msg="StartContainer for \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\""
Mar 17 19:08:48.381198 systemd[1]: Started cri-containerd-fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d.scope - libcontainer container fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d.
Mar 17 19:08:48.409002 systemd[1]: cri-containerd-fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d.scope: Deactivated successfully.
Mar 17 19:08:48.413451 containerd[1480]: time="2025-03-17T19:08:48.413401605Z" level=info msg="StartContainer for \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\" returns successfully"
Mar 17 19:08:48.444265 containerd[1480]: time="2025-03-17T19:08:48.444127965Z" level=info msg="shim disconnected" id=fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d namespace=k8s.io
Mar 17 19:08:48.444673 containerd[1480]: time="2025-03-17T19:08:48.444337267Z" level=warning msg="cleaning up after shim disconnected" id=fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d namespace=k8s.io
Mar 17 19:08:48.444673 containerd[1480]: time="2025-03-17T19:08:48.444351553Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:08:48.506495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d-rootfs.mount: Deactivated successfully.
Mar 17 19:08:49.302335 containerd[1480]: time="2025-03-17T19:08:49.302246352Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 19:08:49.337619 containerd[1480]: time="2025-03-17T19:08:49.336571287Z" level=info msg="CreateContainer within sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\""
Mar 17 19:08:49.340119 containerd[1480]: time="2025-03-17T19:08:49.338609429Z" level=info msg="StartContainer for \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\""
Mar 17 19:08:49.403451 systemd[1]: Started cri-containerd-cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da.scope - libcontainer container cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da.
Mar 17 19:08:49.472400 containerd[1480]: time="2025-03-17T19:08:49.471685946Z" level=info msg="StartContainer for \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\" returns successfully"
Mar 17 19:08:49.507617 systemd[1]: run-containerd-runc-k8s.io-cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da-runc.5nClMc.mount: Deactivated successfully.
Mar 17 19:08:49.615517 kubelet[2761]: I0317 19:08:49.615228    2761 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 19:08:49.651915 kubelet[2761]: I0317 19:08:49.650931    2761 topology_manager.go:215] "Topology Admit Handler" podUID="d9052ab5-f440-4d45-ab1a-25ff2389fe98" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nnj9k"
Mar 17 19:08:49.661170 kubelet[2761]: I0317 19:08:49.659688    2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9052ab5-f440-4d45-ab1a-25ff2389fe98-config-volume\") pod \"coredns-7db6d8ff4d-nnj9k\" (UID: \"d9052ab5-f440-4d45-ab1a-25ff2389fe98\") " pod="kube-system/coredns-7db6d8ff4d-nnj9k"
Mar 17 19:08:49.661170 kubelet[2761]: I0317 19:08:49.659726    2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chntn\" (UniqueName: \"kubernetes.io/projected/d9052ab5-f440-4d45-ab1a-25ff2389fe98-kube-api-access-chntn\") pod \"coredns-7db6d8ff4d-nnj9k\" (UID: \"d9052ab5-f440-4d45-ab1a-25ff2389fe98\") " pod="kube-system/coredns-7db6d8ff4d-nnj9k"
Mar 17 19:08:49.665814 kubelet[2761]: I0317 19:08:49.665088    2761 topology_manager.go:215] "Topology Admit Handler" podUID="d56a3500-1041-4205-a9b6-7ac5f94ad83a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n4647"
Mar 17 19:08:49.665814 kubelet[2761]: W0317 19:08:49.665600    2761 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230-1-0-c-fc9f5e1ee2.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-0-c-fc9f5e1ee2.novalocal' and this object
Mar 17 19:08:49.665814 kubelet[2761]: E0317 19:08:49.665627    2761 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4230-1-0-c-fc9f5e1ee2.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-0-c-fc9f5e1ee2.novalocal' and this object
Mar 17 19:08:49.667328 systemd[1]: Created slice kubepods-burstable-podd9052ab5_f440_4d45_ab1a_25ff2389fe98.slice - libcontainer container kubepods-burstable-podd9052ab5_f440_4d45_ab1a_25ff2389fe98.slice.
Mar 17 19:08:49.683915 systemd[1]: Created slice kubepods-burstable-podd56a3500_1041_4205_a9b6_7ac5f94ad83a.slice - libcontainer container kubepods-burstable-podd56a3500_1041_4205_a9b6_7ac5f94ad83a.slice.
Mar 17 19:08:49.760869 kubelet[2761]: I0317 19:08:49.760796    2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d56a3500-1041-4205-a9b6-7ac5f94ad83a-config-volume\") pod \"coredns-7db6d8ff4d-n4647\" (UID: \"d56a3500-1041-4205-a9b6-7ac5f94ad83a\") " pod="kube-system/coredns-7db6d8ff4d-n4647"
Mar 17 19:08:49.761283 kubelet[2761]: I0317 19:08:49.761219    2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtrvm\" (UniqueName: \"kubernetes.io/projected/d56a3500-1041-4205-a9b6-7ac5f94ad83a-kube-api-access-vtrvm\") pod \"coredns-7db6d8ff4d-n4647\" (UID: \"d56a3500-1041-4205-a9b6-7ac5f94ad83a\") " pod="kube-system/coredns-7db6d8ff4d-n4647"
Mar 17 19:08:50.328507 kubelet[2761]: I0317 19:08:50.327924    2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6k8z8" podStartSLOduration=7.265153572 podStartE2EDuration="17.327879629s" podCreationTimestamp="2025-03-17 19:08:33 +0000 UTC" firstStartedPulling="2025-03-17 19:08:34.429378534 +0000 UTC m=+14.472522090" lastFinishedPulling="2025-03-17 19:08:44.492104621 +0000 UTC m=+24.535248147" observedRunningTime="2025-03-17 19:08:50.327656752 +0000 UTC m=+30.370800358" watchObservedRunningTime="2025-03-17 19:08:50.327879629 +0000 UTC m=+30.371023235"
Mar 17 19:08:50.883597 containerd[1480]: time="2025-03-17T19:08:50.882621289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnj9k,Uid:d9052ab5-f440-4d45-ab1a-25ff2389fe98,Namespace:kube-system,Attempt:0,}"
Mar 17 19:08:50.890368 containerd[1480]: time="2025-03-17T19:08:50.889709586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n4647,Uid:d56a3500-1041-4205-a9b6-7ac5f94ad83a,Namespace:kube-system,Attempt:0,}"
Mar 17 19:08:51.687767 systemd-networkd[1386]: cilium_host: Link UP
Mar 17 19:08:51.688144 systemd-networkd[1386]: cilium_net: Link UP
Mar 17 19:08:51.688443 systemd-networkd[1386]: cilium_net: Gained carrier
Mar 17 19:08:51.688730 systemd-networkd[1386]: cilium_host: Gained carrier
Mar 17 19:08:51.798285 systemd-networkd[1386]: cilium_vxlan: Link UP
Mar 17 19:08:51.798298 systemd-networkd[1386]: cilium_vxlan: Gained carrier
Mar 17 19:08:52.021290 systemd-networkd[1386]: cilium_net: Gained IPv6LL
Mar 17 19:08:52.116061 kernel: NET: Registered PF_ALG protocol family
Mar 17 19:08:52.644554 systemd-networkd[1386]: cilium_host: Gained IPv6LL
Mar 17 19:08:52.999183 systemd-networkd[1386]: lxc_health: Link UP
Mar 17 19:08:53.004361 systemd-networkd[1386]: lxc_health: Gained carrier
Mar 17 19:08:53.412374 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL
Mar 17 19:08:53.512773 systemd-networkd[1386]: lxc228a7a5ba0f2: Link UP
Mar 17 19:08:53.520062 kernel: eth0: renamed from tmp0e74d
Mar 17 19:08:53.541071 kernel: eth0: renamed from tmp524d6
Mar 17 19:08:53.551764 systemd-networkd[1386]: lxc228a7a5ba0f2: Gained carrier
Mar 17 19:08:53.555883 systemd-networkd[1386]: lxc5916f3624e30: Link UP
Mar 17 19:08:53.561765 systemd-networkd[1386]: lxc5916f3624e30: Gained carrier
Mar 17 19:08:54.373128 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Mar 17 19:08:54.884232 systemd-networkd[1386]: lxc5916f3624e30: Gained IPv6LL
Mar 17 19:08:55.140241 systemd-networkd[1386]: lxc228a7a5ba0f2: Gained IPv6LL
Mar 17 19:08:58.008895 containerd[1480]: time="2025-03-17T19:08:58.008761295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 19:08:58.008895 containerd[1480]: time="2025-03-17T19:08:58.008835072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 19:08:58.010141 containerd[1480]: time="2025-03-17T19:08:58.008851683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:58.011094 containerd[1480]: time="2025-03-17T19:08:58.011057221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:58.043205 systemd[1]: Started cri-containerd-0e74d157937423220a8fd40bc2467c6a43e83b435be0ce34e6c7f7ac3a58f58e.scope - libcontainer container 0e74d157937423220a8fd40bc2467c6a43e83b435be0ce34e6c7f7ac3a58f58e.
Mar 17 19:08:58.083080 containerd[1480]: time="2025-03-17T19:08:58.081378496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 19:08:58.083080 containerd[1480]: time="2025-03-17T19:08:58.081445051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 19:08:58.083080 containerd[1480]: time="2025-03-17T19:08:58.081463506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:58.083080 containerd[1480]: time="2025-03-17T19:08:58.081549786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:08:58.122056 containerd[1480]: time="2025-03-17T19:08:58.121953330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nnj9k,Uid:d9052ab5-f440-4d45-ab1a-25ff2389fe98,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e74d157937423220a8fd40bc2467c6a43e83b435be0ce34e6c7f7ac3a58f58e\""
Mar 17 19:08:58.122075 systemd[1]: Started cri-containerd-524d6ebc0df94eefc91d748f5ca3aab43cd7e4ee4f437f6954ca6c08563efec8.scope - libcontainer container 524d6ebc0df94eefc91d748f5ca3aab43cd7e4ee4f437f6954ca6c08563efec8.
Mar 17 19:08:58.127059 containerd[1480]: time="2025-03-17T19:08:58.126875660Z" level=info msg="CreateContainer within sandbox \"0e74d157937423220a8fd40bc2467c6a43e83b435be0ce34e6c7f7ac3a58f58e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 19:08:58.156049 containerd[1480]: time="2025-03-17T19:08:58.155357545Z" level=info msg="CreateContainer within sandbox \"0e74d157937423220a8fd40bc2467c6a43e83b435be0ce34e6c7f7ac3a58f58e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e26d5ac10bed27f36647980db4eae557f2056d4639cf86c15854368ccc74522c\""
Mar 17 19:08:58.156759 containerd[1480]: time="2025-03-17T19:08:58.156714815Z" level=info msg="StartContainer for \"e26d5ac10bed27f36647980db4eae557f2056d4639cf86c15854368ccc74522c\""
Mar 17 19:08:58.198480 systemd[1]: Started cri-containerd-e26d5ac10bed27f36647980db4eae557f2056d4639cf86c15854368ccc74522c.scope - libcontainer container e26d5ac10bed27f36647980db4eae557f2056d4639cf86c15854368ccc74522c.
Mar 17 19:08:58.206390 containerd[1480]: time="2025-03-17T19:08:58.205056713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n4647,Uid:d56a3500-1041-4205-a9b6-7ac5f94ad83a,Namespace:kube-system,Attempt:0,} returns sandbox id \"524d6ebc0df94eefc91d748f5ca3aab43cd7e4ee4f437f6954ca6c08563efec8\""
Mar 17 19:08:58.209658 containerd[1480]: time="2025-03-17T19:08:58.209626543Z" level=info msg="CreateContainer within sandbox \"524d6ebc0df94eefc91d748f5ca3aab43cd7e4ee4f437f6954ca6c08563efec8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 19:08:58.233804 containerd[1480]: time="2025-03-17T19:08:58.233684171Z" level=info msg="CreateContainer within sandbox \"524d6ebc0df94eefc91d748f5ca3aab43cd7e4ee4f437f6954ca6c08563efec8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eac10916546b6e036432b54362cd60b02d711ab669953d5b088e6fdeba57119a\""
Mar 17 19:08:58.235225 containerd[1480]: time="2025-03-17T19:08:58.235109096Z" level=info msg="StartContainer for \"eac10916546b6e036432b54362cd60b02d711ab669953d5b088e6fdeba57119a\""
Mar 17 19:08:58.271975 containerd[1480]: time="2025-03-17T19:08:58.270078962Z" level=info msg="StartContainer for \"e26d5ac10bed27f36647980db4eae557f2056d4639cf86c15854368ccc74522c\" returns successfully"
Mar 17 19:08:58.273688 systemd[1]: Started cri-containerd-eac10916546b6e036432b54362cd60b02d711ab669953d5b088e6fdeba57119a.scope - libcontainer container eac10916546b6e036432b54362cd60b02d711ab669953d5b088e6fdeba57119a.
Mar 17 19:08:58.316086 containerd[1480]: time="2025-03-17T19:08:58.315604349Z" level=info msg="StartContainer for \"eac10916546b6e036432b54362cd60b02d711ab669953d5b088e6fdeba57119a\" returns successfully"
Mar 17 19:08:58.377798 kubelet[2761]: I0317 19:08:58.377742    2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nnj9k" podStartSLOduration=25.377722885 podStartE2EDuration="25.377722885s" podCreationTimestamp="2025-03-17 19:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 19:08:58.37601546 +0000 UTC m=+38.419158996" watchObservedRunningTime="2025-03-17 19:08:58.377722885 +0000 UTC m=+38.420866411"
Mar 17 19:08:58.378187 kubelet[2761]: I0317 19:08:58.377839    2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n4647" podStartSLOduration=25.377833041 podStartE2EDuration="25.377833041s" podCreationTimestamp="2025-03-17 19:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 19:08:58.354911631 +0000 UTC m=+38.398055187" watchObservedRunningTime="2025-03-17 19:08:58.377833041 +0000 UTC m=+38.420976587"
Mar 17 19:09:46.027296 systemd[1]: Started sshd@9-172.24.4.57:22-172.24.4.1:49670.service - OpenSSH per-connection server daemon (172.24.4.1:49670).
Mar 17 19:09:47.455646 sshd[4130]: Accepted publickey for core from 172.24.4.1 port 49670 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:09:47.458731 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:09:47.471126 systemd-logind[1460]: New session 12 of user core.
Mar 17 19:09:47.476354 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 19:09:48.366141 sshd[4132]: Connection closed by 172.24.4.1 port 49670
Mar 17 19:09:48.367401 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
Mar 17 19:09:48.374579 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit.
Mar 17 19:09:48.376431 systemd[1]: sshd@9-172.24.4.57:22-172.24.4.1:49670.service: Deactivated successfully.
Mar 17 19:09:48.380506 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 19:09:48.384322 systemd-logind[1460]: Removed session 12.
Mar 17 19:09:53.399613 systemd[1]: Started sshd@10-172.24.4.57:22-172.24.4.1:49682.service - OpenSSH per-connection server daemon (172.24.4.1:49682).
Mar 17 19:09:54.471368 sshd[4144]: Accepted publickey for core from 172.24.4.1 port 49682 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:09:54.473977 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:09:54.487203 systemd-logind[1460]: New session 13 of user core.
Mar 17 19:09:54.492615 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 19:09:55.192135 sshd[4147]: Connection closed by 172.24.4.1 port 49682
Mar 17 19:09:55.193192 sshd-session[4144]: pam_unix(sshd:session): session closed for user core
Mar 17 19:09:55.199763 systemd[1]: sshd@10-172.24.4.57:22-172.24.4.1:49682.service: Deactivated successfully.
Mar 17 19:09:55.204768 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 19:09:55.209723 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit.
Mar 17 19:09:55.212762 systemd-logind[1460]: Removed session 13.
Mar 17 19:10:00.224377 systemd[1]: Started sshd@11-172.24.4.57:22-172.24.4.1:34444.service - OpenSSH per-connection server daemon (172.24.4.1:34444).
Mar 17 19:10:01.506891 sshd[4161]: Accepted publickey for core from 172.24.4.1 port 34444 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:01.510438 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:01.522188 systemd-logind[1460]: New session 14 of user core.
Mar 17 19:10:01.527546 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 19:10:02.275158 sshd[4163]: Connection closed by 172.24.4.1 port 34444
Mar 17 19:10:02.276456 sshd-session[4161]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:02.283568 systemd[1]: sshd@11-172.24.4.57:22-172.24.4.1:34444.service: Deactivated successfully.
Mar 17 19:10:02.287428 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 19:10:02.289906 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit.
Mar 17 19:10:02.292797 systemd-logind[1460]: Removed session 14.
Mar 17 19:10:07.300701 systemd[1]: Started sshd@12-172.24.4.57:22-172.24.4.1:53628.service - OpenSSH per-connection server daemon (172.24.4.1:53628).
Mar 17 19:10:08.674378 sshd[4177]: Accepted publickey for core from 172.24.4.1 port 53628 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:08.677730 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:08.692219 systemd-logind[1460]: New session 15 of user core.
Mar 17 19:10:08.703543 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 19:10:09.494072 sshd[4179]: Connection closed by 172.24.4.1 port 53628
Mar 17 19:10:09.495004 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:09.510374 systemd[1]: sshd@12-172.24.4.57:22-172.24.4.1:53628.service: Deactivated successfully.
Mar 17 19:10:09.514668 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 19:10:09.519230 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit.
Mar 17 19:10:09.525633 systemd[1]: Started sshd@13-172.24.4.57:22-172.24.4.1:53644.service - OpenSSH per-connection server daemon (172.24.4.1:53644).
Mar 17 19:10:09.529433 systemd-logind[1460]: Removed session 15.
Mar 17 19:10:10.866601 sshd[4191]: Accepted publickey for core from 172.24.4.1 port 53644 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:10.868869 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:10.879072 systemd-logind[1460]: New session 16 of user core.
Mar 17 19:10:10.884588 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 19:10:11.611846 sshd[4194]: Connection closed by 172.24.4.1 port 53644
Mar 17 19:10:11.610075 sshd-session[4191]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:11.633163 systemd[1]: sshd@13-172.24.4.57:22-172.24.4.1:53644.service: Deactivated successfully.
Mar 17 19:10:11.637085 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 19:10:11.642416 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit.
Mar 17 19:10:11.650701 systemd[1]: Started sshd@14-172.24.4.57:22-172.24.4.1:53650.service - OpenSSH per-connection server daemon (172.24.4.1:53650).
Mar 17 19:10:11.654145 systemd-logind[1460]: Removed session 16.
Mar 17 19:10:12.887473 sshd[4203]: Accepted publickey for core from 172.24.4.1 port 53650 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:12.890751 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:12.903992 systemd-logind[1460]: New session 17 of user core.
Mar 17 19:10:12.911491 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 19:10:13.715925 sshd[4206]: Connection closed by 172.24.4.1 port 53650
Mar 17 19:10:13.716634 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:13.719356 systemd[1]: sshd@14-172.24.4.57:22-172.24.4.1:53650.service: Deactivated successfully.
Mar 17 19:10:13.721837 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 19:10:13.724262 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit.
Mar 17 19:10:13.725735 systemd-logind[1460]: Removed session 17.
Mar 17 19:10:18.746745 systemd[1]: Started sshd@15-172.24.4.57:22-172.24.4.1:36588.service - OpenSSH per-connection server daemon (172.24.4.1:36588).
Mar 17 19:10:19.987375 sshd[4218]: Accepted publickey for core from 172.24.4.1 port 36588 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:19.990708 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:20.000889 systemd-logind[1460]: New session 18 of user core.
Mar 17 19:10:20.009640 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 19:10:20.910507 sshd[4220]: Connection closed by 172.24.4.1 port 36588
Mar 17 19:10:20.912428 sshd-session[4218]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:20.932235 systemd[1]: sshd@15-172.24.4.57:22-172.24.4.1:36588.service: Deactivated successfully.
Mar 17 19:10:20.936391 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 19:10:20.938746 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit.
Mar 17 19:10:20.950748 systemd[1]: Started sshd@16-172.24.4.57:22-172.24.4.1:36600.service - OpenSSH per-connection server daemon (172.24.4.1:36600).
Mar 17 19:10:20.954405 systemd-logind[1460]: Removed session 18.
Mar 17 19:10:22.245114 sshd[4232]: Accepted publickey for core from 172.24.4.1 port 36600 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:22.248332 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:22.260935 systemd-logind[1460]: New session 19 of user core.
Mar 17 19:10:22.273352 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 19:10:23.072872 sshd[4235]: Connection closed by 172.24.4.1 port 36600
Mar 17 19:10:23.074146 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:23.088680 systemd[1]: sshd@16-172.24.4.57:22-172.24.4.1:36600.service: Deactivated successfully.
Mar 17 19:10:23.094296 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 19:10:23.096977 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit.
Mar 17 19:10:23.105693 systemd[1]: Started sshd@17-172.24.4.57:22-172.24.4.1:36610.service - OpenSSH per-connection server daemon (172.24.4.1:36610).
Mar 17 19:10:23.109767 systemd-logind[1460]: Removed session 19.
Mar 17 19:10:24.410085 sshd[4243]: Accepted publickey for core from 172.24.4.1 port 36610 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:24.414239 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:24.426750 systemd-logind[1460]: New session 20 of user core.
Mar 17 19:10:24.431332 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 19:10:27.095041 sshd[4246]: Connection closed by 172.24.4.1 port 36610
Mar 17 19:10:27.094878 sshd-session[4243]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:27.106876 systemd[1]: sshd@17-172.24.4.57:22-172.24.4.1:36610.service: Deactivated successfully.
Mar 17 19:10:27.109859 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 19:10:27.113107 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit.
Mar 17 19:10:27.118654 systemd[1]: Started sshd@18-172.24.4.57:22-172.24.4.1:50882.service - OpenSSH per-connection server daemon (172.24.4.1:50882).
Mar 17 19:10:27.121921 systemd-logind[1460]: Removed session 20.
Mar 17 19:10:28.530873 sshd[4262]: Accepted publickey for core from 172.24.4.1 port 50882 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:28.533685 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:28.544734 systemd-logind[1460]: New session 21 of user core.
Mar 17 19:10:28.551585 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 19:10:29.486818 sshd[4265]: Connection closed by 172.24.4.1 port 50882
Mar 17 19:10:29.487943 sshd-session[4262]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:29.505841 systemd[1]: sshd@18-172.24.4.57:22-172.24.4.1:50882.service: Deactivated successfully.
Mar 17 19:10:29.509778 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 19:10:29.512431 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit.
Mar 17 19:10:29.522702 systemd[1]: Started sshd@19-172.24.4.57:22-172.24.4.1:50886.service - OpenSSH per-connection server daemon (172.24.4.1:50886).
Mar 17 19:10:29.525755 systemd-logind[1460]: Removed session 21.
Mar 17 19:10:30.654244 sshd[4274]: Accepted publickey for core from 172.24.4.1 port 50886 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:30.660550 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:30.682130 systemd-logind[1460]: New session 22 of user core.
Mar 17 19:10:30.687389 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 19:10:31.424276 sshd[4277]: Connection closed by 172.24.4.1 port 50886
Mar 17 19:10:31.425290 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:31.430353 systemd[1]: sshd@19-172.24.4.57:22-172.24.4.1:50886.service: Deactivated successfully.
Mar 17 19:10:31.434397 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 19:10:31.437796 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit.
Mar 17 19:10:31.440824 systemd-logind[1460]: Removed session 22.
Mar 17 19:10:36.452577 systemd[1]: Started sshd@20-172.24.4.57:22-172.24.4.1:49054.service - OpenSSH per-connection server daemon (172.24.4.1:49054).
Mar 17 19:10:37.830790 sshd[4295]: Accepted publickey for core from 172.24.4.1 port 49054 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:37.833911 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:37.845148 systemd-logind[1460]: New session 23 of user core.
Mar 17 19:10:37.863331 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 19:10:38.556467 sshd[4297]: Connection closed by 172.24.4.1 port 49054
Mar 17 19:10:38.557539 sshd-session[4295]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:38.564875 systemd[1]: sshd@20-172.24.4.57:22-172.24.4.1:49054.service: Deactivated successfully.
Mar 17 19:10:38.569483 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 19:10:38.572225 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit.
Mar 17 19:10:38.574803 systemd-logind[1460]: Removed session 23.
Mar 17 19:10:43.583625 systemd[1]: Started sshd@21-172.24.4.57:22-172.24.4.1:52594.service - OpenSSH per-connection server daemon (172.24.4.1:52594).
Mar 17 19:10:44.959303 sshd[4308]: Accepted publickey for core from 172.24.4.1 port 52594 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:44.961621 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:44.972139 systemd-logind[1460]: New session 24 of user core.
Mar 17 19:10:44.979384 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 19:10:45.748127 sshd[4310]: Connection closed by 172.24.4.1 port 52594
Mar 17 19:10:45.747849 sshd-session[4308]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:45.753640 systemd[1]: sshd@21-172.24.4.57:22-172.24.4.1:52594.service: Deactivated successfully.
Mar 17 19:10:45.758348 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 19:10:45.763380 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit.
Mar 17 19:10:45.765731 systemd-logind[1460]: Removed session 24.
Mar 17 19:10:50.776602 systemd[1]: Started sshd@22-172.24.4.57:22-172.24.4.1:52598.service - OpenSSH per-connection server daemon (172.24.4.1:52598).
Mar 17 19:10:52.137728 sshd[4321]: Accepted publickey for core from 172.24.4.1 port 52598 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:52.140687 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:52.152731 systemd-logind[1460]: New session 25 of user core.
Mar 17 19:10:52.165409 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 17 19:10:52.938131 sshd[4323]: Connection closed by 172.24.4.1 port 52598
Mar 17 19:10:52.940849 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:52.958148 systemd[1]: sshd@22-172.24.4.57:22-172.24.4.1:52598.service: Deactivated successfully.
Mar 17 19:10:52.962081 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 19:10:52.964911 systemd-logind[1460]: Session 25 logged out. Waiting for processes to exit.
Mar 17 19:10:52.972654 systemd[1]: Started sshd@23-172.24.4.57:22-172.24.4.1:52614.service - OpenSSH per-connection server daemon (172.24.4.1:52614).
Mar 17 19:10:52.976816 systemd-logind[1460]: Removed session 25.
Mar 17 19:10:54.167693 sshd[4334]: Accepted publickey for core from 172.24.4.1 port 52614 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:54.170482 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:54.183611 systemd-logind[1460]: New session 26 of user core.
Mar 17 19:10:54.190332 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 17 19:10:56.119078 systemd[1]: run-containerd-runc-k8s.io-cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da-runc.6KmFES.mount: Deactivated successfully.
Mar 17 19:10:56.119892 containerd[1480]: time="2025-03-17T19:10:56.119836888Z" level=info msg="StopContainer for \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\" with timeout 30 (s)"
Mar 17 19:10:56.123247 containerd[1480]: time="2025-03-17T19:10:56.123191319Z" level=info msg="Stop container \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\" with signal terminated"
Mar 17 19:10:56.136474 systemd[1]: cri-containerd-baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1.scope: Deactivated successfully.
Mar 17 19:10:56.138144 containerd[1480]: time="2025-03-17T19:10:56.137812437Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 19:10:56.149710 containerd[1480]: time="2025-03-17T19:10:56.149679633Z" level=info msg="StopContainer for \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\" with timeout 2 (s)"
Mar 17 19:10:56.150890 containerd[1480]: time="2025-03-17T19:10:56.150713377Z" level=info msg="Stop container \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\" with signal terminated"
Mar 17 19:10:56.160839 systemd-networkd[1386]: lxc_health: Link DOWN
Mar 17 19:10:56.160849 systemd-networkd[1386]: lxc_health: Lost carrier
Mar 17 19:10:56.175966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1-rootfs.mount: Deactivated successfully.
Mar 17 19:10:56.178432 systemd[1]: cri-containerd-cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da.scope: Deactivated successfully.
Mar 17 19:10:56.181134 systemd[1]: cri-containerd-cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da.scope: Consumed 8.635s CPU time, 125.3M memory peak, 144K read from disk, 13.3M written to disk.
Mar 17 19:10:56.197279 containerd[1480]: time="2025-03-17T19:10:56.196721375Z" level=info msg="shim disconnected" id=baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1 namespace=k8s.io
Mar 17 19:10:56.197279 containerd[1480]: time="2025-03-17T19:10:56.196839077Z" level=warning msg="cleaning up after shim disconnected" id=baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1 namespace=k8s.io
Mar 17 19:10:56.197279 containerd[1480]: time="2025-03-17T19:10:56.196865156Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:10:56.210668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da-rootfs.mount: Deactivated successfully.
Mar 17 19:10:56.215122 containerd[1480]: time="2025-03-17T19:10:56.214778647Z" level=info msg="shim disconnected" id=cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da namespace=k8s.io
Mar 17 19:10:56.215396 containerd[1480]: time="2025-03-17T19:10:56.215287424Z" level=warning msg="cleaning up after shim disconnected" id=cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da namespace=k8s.io
Mar 17 19:10:56.215396 containerd[1480]: time="2025-03-17T19:10:56.215327329Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:10:56.231793 containerd[1480]: time="2025-03-17T19:10:56.231739105Z" level=info msg="StopContainer for \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\" returns successfully"
Mar 17 19:10:56.232996 containerd[1480]: time="2025-03-17T19:10:56.232953550Z" level=info msg="StopPodSandbox for \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\""
Mar 17 19:10:56.233182 containerd[1480]: time="2025-03-17T19:10:56.232991382Z" level=info msg="Container to stop \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 19:10:56.236343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284-shm.mount: Deactivated successfully.
Mar 17 19:10:56.247665 systemd[1]: cri-containerd-b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284.scope: Deactivated successfully.
Mar 17 19:10:56.266301 containerd[1480]: time="2025-03-17T19:10:56.266254261Z" level=info msg="StopContainer for \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\" returns successfully"
Mar 17 19:10:56.266885 containerd[1480]: time="2025-03-17T19:10:56.266856524Z" level=info msg="StopPodSandbox for \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\""
Mar 17 19:10:56.267243 containerd[1480]: time="2025-03-17T19:10:56.266980547Z" level=info msg="Container to stop \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 19:10:56.267243 containerd[1480]: time="2025-03-17T19:10:56.267083791Z" level=info msg="Container to stop \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 19:10:56.267243 containerd[1480]: time="2025-03-17T19:10:56.267104751Z" level=info msg="Container to stop \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 19:10:56.267243 containerd[1480]: time="2025-03-17T19:10:56.267116022Z" level=info msg="Container to stop \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 19:10:56.267243 containerd[1480]: time="2025-03-17T19:10:56.267126702Z" level=info msg="Container to stop \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 19:10:56.279150 systemd[1]:
cri-containerd-e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd.scope: Deactivated successfully. Mar 17 19:10:56.304407 containerd[1480]: time="2025-03-17T19:10:56.304250144Z" level=info msg="shim disconnected" id=e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd namespace=k8s.io Mar 17 19:10:56.304815 containerd[1480]: time="2025-03-17T19:10:56.304550159Z" level=info msg="shim disconnected" id=b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284 namespace=k8s.io Mar 17 19:10:56.304815 containerd[1480]: time="2025-03-17T19:10:56.304592829Z" level=warning msg="cleaning up after shim disconnected" id=b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284 namespace=k8s.io Mar 17 19:10:56.304815 containerd[1480]: time="2025-03-17T19:10:56.304602707Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 19:10:56.304952 containerd[1480]: time="2025-03-17T19:10:56.304930324Z" level=warning msg="cleaning up after shim disconnected" id=e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd namespace=k8s.io Mar 17 19:10:56.305283 containerd[1480]: time="2025-03-17T19:10:56.305016546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 19:10:56.327384 containerd[1480]: time="2025-03-17T19:10:56.327345355Z" level=info msg="TearDown network for sandbox \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\" successfully" Mar 17 19:10:56.327691 containerd[1480]: time="2025-03-17T19:10:56.327566460Z" level=info msg="StopPodSandbox for \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\" returns successfully" Mar 17 19:10:56.329001 containerd[1480]: time="2025-03-17T19:10:56.328634389Z" level=info msg="TearDown network for sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" successfully" Mar 17 19:10:56.329001 containerd[1480]: time="2025-03-17T19:10:56.328864483Z" level=info msg="StopPodSandbox for 
\"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" returns successfully" Mar 17 19:10:56.398138 kubelet[2761]: I0317 19:10:56.397632 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-host-proc-sys-kernel\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398138 kubelet[2761]: I0317 19:10:56.397684 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-xtables-lock\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398138 kubelet[2761]: I0317 19:10:56.397705 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-cgroup\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398138 kubelet[2761]: I0317 19:10:56.397738 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-host-proc-sys-net\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398138 kubelet[2761]: I0317 19:10:56.397767 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r7jm\" (UniqueName: \"kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-kube-api-access-4r7jm\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398138 kubelet[2761]: I0317 19:10:56.397787 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8j9w6\" (UniqueName: \"kubernetes.io/projected/812fc024-a846-49c3-b983-24c834ccdc65-kube-api-access-8j9w6\") pod \"812fc024-a846-49c3-b983-24c834ccdc65\" (UID: \"812fc024-a846-49c3-b983-24c834ccdc65\") " Mar 17 19:10:56.398613 kubelet[2761]: I0317 19:10:56.397811 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-hostproc\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398613 kubelet[2761]: I0317 19:10:56.397827 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-bpf-maps\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398613 kubelet[2761]: I0317 19:10:56.397844 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-run\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398613 kubelet[2761]: I0317 19:10:56.397865 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-config-path\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398613 kubelet[2761]: I0317 19:10:56.397883 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cni-path\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398613 kubelet[2761]: 
I0317 19:10:56.397899 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-lib-modules\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398773 kubelet[2761]: I0317 19:10:56.397877 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.398773 kubelet[2761]: I0317 19:10:56.397953 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.398773 kubelet[2761]: I0317 19:10:56.397979 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.398773 kubelet[2761]: I0317 19:10:56.398000 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.398773 kubelet[2761]: I0317 19:10:56.398014 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.398911 kubelet[2761]: I0317 19:10:56.397916 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-etc-cni-netd\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398911 kubelet[2761]: I0317 19:10:56.398164 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-clustermesh-secrets\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398911 kubelet[2761]: I0317 19:10:56.398222 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/812fc024-a846-49c3-b983-24c834ccdc65-cilium-config-path\") pod \"812fc024-a846-49c3-b983-24c834ccdc65\" (UID: \"812fc024-a846-49c3-b983-24c834ccdc65\") " Mar 17 19:10:56.398911 kubelet[2761]: I0317 19:10:56.398271 2761 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-hubble-tls\") pod \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\" (UID: \"c0e4e2f4-a265-4c49-a5e9-b8b792b9317e\") " Mar 17 19:10:56.398911 kubelet[2761]: 
I0317 19:10:56.398350 2761 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-host-proc-sys-kernel\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.398911 kubelet[2761]: I0317 19:10:56.398380 2761 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-xtables-lock\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.402107 kubelet[2761]: I0317 19:10:56.398406 2761 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-cgroup\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.402107 kubelet[2761]: I0317 19:10:56.398432 2761 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-host-proc-sys-net\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.402107 kubelet[2761]: I0317 19:10:56.398456 2761 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-etc-cni-netd\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.402107 kubelet[2761]: I0317 19:10:56.400085 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-hostproc" (OuterVolumeSpecName: "hostproc") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.402107 kubelet[2761]: I0317 19:10:56.400117 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.402107 kubelet[2761]: I0317 19:10:56.400134 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.408170 kubelet[2761]: I0317 19:10:56.406566 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 19:10:56.408379 kubelet[2761]: I0317 19:10:56.407623 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cni-path" (OuterVolumeSpecName: "cni-path") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.408445 kubelet[2761]: I0317 19:10:56.408260 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 19:10:56.411819 kubelet[2761]: I0317 19:10:56.411755 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812fc024-a846-49c3-b983-24c834ccdc65-kube-api-access-8j9w6" (OuterVolumeSpecName: "kube-api-access-8j9w6") pod "812fc024-a846-49c3-b983-24c834ccdc65" (UID: "812fc024-a846-49c3-b983-24c834ccdc65"). InnerVolumeSpecName "kube-api-access-8j9w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 19:10:56.411999 kubelet[2761]: I0317 19:10:56.411956 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 19:10:56.413698 kubelet[2761]: I0317 19:10:56.413669 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812fc024-a846-49c3-b983-24c834ccdc65-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "812fc024-a846-49c3-b983-24c834ccdc65" (UID: "812fc024-a846-49c3-b983-24c834ccdc65"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 19:10:56.416792 kubelet[2761]: I0317 19:10:56.416764 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 19:10:56.417794 kubelet[2761]: I0317 19:10:56.417771 2761 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-kube-api-access-4r7jm" (OuterVolumeSpecName: "kube-api-access-4r7jm") pod "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" (UID: "c0e4e2f4-a265-4c49-a5e9-b8b792b9317e"). InnerVolumeSpecName "kube-api-access-4r7jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 19:10:56.499266 kubelet[2761]: I0317 19:10:56.499189 2761 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4r7jm\" (UniqueName: \"kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-kube-api-access-4r7jm\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499577 kubelet[2761]: I0317 19:10:56.499424 2761 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8j9w6\" (UniqueName: \"kubernetes.io/projected/812fc024-a846-49c3-b983-24c834ccdc65-kube-api-access-8j9w6\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499577 kubelet[2761]: I0317 19:10:56.499444 2761 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-hostproc\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499577 kubelet[2761]: I0317 19:10:56.499457 2761 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-bpf-maps\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499577 kubelet[2761]: I0317 19:10:56.499471 2761 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-run\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499577 kubelet[2761]: I0317 19:10:56.499484 2761 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cilium-config-path\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499577 kubelet[2761]: I0317 19:10:56.499515 2761 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-cni-path\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499577 kubelet[2761]: I0317 19:10:56.499525 2761 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-lib-modules\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499782 kubelet[2761]: I0317 19:10:56.499535 2761 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-clustermesh-secrets\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499782 kubelet[2761]: I0317 19:10:56.499548 2761 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/812fc024-a846-49c3-b983-24c834ccdc65-cilium-config-path\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.499782 kubelet[2761]: I0317 19:10:56.499560 2761 
reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e-hubble-tls\") on node \"ci-4230-1-0-c-fc9f5e1ee2.novalocal\" DevicePath \"\"" Mar 17 19:10:56.708144 kubelet[2761]: I0317 19:10:56.707929 2761 scope.go:117] "RemoveContainer" containerID="cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da" Mar 17 19:10:56.712689 containerd[1480]: time="2025-03-17T19:10:56.712426417Z" level=info msg="RemoveContainer for \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\"" Mar 17 19:10:56.715815 systemd[1]: Removed slice kubepods-burstable-podc0e4e2f4_a265_4c49_a5e9_b8b792b9317e.slice - libcontainer container kubepods-burstable-podc0e4e2f4_a265_4c49_a5e9_b8b792b9317e.slice. Mar 17 19:10:56.716121 systemd[1]: kubepods-burstable-podc0e4e2f4_a265_4c49_a5e9_b8b792b9317e.slice: Consumed 8.714s CPU time, 125.8M memory peak, 144K read from disk, 13.3M written to disk. Mar 17 19:10:56.725070 systemd[1]: Removed slice kubepods-besteffort-pod812fc024_a846_49c3_b983_24c834ccdc65.slice - libcontainer container kubepods-besteffort-pod812fc024_a846_49c3_b983_24c834ccdc65.slice. 
Mar 17 19:10:56.741397 containerd[1480]: time="2025-03-17T19:10:56.741184931Z" level=info msg="RemoveContainer for \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\" returns successfully" Mar 17 19:10:56.742229 kubelet[2761]: I0317 19:10:56.741846 2761 scope.go:117] "RemoveContainer" containerID="fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d" Mar 17 19:10:56.744714 containerd[1480]: time="2025-03-17T19:10:56.744684395Z" level=info msg="RemoveContainer for \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\"" Mar 17 19:10:56.754146 containerd[1480]: time="2025-03-17T19:10:56.753998850Z" level=info msg="RemoveContainer for \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\" returns successfully" Mar 17 19:10:56.755095 kubelet[2761]: I0317 19:10:56.754814 2761 scope.go:117] "RemoveContainer" containerID="50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f" Mar 17 19:10:56.757206 containerd[1480]: time="2025-03-17T19:10:56.757146251Z" level=info msg="RemoveContainer for \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\"" Mar 17 19:10:56.762363 containerd[1480]: time="2025-03-17T19:10:56.762071377Z" level=info msg="RemoveContainer for \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\" returns successfully" Mar 17 19:10:56.763312 kubelet[2761]: I0317 19:10:56.762956 2761 scope.go:117] "RemoveContainer" containerID="e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf" Mar 17 19:10:56.766481 containerd[1480]: time="2025-03-17T19:10:56.766436920Z" level=info msg="RemoveContainer for \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\"" Mar 17 19:10:56.771283 containerd[1480]: time="2025-03-17T19:10:56.771203327Z" level=info msg="RemoveContainer for \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\" returns successfully" Mar 17 19:10:56.771559 kubelet[2761]: I0317 19:10:56.771441 2761 scope.go:117] 
"RemoveContainer" containerID="12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19" Mar 17 19:10:56.773344 containerd[1480]: time="2025-03-17T19:10:56.773196897Z" level=info msg="RemoveContainer for \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\"" Mar 17 19:10:56.777126 containerd[1480]: time="2025-03-17T19:10:56.777013238Z" level=info msg="RemoveContainer for \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\" returns successfully" Mar 17 19:10:56.777849 kubelet[2761]: I0317 19:10:56.777710 2761 scope.go:117] "RemoveContainer" containerID="cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da" Mar 17 19:10:56.778455 containerd[1480]: time="2025-03-17T19:10:56.778397692Z" level=error msg="ContainerStatus for \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\": not found" Mar 17 19:10:56.778936 kubelet[2761]: E0317 19:10:56.778767 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\": not found" containerID="cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da" Mar 17 19:10:56.778936 kubelet[2761]: I0317 19:10:56.778805 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da"} err="failed to get container status \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbd1cdc55f42070c422f9408bebd2305731fbcf41d8ae5060cac4df9b00b91da\": not found" Mar 17 19:10:56.778936 kubelet[2761]: I0317 19:10:56.778904 2761 scope.go:117] "RemoveContainer" 
containerID="fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d" Mar 17 19:10:56.779572 containerd[1480]: time="2025-03-17T19:10:56.779342409Z" level=error msg="ContainerStatus for \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\": not found" Mar 17 19:10:56.779626 kubelet[2761]: E0317 19:10:56.779450 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\": not found" containerID="fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d" Mar 17 19:10:56.779626 kubelet[2761]: I0317 19:10:56.779485 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d"} err="failed to get container status \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbae4c85f4eca039a8477ebf74980a71e4d1e8aaa05f03297f4026b692ebe86d\": not found" Mar 17 19:10:56.779626 kubelet[2761]: I0317 19:10:56.779525 2761 scope.go:117] "RemoveContainer" containerID="50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f" Mar 17 19:10:56.780170 containerd[1480]: time="2025-03-17T19:10:56.779877175Z" level=error msg="ContainerStatus for \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\": not found" Mar 17 19:10:56.780221 kubelet[2761]: E0317 19:10:56.780071 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\": not found" containerID="50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f"
Mar 17 19:10:56.780221 kubelet[2761]: I0317 19:10:56.780094 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f"} err="failed to get container status \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"50e563468d4fdc8ddc3e694c9f00c02c46f74656e9420f22e5608981c7663c5f\": not found"
Mar 17 19:10:56.780221 kubelet[2761]: I0317 19:10:56.780109 2761 scope.go:117] "RemoveContainer" containerID="e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf"
Mar 17 19:10:56.780546 containerd[1480]: time="2025-03-17T19:10:56.780464681Z" level=error msg="ContainerStatus for \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\": not found"
Mar 17 19:10:56.780805 kubelet[2761]: E0317 19:10:56.780750 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\": not found" containerID="e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf"
Mar 17 19:10:56.780886 kubelet[2761]: I0317 19:10:56.780826 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf"} err="failed to get container status \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\": rpc error: code = NotFound desc = an error occurred when try to find container \"e181d381c20bd5904b9e09234f8fc1356eb8188a42be55ea41f0ace1913fbdaf\": not found"
Mar 17 19:10:56.780928 kubelet[2761]: I0317 19:10:56.780892 2761 scope.go:117] "RemoveContainer" containerID="12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19"
Mar 17 19:10:56.781172 containerd[1480]: time="2025-03-17T19:10:56.781149038Z" level=error msg="ContainerStatus for \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\": not found"
Mar 17 19:10:56.781786 kubelet[2761]: E0317 19:10:56.781482 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\": not found" containerID="12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19"
Mar 17 19:10:56.781786 kubelet[2761]: I0317 19:10:56.781528 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19"} err="failed to get container status \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\": rpc error: code = NotFound desc = an error occurred when try to find container \"12310157cad92c06d8b3c5967509daa2a11ca03fa3e5c479ddf8707b8749af19\": not found"
Mar 17 19:10:56.781786 kubelet[2761]: I0317 19:10:56.781549 2761 scope.go:117] "RemoveContainer" containerID="baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1"
Mar 17 19:10:56.783047 containerd[1480]: time="2025-03-17T19:10:56.782920331Z" level=info msg="RemoveContainer for \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\""
Mar 17 19:10:56.786922 containerd[1480]: time="2025-03-17T19:10:56.786796012Z" level=info msg="RemoveContainer for \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\" returns successfully"
Mar 17 19:10:56.787162 kubelet[2761]: I0317 19:10:56.787122 2761 scope.go:117] "RemoveContainer" containerID="baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1"
Mar 17 19:10:56.787549 containerd[1480]: time="2025-03-17T19:10:56.787522138Z" level=error msg="ContainerStatus for \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\": not found"
Mar 17 19:10:56.788004 kubelet[2761]: E0317 19:10:56.787938 2761 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\": not found" containerID="baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1"
Mar 17 19:10:56.788004 kubelet[2761]: I0317 19:10:56.787975 2761 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1"} err="failed to get container status \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"baff868c99c7bd4bb361808836e038c4769b399921736f58dc26e7a83269d4c1\": not found"
Mar 17 19:10:57.111290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd-rootfs.mount: Deactivated successfully.
Mar 17 19:10:57.111529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd-shm.mount: Deactivated successfully.
Mar 17 19:10:57.111706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284-rootfs.mount: Deactivated successfully.
Mar 17 19:10:57.111862 systemd[1]: var-lib-kubelet-pods-c0e4e2f4\x2da265\x2d4c49\x2da5e9\x2db8b792b9317e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4r7jm.mount: Deactivated successfully.
Mar 17 19:10:57.112070 systemd[1]: var-lib-kubelet-pods-812fc024\x2da846\x2d49c3\x2db983\x2d24c834ccdc65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8j9w6.mount: Deactivated successfully.
Mar 17 19:10:57.112238 systemd[1]: var-lib-kubelet-pods-c0e4e2f4\x2da265\x2d4c49\x2da5e9\x2db8b792b9317e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 19:10:57.112385 systemd[1]: var-lib-kubelet-pods-c0e4e2f4\x2da265\x2d4c49\x2da5e9\x2db8b792b9317e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 19:10:58.066426 kubelet[2761]: I0317 19:10:58.066336 2761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812fc024-a846-49c3-b983-24c834ccdc65" path="/var/lib/kubelet/pods/812fc024-a846-49c3-b983-24c834ccdc65/volumes"
Mar 17 19:10:58.067462 kubelet[2761]: I0317 19:10:58.067395 2761 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" path="/var/lib/kubelet/pods/c0e4e2f4-a265-4c49-a5e9-b8b792b9317e/volumes"
Mar 17 19:10:58.264067 sshd[4337]: Connection closed by 172.24.4.1 port 52614
Mar 17 19:10:58.265102 sshd-session[4334]: pam_unix(sshd:session): session closed for user core
Mar 17 19:10:58.281276 systemd[1]: sshd@23-172.24.4.57:22-172.24.4.1:52614.service: Deactivated successfully.
Mar 17 19:10:58.288239 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 19:10:58.290560 systemd-logind[1460]: Session 26 logged out. Waiting for processes to exit.
Mar 17 19:10:58.298820 systemd[1]: Started sshd@24-172.24.4.57:22-172.24.4.1:43212.service - OpenSSH per-connection server daemon (172.24.4.1:43212).
Mar 17 19:10:58.302980 systemd-logind[1460]: Removed session 26.
Mar 17 19:10:59.612814 sshd[4498]: Accepted publickey for core from 172.24.4.1 port 43212 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:10:59.615669 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:10:59.627637 systemd-logind[1460]: New session 27 of user core.
Mar 17 19:10:59.637365 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 17 19:11:00.207392 kubelet[2761]: E0317 19:11:00.207084 2761 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 19:11:00.826659 kubelet[2761]: I0317 19:11:00.825623 2761 topology_manager.go:215] "Topology Admit Handler" podUID="af51ddfd-44c4-41c6-ac10-11ad7bd58b66" podNamespace="kube-system" podName="cilium-g68p7"
Mar 17 19:11:00.826659 kubelet[2761]: E0317 19:11:00.825685 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" containerName="mount-bpf-fs"
Mar 17 19:11:00.826659 kubelet[2761]: E0317 19:11:00.825698 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" containerName="clean-cilium-state"
Mar 17 19:11:00.826659 kubelet[2761]: E0317 19:11:00.825707 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="812fc024-a846-49c3-b983-24c834ccdc65" containerName="cilium-operator"
Mar 17 19:11:00.826659 kubelet[2761]: E0317 19:11:00.825714 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" containerName="mount-cgroup"
Mar 17 19:11:00.826659 kubelet[2761]: E0317 19:11:00.825721 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" containerName="apply-sysctl-overwrites"
Mar 17 19:11:00.826659 kubelet[2761]: E0317 19:11:00.825728 2761 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" containerName="cilium-agent"
Mar 17 19:11:00.826659 kubelet[2761]: I0317 19:11:00.825756 2761 memory_manager.go:354] "RemoveStaleState removing state" podUID="812fc024-a846-49c3-b983-24c834ccdc65" containerName="cilium-operator"
Mar 17 19:11:00.826659 kubelet[2761]: I0317 19:11:00.825764 2761 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e4e2f4-a265-4c49-a5e9-b8b792b9317e" containerName="cilium-agent"
Mar 17 19:11:00.833847 systemd[1]: Created slice kubepods-burstable-podaf51ddfd_44c4_41c6_ac10_11ad7bd58b66.slice - libcontainer container kubepods-burstable-podaf51ddfd_44c4_41c6_ac10_11ad7bd58b66.slice.
Mar 17 19:11:00.929377 kubelet[2761]: I0317 19:11:00.929340 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkvc\" (UniqueName: \"kubernetes.io/projected/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-kube-api-access-9pkvc\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929579 kubelet[2761]: I0317 19:11:00.929531 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-host-proc-sys-kernel\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929697 kubelet[2761]: I0317 19:11:00.929562 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-hubble-tls\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929815 kubelet[2761]: I0317 19:11:00.929734 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-bpf-maps\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929815 kubelet[2761]: I0317 19:11:00.929759 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-hostproc\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929815 kubelet[2761]: I0317 19:11:00.929779 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-cilium-run\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929815 kubelet[2761]: I0317 19:11:00.929797 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-cilium-cgroup\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929920 kubelet[2761]: I0317 19:11:00.929816 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-etc-cni-netd\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929920 kubelet[2761]: I0317 19:11:00.929835 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-cni-path\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929920 kubelet[2761]: I0317 19:11:00.929853 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-lib-modules\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929920 kubelet[2761]: I0317 19:11:00.929869 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-cilium-ipsec-secrets\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929920 kubelet[2761]: I0317 19:11:00.929886 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-clustermesh-secrets\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.929920 kubelet[2761]: I0317 19:11:00.929902 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-xtables-lock\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.930104 kubelet[2761]: I0317 19:11:00.929920 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-cilium-config-path\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:00.930104 kubelet[2761]: I0317 19:11:00.929937 2761 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af51ddfd-44c4-41c6-ac10-11ad7bd58b66-host-proc-sys-net\") pod \"cilium-g68p7\" (UID: \"af51ddfd-44c4-41c6-ac10-11ad7bd58b66\") " pod="kube-system/cilium-g68p7"
Mar 17 19:11:01.002749 sshd[4501]: Connection closed by 172.24.4.1 port 43212
Mar 17 19:11:01.003944 sshd-session[4498]: pam_unix(sshd:session): session closed for user core
Mar 17 19:11:01.017356 systemd[1]: Started sshd@25-172.24.4.57:22-172.24.4.1:43214.service - OpenSSH per-connection server daemon (172.24.4.1:43214).
Mar 17 19:11:01.017817 systemd[1]: sshd@24-172.24.4.57:22-172.24.4.1:43212.service: Deactivated successfully.
Mar 17 19:11:01.027403 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 19:11:01.034725 systemd-logind[1460]: Session 27 logged out. Waiting for processes to exit.
Mar 17 19:11:01.042530 systemd-logind[1460]: Removed session 27.
Mar 17 19:11:01.137807 containerd[1480]: time="2025-03-17T19:11:01.137637956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g68p7,Uid:af51ddfd-44c4-41c6-ac10-11ad7bd58b66,Namespace:kube-system,Attempt:0,}"
Mar 17 19:11:01.167328 containerd[1480]: time="2025-03-17T19:11:01.166094751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 19:11:01.167582 containerd[1480]: time="2025-03-17T19:11:01.167547944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 19:11:01.167955 containerd[1480]: time="2025-03-17T19:11:01.167605422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:11:01.167955 containerd[1480]: time="2025-03-17T19:11:01.167867094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 19:11:01.189352 systemd[1]: Started cri-containerd-3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54.scope - libcontainer container 3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54.
Mar 17 19:11:01.228432 containerd[1480]: time="2025-03-17T19:11:01.228375535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g68p7,Uid:af51ddfd-44c4-41c6-ac10-11ad7bd58b66,Namespace:kube-system,Attempt:0,} returns sandbox id \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\""
Mar 17 19:11:01.235083 containerd[1480]: time="2025-03-17T19:11:01.234632874Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 19:11:01.250355 containerd[1480]: time="2025-03-17T19:11:01.250298920Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f1b9524ad71e76d81ec044a49fda34f436b45a51c8a3ea741f388dcb395c735\""
Mar 17 19:11:01.251947 containerd[1480]: time="2025-03-17T19:11:01.250987215Z" level=info msg="StartContainer for \"8f1b9524ad71e76d81ec044a49fda34f436b45a51c8a3ea741f388dcb395c735\""
Mar 17 19:11:01.276188 systemd[1]: Started cri-containerd-8f1b9524ad71e76d81ec044a49fda34f436b45a51c8a3ea741f388dcb395c735.scope - libcontainer container 8f1b9524ad71e76d81ec044a49fda34f436b45a51c8a3ea741f388dcb395c735.
Mar 17 19:11:01.304676 containerd[1480]: time="2025-03-17T19:11:01.304596198Z" level=info msg="StartContainer for \"8f1b9524ad71e76d81ec044a49fda34f436b45a51c8a3ea741f388dcb395c735\" returns successfully"
Mar 17 19:11:01.312801 systemd[1]: cri-containerd-8f1b9524ad71e76d81ec044a49fda34f436b45a51c8a3ea741f388dcb395c735.scope: Deactivated successfully.
Mar 17 19:11:01.357962 containerd[1480]: time="2025-03-17T19:11:01.357797875Z" level=info msg="shim disconnected" id=8f1b9524ad71e76d81ec044a49fda34f436b45a51c8a3ea741f388dcb395c735 namespace=k8s.io
Mar 17 19:11:01.357962 containerd[1480]: time="2025-03-17T19:11:01.357846456Z" level=warning msg="cleaning up after shim disconnected" id=8f1b9524ad71e76d81ec044a49fda34f436b45a51c8a3ea741f388dcb395c735 namespace=k8s.io
Mar 17 19:11:01.357962 containerd[1480]: time="2025-03-17T19:11:01.357856615Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:11:01.749834 containerd[1480]: time="2025-03-17T19:11:01.748410007Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 19:11:01.774087 containerd[1480]: time="2025-03-17T19:11:01.773928835Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eb37f400db5323b3627e387a26d72f00277c61c26988793bbc85311c1308c24e\""
Mar 17 19:11:01.776437 containerd[1480]: time="2025-03-17T19:11:01.776357174Z" level=info msg="StartContainer for \"eb37f400db5323b3627e387a26d72f00277c61c26988793bbc85311c1308c24e\""
Mar 17 19:11:01.831629 systemd[1]: Started cri-containerd-eb37f400db5323b3627e387a26d72f00277c61c26988793bbc85311c1308c24e.scope - libcontainer container eb37f400db5323b3627e387a26d72f00277c61c26988793bbc85311c1308c24e.
Mar 17 19:11:01.880456 containerd[1480]: time="2025-03-17T19:11:01.880306852Z" level=info msg="StartContainer for \"eb37f400db5323b3627e387a26d72f00277c61c26988793bbc85311c1308c24e\" returns successfully"
Mar 17 19:11:01.883140 systemd[1]: cri-containerd-eb37f400db5323b3627e387a26d72f00277c61c26988793bbc85311c1308c24e.scope: Deactivated successfully.
Mar 17 19:11:01.908862 containerd[1480]: time="2025-03-17T19:11:01.908642899Z" level=info msg="shim disconnected" id=eb37f400db5323b3627e387a26d72f00277c61c26988793bbc85311c1308c24e namespace=k8s.io
Mar 17 19:11:01.908862 containerd[1480]: time="2025-03-17T19:11:01.908706068Z" level=warning msg="cleaning up after shim disconnected" id=eb37f400db5323b3627e387a26d72f00277c61c26988793bbc85311c1308c24e namespace=k8s.io
Mar 17 19:11:01.908862 containerd[1480]: time="2025-03-17T19:11:01.908721958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:11:01.919814 containerd[1480]: time="2025-03-17T19:11:01.919706980Z" level=warning msg="cleanup warnings time=\"2025-03-17T19:11:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 19:11:02.357639 sshd[4509]: Accepted publickey for core from 172.24.4.1 port 43214 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:11:02.360437 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:11:02.369897 systemd-logind[1460]: New session 28 of user core.
Mar 17 19:11:02.374307 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 17 19:11:02.757333 containerd[1480]: time="2025-03-17T19:11:02.755837494Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 19:11:02.796818 containerd[1480]: time="2025-03-17T19:11:02.794876880Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00\""
Mar 17 19:11:02.800787 containerd[1480]: time="2025-03-17T19:11:02.800507169Z" level=info msg="StartContainer for \"ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00\""
Mar 17 19:11:02.858199 systemd[1]: Started cri-containerd-ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00.scope - libcontainer container ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00.
Mar 17 19:11:02.887286 systemd[1]: cri-containerd-ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00.scope: Deactivated successfully.
Mar 17 19:11:02.892517 containerd[1480]: time="2025-03-17T19:11:02.892448033Z" level=info msg="StartContainer for \"ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00\" returns successfully"
Mar 17 19:11:02.898390 sshd[4680]: Connection closed by 172.24.4.1 port 43214
Mar 17 19:11:02.898798 sshd-session[4509]: pam_unix(sshd:session): session closed for user core
Mar 17 19:11:02.908547 systemd[1]: sshd@25-172.24.4.57:22-172.24.4.1:43214.service: Deactivated successfully.
Mar 17 19:11:02.911565 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 19:11:02.914865 systemd-logind[1460]: Session 28 logged out. Waiting for processes to exit.
Mar 17 19:11:02.922296 systemd[1]: Started sshd@26-172.24.4.57:22-172.24.4.1:43230.service - OpenSSH per-connection server daemon (172.24.4.1:43230).
Mar 17 19:11:02.924303 systemd-logind[1460]: Removed session 28.
Mar 17 19:11:02.935839 containerd[1480]: time="2025-03-17T19:11:02.935664165Z" level=info msg="shim disconnected" id=ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00 namespace=k8s.io
Mar 17 19:11:02.935839 containerd[1480]: time="2025-03-17T19:11:02.935740087Z" level=warning msg="cleaning up after shim disconnected" id=ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00 namespace=k8s.io
Mar 17 19:11:02.935839 containerd[1480]: time="2025-03-17T19:11:02.935750908Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:11:03.044378 systemd[1]: run-containerd-runc-k8s.io-ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00-runc.fyN5lR.mount: Deactivated successfully.
Mar 17 19:11:03.044498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddfb1dbe20407efa2ff9f574dd5aab41938a9f40f04b798877a267bfc9a84b00-rootfs.mount: Deactivated successfully.
Mar 17 19:11:03.396958 kubelet[2761]: I0317 19:11:03.396808 2761 setters.go:580] "Node became not ready" node="ci-4230-1-0-c-fc9f5e1ee2.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T19:11:03Z","lastTransitionTime":"2025-03-17T19:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 19:11:03.768107 containerd[1480]: time="2025-03-17T19:11:03.767441308Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 19:11:03.820997 containerd[1480]: time="2025-03-17T19:11:03.820689063Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b935a9cf4742c000981b1354729a04bcbdd3d927f89f25c91440a2620e999e15\""
Mar 17 19:11:03.826097 containerd[1480]: time="2025-03-17T19:11:03.824787350Z" level=info msg="StartContainer for \"b935a9cf4742c000981b1354729a04bcbdd3d927f89f25c91440a2620e999e15\""
Mar 17 19:11:03.867185 systemd[1]: Started cri-containerd-b935a9cf4742c000981b1354729a04bcbdd3d927f89f25c91440a2620e999e15.scope - libcontainer container b935a9cf4742c000981b1354729a04bcbdd3d927f89f25c91440a2620e999e15.
Mar 17 19:11:03.890917 systemd[1]: cri-containerd-b935a9cf4742c000981b1354729a04bcbdd3d927f89f25c91440a2620e999e15.scope: Deactivated successfully.
Mar 17 19:11:03.897970 containerd[1480]: time="2025-03-17T19:11:03.897811049Z" level=info msg="StartContainer for \"b935a9cf4742c000981b1354729a04bcbdd3d927f89f25c91440a2620e999e15\" returns successfully"
Mar 17 19:11:03.927697 containerd[1480]: time="2025-03-17T19:11:03.927532357Z" level=info msg="shim disconnected" id=b935a9cf4742c000981b1354729a04bcbdd3d927f89f25c91440a2620e999e15 namespace=k8s.io
Mar 17 19:11:03.927697 containerd[1480]: time="2025-03-17T19:11:03.927682218Z" level=warning msg="cleaning up after shim disconnected" id=b935a9cf4742c000981b1354729a04bcbdd3d927f89f25c91440a2620e999e15 namespace=k8s.io
Mar 17 19:11:03.927697 containerd[1480]: time="2025-03-17T19:11:03.927694220Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 19:11:04.047161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b935a9cf4742c000981b1354729a04bcbdd3d927f89f25c91440a2620e999e15-rootfs.mount: Deactivated successfully.
Mar 17 19:11:04.452620 sshd[4730]: Accepted publickey for core from 172.24.4.1 port 43230 ssh2: RSA SHA256:6FuAkzenA5l/Ko+cX3bOB7QljO46GEEMiFFsjoH+RSg
Mar 17 19:11:04.452359 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 19:11:04.469345 systemd-logind[1460]: New session 29 of user core.
Mar 17 19:11:04.479307 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 17 19:11:04.782344 containerd[1480]: time="2025-03-17T19:11:04.782252279Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 19:11:04.826125 containerd[1480]: time="2025-03-17T19:11:04.825993070Z" level=info msg="CreateContainer within sandbox \"3215dc2e74452c8e78a8c48ca5448ea24130eedba4b0d70e54a5beeafd94be54\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9a60c8e616767573fc998f1b9499eca4816bc4407d699ad594ff89f0d10e2061\""
Mar 17 19:11:04.829914 containerd[1480]: time="2025-03-17T19:11:04.829363669Z" level=info msg="StartContainer for \"9a60c8e616767573fc998f1b9499eca4816bc4407d699ad594ff89f0d10e2061\""
Mar 17 19:11:04.877193 systemd[1]: Started cri-containerd-9a60c8e616767573fc998f1b9499eca4816bc4407d699ad594ff89f0d10e2061.scope - libcontainer container 9a60c8e616767573fc998f1b9499eca4816bc4407d699ad594ff89f0d10e2061.
Mar 17 19:11:04.924912 containerd[1480]: time="2025-03-17T19:11:04.924862949Z" level=info msg="StartContainer for \"9a60c8e616767573fc998f1b9499eca4816bc4407d699ad594ff89f0d10e2061\" returns successfully"
Mar 17 19:11:05.332066 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 19:11:05.390091 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Mar 17 19:11:08.594099 systemd-networkd[1386]: lxc_health: Link UP
Mar 17 19:11:08.599132 systemd-networkd[1386]: lxc_health: Gained carrier
Mar 17 19:11:09.171303 kubelet[2761]: I0317 19:11:09.170894 2761 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g68p7" podStartSLOduration=9.170878257 podStartE2EDuration="9.170878257s" podCreationTimestamp="2025-03-17 19:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 19:11:05.80388893 +0000 UTC m=+165.847032486" watchObservedRunningTime="2025-03-17 19:11:09.170878257 +0000 UTC m=+169.214021784"
Mar 17 19:11:09.988118 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Mar 17 19:11:11.540937 systemd[1]: run-containerd-runc-k8s.io-9a60c8e616767573fc998f1b9499eca4816bc4407d699ad594ff89f0d10e2061-runc.e69ZAT.mount: Deactivated successfully.
Mar 17 19:11:13.732318 systemd[1]: run-containerd-runc-k8s.io-9a60c8e616767573fc998f1b9499eca4816bc4407d699ad594ff89f0d10e2061-runc.Hn1CuE.mount: Deactivated successfully.
Mar 17 19:11:16.285130 sshd[4800]: Connection closed by 172.24.4.1 port 43230
Mar 17 19:11:16.287148 sshd-session[4730]: pam_unix(sshd:session): session closed for user core
Mar 17 19:11:16.293490 systemd[1]: sshd@26-172.24.4.57:22-172.24.4.1:43230.service: Deactivated successfully.
Mar 17 19:11:16.298267 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 19:11:16.301850 systemd-logind[1460]: Session 29 logged out. Waiting for processes to exit.
Mar 17 19:11:16.304635 systemd-logind[1460]: Removed session 29.
Mar 17 19:11:20.068672 containerd[1480]: time="2025-03-17T19:11:20.068562617Z" level=info msg="StopPodSandbox for \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\""
Mar 17 19:11:20.070426 containerd[1480]: time="2025-03-17T19:11:20.068737615Z" level=info msg="TearDown network for sandbox \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\" successfully"
Mar 17 19:11:20.070426 containerd[1480]: time="2025-03-17T19:11:20.068782440Z" level=info msg="StopPodSandbox for \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\" returns successfully"
Mar 17 19:11:20.070426 containerd[1480]: time="2025-03-17T19:11:20.069745930Z" level=info msg="RemovePodSandbox for \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\""
Mar 17 19:11:20.070426 containerd[1480]: time="2025-03-17T19:11:20.069797578Z" level=info msg="Forcibly stopping sandbox \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\""
Mar 17 19:11:20.070426 containerd[1480]: time="2025-03-17T19:11:20.069898027Z" level=info msg="TearDown network for sandbox \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\" successfully"
Mar 17 19:11:20.076804 containerd[1480]: time="2025-03-17T19:11:20.076729035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 19:11:20.076982 containerd[1480]: time="2025-03-17T19:11:20.076839393Z" level=info msg="RemovePodSandbox \"b2c4d6b4895d7692a917a628a68614ffc549f4adf3c46d495725549bb96bb284\" returns successfully"
Mar 17 19:11:20.077974 containerd[1480]: time="2025-03-17T19:11:20.077600383Z" level=info msg="StopPodSandbox for \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\""
Mar 17 19:11:20.077974 containerd[1480]: time="2025-03-17T19:11:20.077755234Z" level=info msg="TearDown network for sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" successfully"
Mar 17 19:11:20.077974 containerd[1480]: time="2025-03-17T19:11:20.077821409Z" level=info msg="StopPodSandbox for \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" returns successfully"
Mar 17 19:11:20.078797 containerd[1480]: time="2025-03-17T19:11:20.078745926Z" level=info msg="RemovePodSandbox for \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\""
Mar 17 19:11:20.078901 containerd[1480]: time="2025-03-17T19:11:20.078805107Z" level=info msg="Forcibly stopping sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\""
Mar 17 19:11:20.079001 containerd[1480]: time="2025-03-17T19:11:20.078908301Z" level=info msg="TearDown network for sandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" successfully"
Mar 17 19:11:20.084531 containerd[1480]: time="2025-03-17T19:11:20.084332406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 19:11:20.084531 containerd[1480]: time="2025-03-17T19:11:20.084416253Z" level=info msg="RemovePodSandbox \"e44af097147c7caef45b70f3f8d1cd359c22e61526aea63c4f68ddc4eca18abd\" returns successfully"