Mar 17 18:41:21.982388 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 18:41:21.982415 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 18:41:21.982425 kernel: BIOS-provided physical RAM map:
Mar 17 18:41:21.982433 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 18:41:21.982441 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 18:41:21.982451 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 18:41:21.982460 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Mar 17 18:41:21.982468 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Mar 17 18:41:21.982475 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 18:41:21.982483 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 18:41:21.982491 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Mar 17 18:41:21.982499 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 18:41:21.982506 kernel: NX (Execute Disable) protection: active
Mar 17 18:41:21.982514 kernel: APIC: Static calls initialized
Mar 17 18:41:21.982525 kernel: SMBIOS 3.0.0 present.
Mar 17 18:41:21.982534 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Mar 17 18:41:21.982542 kernel: Hypervisor detected: KVM
Mar 17 18:41:21.982550 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:41:21.982558 kernel: kvm-clock: using sched offset of 3469327428 cycles
Mar 17 18:41:21.982568 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:41:21.982577 kernel: tsc: Detected 1996.249 MHz processor
Mar 17 18:41:21.982585 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:41:21.982594 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:41:21.982602 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Mar 17 18:41:21.982611 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 18:41:21.982619 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:41:21.982628 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Mar 17 18:41:21.982636 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:41:21.982647 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Mar 17 18:41:21.982655 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:21.982664 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:21.982672 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:21.982680 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Mar 17 18:41:21.982688 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:21.982697 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:21.982705 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Mar 17 18:41:21.982713 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Mar 17 18:41:21.982723 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Mar 17 18:41:21.982732 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Mar 17 18:41:21.982740 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Mar 17 18:41:21.982751 kernel: No NUMA configuration found
Mar 17 18:41:21.982760 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Mar 17 18:41:21.982768 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Mar 17 18:41:21.982779 kernel: Zone ranges:
Mar 17 18:41:21.982787 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:41:21.982796 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 17 18:41:21.982804 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Mar 17 18:41:21.982813 kernel: Movable zone start for each node
Mar 17 18:41:21.982821 kernel: Early memory node ranges
Mar 17 18:41:21.982830 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 18:41:21.982839 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Mar 17 18:41:21.982849 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Mar 17 18:41:21.982858 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Mar 17 18:41:21.982867 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:41:21.982875 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 18:41:21.982884 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 17 18:41:21.982893 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:41:21.982901 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:41:21.982910 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:41:21.982918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:41:21.982929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:41:21.982937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:41:21.982946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:41:21.982955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:41:21.982963 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:41:21.982972 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 18:41:21.982980 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 18:41:21.982989 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Mar 17 18:41:21.982997 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:41:21.983008 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:41:21.983017 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 18:41:21.983026 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 18:41:21.983049 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 18:41:21.983057 kernel: pcpu-alloc: [0] 0 1
Mar 17 18:41:21.983066 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 18:41:21.983076 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 18:41:21.983085 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:41:21.983096 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:41:21.983105 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:41:21.983113 kernel: Fallback order for Node 0: 0
Mar 17 18:41:21.983122 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 17 18:41:21.983130 kernel: Policy zone: Normal
Mar 17 18:41:21.983139 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:41:21.983148 kernel: software IO TLB: area num 2.
Mar 17 18:41:21.983156 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 229356K reserved, 0K cma-reserved)
Mar 17 18:41:21.983165 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:41:21.983176 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 18:41:21.983184 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 18:41:21.983193 kernel: Dynamic Preempt: voluntary
Mar 17 18:41:21.983201 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:41:21.983211 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:41:21.983220 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:41:21.983229 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:41:21.983237 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:41:21.983246 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:41:21.983256 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:41:21.983265 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:41:21.983273 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 18:41:21.983282 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 18:41:21.983290 kernel: Console: colour VGA+ 80x25
Mar 17 18:41:21.983299 kernel: printk: console [tty0] enabled
Mar 17 18:41:21.983307 kernel: printk: console [ttyS0] enabled
Mar 17 18:41:21.983316 kernel: ACPI: Core revision 20230628
Mar 17 18:41:21.983324 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:41:21.983333 kernel: x2apic enabled
Mar 17 18:41:21.983344 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 18:41:21.983352 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:41:21.983361 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 18:41:21.983370 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Mar 17 18:41:21.983378 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 18:41:21.983387 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 18:41:21.983396 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:41:21.983404 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:41:21.983413 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:41:21.983424 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:41:21.983432 kernel: Speculative Store Bypass: Vulnerable
Mar 17 18:41:21.983441 kernel: x86/fpu: x87 FPU will use FXSAVE
Mar 17 18:41:21.983450 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:41:21.983464 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:41:21.983475 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 18:41:21.983484 kernel: landlock: Up and running.
Mar 17 18:41:21.983493 kernel: SELinux: Initializing.
Mar 17 18:41:21.983502 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:41:21.983511 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:41:21.983520 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Mar 17 18:41:21.983531 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 18:41:21.983541 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 18:41:21.983550 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 18:41:21.983559 kernel: Performance Events: AMD PMU driver.
Mar 17 18:41:21.983568 kernel: ... version: 0
Mar 17 18:41:21.983579 kernel: ... bit width: 48
Mar 17 18:41:21.983588 kernel: ... generic registers: 4
Mar 17 18:41:21.983596 kernel: ... value mask: 0000ffffffffffff
Mar 17 18:41:21.983605 kernel: ... max period: 00007fffffffffff
Mar 17 18:41:21.983614 kernel: ... fixed-purpose events: 0
Mar 17 18:41:21.983623 kernel: ... event mask: 000000000000000f
Mar 17 18:41:21.983632 kernel: signal: max sigframe size: 1440
Mar 17 18:41:21.983641 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:41:21.983650 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 18:41:21.983661 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:41:21.983669 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 18:41:21.983678 kernel: .... node #0, CPUs: #1
Mar 17 18:41:21.983687 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:41:21.983696 kernel: smpboot: Max logical packages: 2
Mar 17 18:41:21.983705 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Mar 17 18:41:21.983714 kernel: devtmpfs: initialized
Mar 17 18:41:21.983723 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:41:21.983732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:41:21.983742 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:41:21.983753 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:41:21.983762 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:41:21.983771 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:41:21.983780 kernel: audit: type=2000 audit(1742236881.421:1): state=initialized audit_enabled=0 res=1
Mar 17 18:41:21.983789 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:41:21.983798 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:41:21.983807 kernel: cpuidle: using governor menu
Mar 17 18:41:21.983816 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:41:21.983825 kernel: dca service started, version 1.12.1
Mar 17 18:41:21.983836 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:41:21.983845 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:41:21.983854 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:41:21.983863 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 18:41:21.983872 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:41:21.983881 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:41:21.983890 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:41:21.983899 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:41:21.983908 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:41:21.983919 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 18:41:21.983927 kernel: ACPI: Interpreter enabled
Mar 17 18:41:21.983936 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 18:41:21.983945 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:41:21.983954 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:41:21.983963 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 18:41:21.983972 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 18:41:21.983982 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:41:21.984143 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:41:21.984251 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 17 18:41:21.984349 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 17 18:41:21.984363 kernel: acpiphp: Slot [3] registered
Mar 17 18:41:21.984372 kernel: acpiphp: Slot [4] registered
Mar 17 18:41:21.984381 kernel: acpiphp: Slot [5] registered
Mar 17 18:41:21.984390 kernel: acpiphp: Slot [6] registered
Mar 17 18:41:21.984399 kernel: acpiphp: Slot [7] registered
Mar 17 18:41:21.984411 kernel: acpiphp: Slot [8] registered
Mar 17 18:41:21.984420 kernel: acpiphp: Slot [9] registered
Mar 17 18:41:21.984429 kernel: acpiphp: Slot [10] registered
Mar 17 18:41:21.984438 kernel: acpiphp: Slot [11] registered
Mar 17 18:41:21.984447 kernel: acpiphp: Slot [12] registered
Mar 17 18:41:21.984456 kernel: acpiphp: Slot [13] registered
Mar 17 18:41:21.984465 kernel: acpiphp: Slot [14] registered
Mar 17 18:41:21.984474 kernel: acpiphp: Slot [15] registered
Mar 17 18:41:21.984483 kernel: acpiphp: Slot [16] registered
Mar 17 18:41:21.984494 kernel: acpiphp: Slot [17] registered
Mar 17 18:41:21.984502 kernel: acpiphp: Slot [18] registered
Mar 17 18:41:21.984511 kernel: acpiphp: Slot [19] registered
Mar 17 18:41:21.984520 kernel: acpiphp: Slot [20] registered
Mar 17 18:41:21.984529 kernel: acpiphp: Slot [21] registered
Mar 17 18:41:21.984538 kernel: acpiphp: Slot [22] registered
Mar 17 18:41:21.984547 kernel: acpiphp: Slot [23] registered
Mar 17 18:41:21.984555 kernel: acpiphp: Slot [24] registered
Mar 17 18:41:21.984564 kernel: acpiphp: Slot [25] registered
Mar 17 18:41:21.984573 kernel: acpiphp: Slot [26] registered
Mar 17 18:41:21.984584 kernel: acpiphp: Slot [27] registered
Mar 17 18:41:21.984593 kernel: acpiphp: Slot [28] registered
Mar 17 18:41:21.984602 kernel: acpiphp: Slot [29] registered
Mar 17 18:41:21.984611 kernel: acpiphp: Slot [30] registered
Mar 17 18:41:21.984619 kernel: acpiphp: Slot [31] registered
Mar 17 18:41:21.984628 kernel: PCI host bridge to bus 0000:00
Mar 17 18:41:21.984729 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:41:21.984813 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:41:21.984901 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:41:21.984983 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 18:41:21.985146 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Mar 17 18:41:21.985234 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:41:21.985353 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 18:41:21.985461 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 18:41:21.985572 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 18:41:21.985670 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Mar 17 18:41:21.985767 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 18:41:21.985861 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 18:41:21.985958 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 18:41:21.986081 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 18:41:21.986185 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 18:41:21.986288 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 18:41:21.986383 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 18:41:21.986488 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 18:41:21.986585 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 18:41:21.986680 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 17 18:41:21.986776 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Mar 17 18:41:21.986870 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Mar 17 18:41:21.986973 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:41:21.987120 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:41:21.987220 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Mar 17 18:41:21.987316 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Mar 17 18:41:21.987411 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Mar 17 18:41:21.987505 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Mar 17 18:41:21.987610 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:41:21.987712 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 18:41:21.987808 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Mar 17 18:41:21.987904 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Mar 17 18:41:21.988006 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 18:41:21.988139 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Mar 17 18:41:21.988234 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Mar 17 18:41:21.988335 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:41:21.988434 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Mar 17 18:41:21.988526 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Mar 17 18:41:21.988618 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Mar 17 18:41:21.988631 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:41:21.988641 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:41:21.988650 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:41:21.988659 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:41:21.988668 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 18:41:21.988681 kernel: iommu: Default domain type: Translated
Mar 17 18:41:21.988690 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:41:21.988699 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:41:21.988708 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:41:21.988717 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 18:41:21.988726 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Mar 17 18:41:21.988819 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 18:41:21.988912 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 18:41:21.989013 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:41:21.989056 kernel: vgaarb: loaded
Mar 17 18:41:21.989065 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:41:21.989075 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:41:21.989084 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:41:21.989093 kernel: pnp: PnP ACPI init
Mar 17 18:41:21.989215 kernel: pnp 00:03: [dma 2]
Mar 17 18:41:21.989231 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 18:41:21.989240 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:41:21.989253 kernel: NET: Registered PF_INET protocol family
Mar 17 18:41:21.989262 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:41:21.989271 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:41:21.989281 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:41:21.989290 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:41:21.989299 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 18:41:21.989308 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:41:21.989317 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:41:21.989329 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:41:21.989338 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:41:21.989347 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:41:21.989431 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:41:21.989515 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:41:21.989599 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:41:21.989684 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:41:21.989768 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Mar 17 18:41:21.989864 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 18:41:21.989964 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 18:41:21.989978 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:41:21.989987 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 17 18:41:21.989996 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Mar 17 18:41:21.990006 kernel: Initialise system trusted keyrings
Mar 17 18:41:21.990015 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:41:21.990024 kernel: Key type asymmetric registered
Mar 17 18:41:21.990078 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:41:21.990092 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 18:41:21.990101 kernel: io scheduler mq-deadline registered
Mar 17 18:41:21.990110 kernel: io scheduler kyber registered
Mar 17 18:41:21.990119 kernel: io scheduler bfq registered
Mar 17 18:41:21.990128 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:41:21.990138 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 18:41:21.990147 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 18:41:21.990156 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 18:41:21.990165 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 18:41:21.990178 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:41:21.990187 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:41:21.990196 kernel: random: crng init done
Mar 17 18:41:21.990205 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:41:21.990214 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:41:21.990223 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:41:21.990320 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 18:41:21.990335 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:41:21.990416 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 18:41:21.990506 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T18:41:21 UTC (1742236881)
Mar 17 18:41:21.990589 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 17 18:41:21.990603 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 17 18:41:21.990612 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:41:21.990621 kernel: Segment Routing with IPv6
Mar 17 18:41:21.990630 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:41:21.990639 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:41:21.990649 kernel: Key type dns_resolver registered
Mar 17 18:41:21.990662 kernel: IPI shorthand broadcast: enabled
Mar 17 18:41:21.990671 kernel: sched_clock: Marking stable (1008007054, 170511111)->(1221217954, -42699789)
Mar 17 18:41:21.990680 kernel: registered taskstats version 1
Mar 17 18:41:21.990689 kernel: Loading compiled-in X.509 certificates
Mar 17 18:41:21.990698 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 18:41:21.990707 kernel: Key type .fscrypt registered
Mar 17 18:41:21.990716 kernel: Key type fscrypt-provisioning registered
Mar 17 18:41:21.990726 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:41:21.990735 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:41:21.990746 kernel: ima: No architecture policies found
Mar 17 18:41:21.990755 kernel: clk: Disabling unused clocks
Mar 17 18:41:21.990764 kernel: Freeing unused kernel image (initmem) memory: 43476K
Mar 17 18:41:21.990773 kernel: Write protecting the kernel read-only data: 38912k
Mar 17 18:41:21.990782 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K
Mar 17 18:41:21.990792 kernel: Run /init as init process
Mar 17 18:41:21.990801 kernel: with arguments:
Mar 17 18:41:21.990810 kernel: /init
Mar 17 18:41:21.990819 kernel: with environment:
Mar 17 18:41:21.990830 kernel: HOME=/
Mar 17 18:41:21.990838 kernel: TERM=linux
Mar 17 18:41:21.990847 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:41:21.990858 systemd[1]: Successfully made /usr/ read-only.
Mar 17 18:41:21.990871 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 18:41:21.990881 systemd[1]: Detected virtualization kvm.
Mar 17 18:41:21.990891 systemd[1]: Detected architecture x86-64.
Mar 17 18:41:21.990902 systemd[1]: Running in initrd.
Mar 17 18:41:21.990912 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:41:21.990922 systemd[1]: Hostname set to .
Mar 17 18:41:21.990932 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:41:21.990942 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:41:21.990951 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 18:41:21.990961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 18:41:21.990982 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 18:41:21.990994 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 18:41:21.991004 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 18:41:21.991015 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 18:41:21.991026 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 18:41:21.991052 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 18:41:21.991065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 18:41:21.991075 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 18:41:21.991374 systemd[1]: Reached target paths.target - Path Units.
Mar 17 18:41:21.991389 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 18:41:21.991399 systemd[1]: Reached target swap.target - Swaps.
Mar 17 18:41:21.991409 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 18:41:21.991419 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 18:41:21.991430 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 18:41:21.991444 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 18:41:21.991455 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 18:41:21.991465 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 18:41:21.991475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 18:41:21.991485 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 18:41:21.991495 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 18:41:21.991505 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 18:41:21.991515 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 18:41:21.991525 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 18:41:21.991538 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:41:21.991548 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 18:41:21.991558 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 18:41:21.991568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 18:41:21.991578 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 18:41:21.991588 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 18:41:21.991600 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:41:21.991611 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 18:41:21.991645 systemd-journald[185]: Collecting audit messages is disabled.
Mar 17 18:41:21.991672 systemd-journald[185]: Journal started
Mar 17 18:41:21.991696 systemd-journald[185]: Runtime Journal (/run/log/journal/e8e85b19c8c440b2bc01877c2f8cfb52) is 8M, max 78.3M, 70.3M free.
Mar 17 18:41:21.994054 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 18:41:22.009209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 18:41:22.013488 systemd-modules-load[186]: Inserted module 'overlay'
Mar 17 18:41:22.016088 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 18:41:22.072751 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:41:22.072776 kernel: Bridge firewalling registered
Mar 17 18:41:22.022219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 18:41:22.046744 systemd-modules-load[186]: Inserted module 'br_netfilter'
Mar 17 18:41:22.075714 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 18:41:22.077398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 18:41:22.078704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 18:41:22.081456 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 18:41:22.088149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 18:41:22.090148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 18:41:22.099379 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 18:41:22.105178 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 18:41:22.106629 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 18:41:22.120327 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
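The kernel note above ("filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter") is handled here by systemd-modules-load, which inserts `br_netfilter` moments later. On systems where the module is not loaded automatically, the conventional fix is a modules-load.d fragment; the path and filename below are the usual convention, not taken from this log:

```text
# /etc/modules-load.d/br_netfilter.conf
# Load br_netfilter at boot so bridged traffic traverses iptables/ip6tables.
br_netfilter
```

systemd-modules-load.service reads this directory at boot, producing exactly the "Inserted module 'br_netfilter'" journal line seen above.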
Mar 17 18:41:22.134243 dracut-cmdline[223]: dracut-dracut-053
Mar 17 18:41:22.138652 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 18:41:22.146072 systemd-resolved[220]: Positive Trust Anchors:
Mar 17 18:41:22.146090 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:41:22.146133 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 18:41:22.149370 systemd-resolved[220]: Defaulting to hostname 'linux'.
Mar 17 18:41:22.150343 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 18:41:22.151245 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 18:41:22.227126 kernel: SCSI subsystem initialized
Mar 17 18:41:22.238093 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:41:22.250197 kernel: iscsi: registered transport (tcp)
Mar 17 18:41:22.273223 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:41:22.273286 kernel: QLogic iSCSI HBA Driver
Mar 17 18:41:22.332895 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 18:41:22.340295 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 18:41:22.392357 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:41:22.392477 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:41:22.394616 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 18:41:22.458181 kernel: raid6: sse2x4 gen() 5179 MB/s
Mar 17 18:41:22.476146 kernel: raid6: sse2x2 gen() 5998 MB/s
Mar 17 18:41:22.494429 kernel: raid6: sse2x1 gen() 9354 MB/s
Mar 17 18:41:22.494492 kernel: raid6: using algorithm sse2x1 gen() 9354 MB/s
Mar 17 18:41:22.513506 kernel: raid6: .... xor() 7375 MB/s, rmw enabled
Mar 17 18:41:22.513577 kernel: raid6: using ssse3x2 recovery algorithm
Mar 17 18:41:22.535085 kernel: xor: measuring software checksum speed
Mar 17 18:41:22.537567 kernel: prefetch64-sse : 16908 MB/sec
Mar 17 18:41:22.537615 kernel: generic_sse : 16854 MB/sec
Mar 17 18:41:22.537656 kernel: xor: using function: prefetch64-sse (16908 MB/sec)
Mar 17 18:41:22.716103 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 18:41:22.734087 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 18:41:22.742301 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 18:41:22.764294 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Mar 17 18:41:22.769378 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 18:41:22.778300 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 18:41:22.803741 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Mar 17 18:41:22.848993 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 18:41:22.865357 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 18:41:22.914475 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 18:41:22.928725 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 18:41:22.978288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 18:41:22.979449 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 18:41:22.981380 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 18:41:22.982572 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 18:41:22.989231 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 18:41:23.000312 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Mar 17 18:41:23.039308 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Mar 17 18:41:23.039447 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:41:23.039462 kernel: GPT:17805311 != 20971519
Mar 17 18:41:23.039478 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:41:23.039490 kernel: GPT:17805311 != 20971519
Mar 17 18:41:23.039501 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:41:23.039512 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:41:23.003613 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 18:41:23.041238 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:41:23.041382 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
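The GPT warnings above appear because the disk image carries a backup GPT header written for a smaller disk: the backup header must sit on the disk's last LBA, which for this 20971520-sector virtio disk is LBA 20971519, while the image's header still points at LBA 17805311. This is harmless on first boot; disk-uuid.service relocates the header shortly afterwards (the "Secondary Header is updated" lines below). A small check of that arithmetic, using the sizes from the log:

```python
SECTOR = 512  # [vda] uses 512-byte logical blocks

def expected_backup_lba(disk_bytes: int) -> int:
    """The GPT backup header lives on the disk's last LBA."""
    return disk_bytes // SECTOR - 1

disk_bytes = 20971520 * SECTOR   # 20971520 sectors = 10.0 GiB
stale_backup_lba = 17805311      # where the image's primary header points

assert expected_backup_lba(disk_bytes) == 20971519
# The mismatch is exactly the kernel's "GPT:17805311 != 20971519" complaint.
assert stale_backup_lba != expected_backup_lba(disk_bytes)
```

The same relocation can be done manually with tools like `sgdisk -e` or GNU Parted, as the kernel message suggests.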
Mar 17 18:41:23.042115 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 18:41:23.042793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:41:23.042920 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 18:41:23.045786 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 18:41:23.053152 kernel: libata version 3.00 loaded.
Mar 17 18:41:23.053312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 18:41:23.062072 kernel: ata_piix 0000:00:01.1: version 2.13
Mar 17 18:41:23.075108 kernel: scsi host0: ata_piix
Mar 17 18:41:23.075259 kernel: scsi host1: ata_piix
Mar 17 18:41:23.075387 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Mar 17 18:41:23.075401 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Mar 17 18:41:23.087052 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (465)
Mar 17 18:41:23.089056 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (464)
Mar 17 18:41:23.109723 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 18:41:23.153535 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 18:41:23.172331 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 18:41:23.183578 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 18:41:23.192390 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 18:41:23.192956 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 18:41:23.203241 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 18:41:23.207182 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 18:41:23.215147 disk-uuid[508]: Primary Header is updated.
Mar 17 18:41:23.215147 disk-uuid[508]: Secondary Entries is updated.
Mar 17 18:41:23.215147 disk-uuid[508]: Secondary Header is updated.
Mar 17 18:41:23.224057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:41:23.231349 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 18:41:24.247140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:41:24.248516 disk-uuid[509]: The operation has completed successfully.
Mar 17 18:41:24.330822 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:41:24.330940 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 18:41:24.383162 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 18:41:24.386399 sh[528]: Success
Mar 17 18:41:24.399138 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Mar 17 18:41:24.482888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 18:41:24.493276 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 18:41:24.513500 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 18:41:24.578148 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc
Mar 17 18:41:24.578246 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:41:24.580672 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 18:41:24.585547 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 18:41:24.589282 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 18:41:24.744679 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 18:41:24.747377 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 18:41:24.755348 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 18:41:24.761319 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 18:41:24.788948 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 18:41:24.789088 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:41:24.793177 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:41:24.805099 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 18:41:24.879996 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:41:24.885514 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 18:41:24.975951 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 18:41:24.987315 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 18:41:25.028548 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 18:41:25.039396 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 18:41:25.062278 systemd-networkd[708]: lo: Link UP
Mar 17 18:41:25.062292 systemd-networkd[708]: lo: Gained carrier
Mar 17 18:41:25.063524 systemd-networkd[708]: Enumeration completed
Mar 17 18:41:25.063968 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 18:41:25.064231 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 18:41:25.064234 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:41:25.067050 systemd-networkd[708]: eth0: Link UP
Mar 17 18:41:25.067387 systemd-networkd[708]: eth0: Gained carrier
Mar 17 18:41:25.067398 systemd-networkd[708]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 18:41:25.068353 systemd[1]: Reached target network.target - Network.
Mar 17 18:41:25.083129 systemd-networkd[708]: eth0: DHCPv4 address 172.24.4.236/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 17 18:41:25.271317 ignition[712]: Ignition 2.20.0
Mar 17 18:41:25.271343 ignition[712]: Stage: fetch-offline
Mar 17 18:41:25.271417 ignition[712]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:41:25.271440 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 18:41:25.271641 ignition[712]: parsed url from cmdline: ""
Mar 17 18:41:25.271650 ignition[712]: no config URL provided
Mar 17 18:41:25.271662 ignition[712]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:41:25.274617 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 18:41:25.271680 ignition[712]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:41:25.271691 ignition[712]: failed to fetch config: resource requires networking
Mar 17 18:41:25.272158 ignition[712]: Ignition finished successfully
Mar 17 18:41:25.281165 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 18:41:25.307580 ignition[724]: Ignition 2.20.0
Mar 17 18:41:25.307610 ignition[724]: Stage: fetch
Mar 17 18:41:25.307999 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:41:25.308027 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 18:41:25.308289 ignition[724]: parsed url from cmdline: ""
Mar 17 18:41:25.308299 ignition[724]: no config URL provided
Mar 17 18:41:25.308312 ignition[724]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:41:25.308334 ignition[724]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:41:25.308499 ignition[724]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 17 18:41:25.308563 ignition[724]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 17 18:41:25.308598 ignition[724]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 17 18:41:25.461389 ignition[724]: GET result: OK
Mar 17 18:41:25.461579 ignition[724]: parsing config with SHA512: 8f3b85a1a145d20a9597328f32232cbd0588d309a52d2e1d572bde9a8d11e02c082f89bbf1648f1f5fb379a746ae2dbd436ed81fe2b25610126b59e99c9edb49
Mar 17 18:41:25.472474 unknown[724]: fetched base config from "system"
Mar 17 18:41:25.472500 unknown[724]: fetched base config from "system"
Mar 17 18:41:25.474500 ignition[724]: fetch: fetch complete
Mar 17 18:41:25.472514 unknown[724]: fetched user config from "openstack"
Mar 17 18:41:25.474527 ignition[724]: fetch: fetch passed
Mar 17 18:41:25.479253 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
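Ignition identifies the fetched config by its SHA-512 digest ("parsing config with SHA512: 8f3b…" above), which lets a fetched payload be compared against an expected checksum. The same kind of 128-hex-digit digest can be reproduced with the standard library; the payload below is a made-up placeholder, not this instance's actual user_data:

```python
import hashlib

# Hypothetical Ignition config payload, stand-in for the fetched user_data.
config = b'{"ignition": {"version": "3.4.0"}}'

digest = hashlib.sha512(config).hexdigest()

# SHA-512 always yields 64 bytes, i.e. 128 hex characters,
# the same length as the digest logged by Ignition.
assert len(digest) == 128
print(digest)
```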
Mar 17 18:41:25.474631 ignition[724]: Ignition finished successfully
Mar 17 18:41:25.489428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 18:41:25.536643 ignition[731]: Ignition 2.20.0
Mar 17 18:41:25.536677 ignition[731]: Stage: kargs
Mar 17 18:41:25.537267 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:41:25.537297 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 18:41:25.539652 ignition[731]: kargs: kargs passed
Mar 17 18:41:25.542016 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 18:41:25.539757 ignition[731]: Ignition finished successfully
Mar 17 18:41:25.552504 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 18:41:25.590846 ignition[737]: Ignition 2.20.0
Mar 17 18:41:25.590873 ignition[737]: Stage: disks
Mar 17 18:41:25.591326 ignition[737]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:41:25.591353 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 18:41:25.593703 ignition[737]: disks: disks passed
Mar 17 18:41:25.593818 ignition[737]: Ignition finished successfully
Mar 17 18:41:25.595156 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 18:41:25.596888 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 18:41:25.598292 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 18:41:25.600075 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 18:41:25.602223 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 18:41:25.604394 systemd[1]: Reached target basic.target - Basic System.
Mar 17 18:41:25.616449 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 18:41:25.644309 systemd-fsck[745]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 17 18:41:25.657594 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 18:41:25.668238 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 18:41:25.841090 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none.
Mar 17 18:41:25.842059 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 18:41:25.843635 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 18:41:25.851260 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 18:41:25.855286 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 18:41:25.856984 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 18:41:25.861400 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 17 18:41:25.865346 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:41:25.883600 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (753)
Mar 17 18:41:25.883632 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 18:41:25.883648 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:41:25.883669 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:41:25.865389 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 18:41:25.868451 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 18:41:25.891841 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
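The systemd-fsck summary above ("ROOT: clean, 14/1628000 files, 120691/1617920 blocks") reports inode and block usage as used/total pairs in the e2fsck style. Such lines are easy to parse mechanically, e.g. to alert on filesystems nearing capacity; a small sketch against the exact line from this log:

```python
import re

# The systemd-fsck summary line from the log above.
line = "ROOT: clean, 14/1628000 files, 120691/1617920 blocks"

m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
used_inodes, total_inodes, used_blocks, total_blocks = map(int, m.groups())

# For this boot: a nearly empty ROOT filesystem.
print(f"inode usage: {100 * used_inodes / total_inodes:.3f}%")
print(f"block usage: {100 * used_blocks / total_blocks:.1f}%")
```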
Mar 17 18:41:25.908219 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 18:41:25.921146 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 18:41:25.994443 initrd-setup-root[780]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:41:25.998771 initrd-setup-root[788]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:41:26.003786 initrd-setup-root[795]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:41:26.009328 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:41:26.118197 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 18:41:26.126199 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 18:41:26.132325 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 18:41:26.140152 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 18:41:26.166829 ignition[869]: INFO : Ignition 2.20.0
Mar 17 18:41:26.166829 ignition[869]: INFO : Stage: mount
Mar 17 18:41:26.168044 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:41:26.168044 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 18:41:26.171381 ignition[869]: INFO : mount: mount passed
Mar 17 18:41:26.171381 ignition[869]: INFO : Ignition finished successfully
Mar 17 18:41:26.170922 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 18:41:26.185367 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 18:41:26.558843 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 18:41:27.098271 systemd-networkd[708]: eth0: Gained IPv6LL
Mar 17 18:41:33.058141 coreos-metadata[755]: Mar 17 18:41:33.058 WARN failed to locate config-drive, using the metadata service API instead
Mar 17 18:41:33.098855 coreos-metadata[755]: Mar 17 18:41:33.098 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 17 18:41:33.117544 coreos-metadata[755]: Mar 17 18:41:33.117 INFO Fetch successful
Mar 17 18:41:33.117544 coreos-metadata[755]: Mar 17 18:41:33.117 INFO wrote hostname ci-4230-1-0-9-731388c134.novalocal to /sysroot/etc/hostname
Mar 17 18:41:33.121575 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 17 18:41:33.121819 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 17 18:41:33.133193 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 18:41:33.172466 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 18:41:33.191097 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (886)
Mar 17 18:41:33.198918 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 18:41:33.198980 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:41:33.203134 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:41:33.215129 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 18:41:33.220943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 18:41:33.262940 ignition[904]: INFO : Ignition 2.20.0
Mar 17 18:41:33.262940 ignition[904]: INFO : Stage: files
Mar 17 18:41:33.265832 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:41:33.265832 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 18:41:33.265832 ignition[904]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:41:33.265832 ignition[904]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:41:33.265832 ignition[904]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:41:33.276178 ignition[904]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:41:33.276178 ignition[904]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:41:33.276178 ignition[904]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:41:33.276178 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:41:33.276178 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 18:41:33.271298 unknown[904]: wrote ssh authorized keys file for user: core
Mar 17 18:41:33.352940 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 18:41:33.695131 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:41:33.695131 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:41:33.695131 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 18:41:34.462977 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 18:41:35.052764 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:41:35.052764 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:41:35.052764 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:41:35.059309 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 17 18:41:35.516368 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 18:41:37.034956 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:41:37.036597 ignition[904]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 18:41:37.038367 ignition[904]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:41:37.038367 ignition[904]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:41:37.038367 ignition[904]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 18:41:37.046908 ignition[904]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:41:37.046908 ignition[904]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:41:37.046908 ignition[904]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:41:37.046908 ignition[904]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:41:37.046908 ignition[904]: INFO : files: files passed
Mar 17 18:41:37.046908 ignition[904]: INFO : Ignition finished successfully
Mar 17 18:41:37.040260 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 18:41:37.050577 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 18:41:37.052162 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 18:41:37.056343 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:41:37.056434 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 18:41:37.072770 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:41:37.073792 initrd-setup-root-after-ignition[932]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:41:37.075306 initrd-setup-root-after-ignition[936]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:41:37.078314 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 18:41:37.079129 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 18:41:37.087207 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 18:41:37.135704 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:41:37.135829 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 18:41:37.136667 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 18:41:37.138367 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 18:41:37.140625 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 18:41:37.147233 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 18:41:37.164569 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 18:41:37.169335 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 18:41:37.191466 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 18:41:37.193144 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 18:41:37.195369 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 18:41:37.197416 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:41:37.197561 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 18:41:37.199945 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 18:41:37.200885 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 18:41:37.202659 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 18:41:37.204144 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 18:41:37.205615 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 18:41:37.207387 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 18:41:37.209285 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 18:41:37.211158 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 18:41:37.212882 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 18:41:37.214636 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 18:41:37.216293 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:41:37.216441 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 18:41:37.218363 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 18:41:37.219308 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 18:41:37.220711 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 18:41:37.223108 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 18:41:37.223789 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:41:37.223918 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 18:41:37.226395 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:41:37.226537 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 18:41:37.227450 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:41:37.227589 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 18:41:37.236281 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 18:41:37.236923 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:41:37.237151 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 18:41:37.244207 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 18:41:37.246403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:41:37.248802 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 18:41:37.251577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:41:37.251814 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 18:41:37.258508 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:41:37.258596 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 18:41:37.267056 ignition[956]: INFO : Ignition 2.20.0
Mar 17 18:41:37.267056 ignition[956]: INFO : Stage: umount
Mar 17 18:41:37.267056 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:41:37.267056 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 18:41:37.274212 ignition[956]: INFO : umount: umount passed
Mar 17 18:41:37.274212 ignition[956]: INFO : Ignition finished successfully
Mar 17 18:41:37.271359 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:41:37.274762 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:41:37.274875 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 18:41:37.276091 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:41:37.276182 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 18:41:37.277608 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:41:37.277682 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 18:41:37.278578 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:41:37.278621 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 18:41:37.279583 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 18:41:37.279625 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 18:41:37.280566 systemd[1]: Stopped target network.target - Network.
Mar 17 18:41:37.281535 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:41:37.281582 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 18:41:37.282640 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 18:41:37.283577 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:41:37.287080 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 18:41:37.287992 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 18:41:37.289138 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 18:41:37.290376 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:41:37.290411 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 18:41:37.291356 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:41:37.291386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 18:41:37.292343 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:41:37.292385 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 18:41:37.293333 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 18:41:37.293377 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 18:41:37.294348 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:41:37.294390 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 18:41:37.295446 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 18:41:37.296489 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 18:41:37.298685 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:41:37.298816 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 18:41:37.302205 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 18:41:37.302674 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 18:41:37.302736 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 18:41:37.304702 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 18:41:37.306557 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:41:37.306673 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 18:41:37.311863 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 18:41:37.312352 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:41:37.312404 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 18:41:37.320370 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 18:41:37.320876 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:41:37.320936 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 18:41:37.321567 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:41:37.321613 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 18:41:37.323091 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:41:37.323136 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 18:41:37.323842 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 18:41:37.326651 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:41:37.333359 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:41:37.333506 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 18:41:37.335645 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:41:37.335723 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 18:41:37.337679 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:41:37.337740 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 18:41:37.338429 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:41:37.338460 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 18:41:37.339546 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:41:37.339590 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 18:41:37.341169 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:41:37.341212 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 18:41:37.342292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:41:37.342335 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 18:41:37.352401 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 18:41:37.352948 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 18:41:37.353020 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 18:41:37.353682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:41:37.353727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 18:41:37.357283 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:41:37.357401 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 18:41:37.358444 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 18:41:37.365190 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 18:41:37.372748 systemd[1]: Switching root.
Mar 17 18:41:37.404790 systemd-journald[185]: Journal stopped
Mar 17 18:41:39.228783 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Mar 17 18:41:39.228847 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:41:39.228868 kernel: SELinux: policy capability open_perms=1
Mar 17 18:41:39.228882 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:41:39.228894 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:41:39.228906 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:41:39.228922 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:41:39.228940 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:41:39.228952 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:41:39.228964 kernel: audit: type=1403 audit(1742236898.011:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 18:41:39.228982 systemd[1]: Successfully loaded SELinux policy in 84.294ms.
Mar 17 18:41:39.229019 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.043ms.
Mar 17 18:41:39.231195 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 18:41:39.231220 systemd[1]: Detected virtualization kvm.
Mar 17 18:41:39.231241 systemd[1]: Detected architecture x86-64.
Mar 17 18:41:39.231256 systemd[1]: Detected first boot.
Mar 17 18:41:39.231271 systemd[1]: Hostname set to .
Mar 17 18:41:39.231285 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:41:39.231300 zram_generator::config[1001]: No configuration found.
Mar 17 18:41:39.231318 kernel: Guest personality initialized and is inactive
Mar 17 18:41:39.231333 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 18:41:39.231348 kernel: Initialized host personality
Mar 17 18:41:39.231363 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 18:41:39.231378 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:41:39.231394 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 18:41:39.231409 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:41:39.231423 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 18:41:39.231437 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:41:39.231453 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 18:41:39.231467 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 18:41:39.231482 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 18:41:39.231499 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 18:41:39.231520 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 18:41:39.231535 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 18:41:39.231550 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 18:41:39.231564 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 18:41:39.231579 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 18:41:39.231594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 18:41:39.231608 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 18:41:39.231623 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 18:41:39.231640 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 18:41:39.231656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 18:41:39.231671 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 18:41:39.231685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 18:41:39.231700 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 18:41:39.231715 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 18:41:39.231732 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 18:41:39.231747 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 18:41:39.231762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 18:41:39.231776 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 18:41:39.231790 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 18:41:39.231804 systemd[1]: Reached target swap.target - Swaps.
Mar 17 18:41:39.231820 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 18:41:39.231834 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 18:41:39.231849 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 18:41:39.231866 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 18:41:39.231881 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 18:41:39.231895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 18:41:39.231909 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 18:41:39.231924 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 18:41:39.231939 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 18:41:39.231953 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 18:41:39.231968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:39.231983 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 18:41:39.232000 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 18:41:39.232015 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 18:41:39.232045 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:41:39.232061 systemd[1]: Reached target machines.target - Containers.
Mar 17 18:41:39.232076 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 18:41:39.232091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 18:41:39.232111 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 18:41:39.232126 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 18:41:39.232144 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 18:41:39.232158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 18:41:39.232172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 18:41:39.232187 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 18:41:39.232201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 18:41:39.232216 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:41:39.232230 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 18:41:39.232245 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 18:41:39.232260 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 18:41:39.232277 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 18:41:39.232292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 18:41:39.232307 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 18:41:39.232321 kernel: fuse: init (API version 7.39)
Mar 17 18:41:39.232335 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 18:41:39.232351 kernel: loop: module loaded
Mar 17 18:41:39.232365 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 18:41:39.232381 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 18:41:39.232399 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 18:41:39.232413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 18:41:39.232428 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 18:41:39.232443 systemd[1]: Stopped verity-setup.service.
Mar 17 18:41:39.232458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:39.232476 kernel: ACPI: bus type drm_connector registered
Mar 17 18:41:39.232490 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 18:41:39.232505 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 18:41:39.232520 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 18:41:39.232534 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 18:41:39.232554 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 18:41:39.232569 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 18:41:39.232583 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 18:41:39.232623 systemd-journald[1105]: Collecting audit messages is disabled.
Mar 17 18:41:39.232651 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 18:41:39.232667 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 18:41:39.232682 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 18:41:39.232697 systemd-journald[1105]: Journal started
Mar 17 18:41:39.232730 systemd-journald[1105]: Runtime Journal (/run/log/journal/e8e85b19c8c440b2bc01877c2f8cfb52) is 8M, max 78.3M, 70.3M free.
Mar 17 18:41:38.858983 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:41:39.236081 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 18:41:38.867465 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 18:41:38.868005 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 18:41:39.236013 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:41:39.237261 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 18:41:39.238295 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:41:39.238490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 18:41:39.239267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:41:39.240104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 18:41:39.241066 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 18:41:39.241706 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 18:41:39.242785 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:41:39.243107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 18:41:39.243978 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 18:41:39.244971 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 18:41:39.245882 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 18:41:39.246945 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 18:41:39.257805 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 18:41:39.264221 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 18:41:39.269147 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 18:41:39.270128 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:41:39.270238 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 18:41:39.272507 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 18:41:39.278176 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 18:41:39.285228 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 18:41:39.285898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 18:41:39.291203 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 18:41:39.300208 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 18:41:39.301395 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:41:39.302500 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 18:41:39.303501 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 18:41:39.307219 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 18:41:39.312371 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 18:41:39.315323 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 18:41:39.320331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 18:41:39.322676 systemd-journald[1105]: Time spent on flushing to /var/log/journal/e8e85b19c8c440b2bc01877c2f8cfb52 is 49.488ms for 958 entries.
Mar 17 18:41:39.322676 systemd-journald[1105]: System Journal (/var/log/journal/e8e85b19c8c440b2bc01877c2f8cfb52) is 8M, max 584.8M, 576.8M free.
Mar 17 18:41:39.438434 systemd-journald[1105]: Received client request to flush runtime journal.
Mar 17 18:41:39.438491 kernel: loop0: detected capacity change from 0 to 205544
Mar 17 18:41:39.336244 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 18:41:39.337002 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 18:41:39.338892 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 18:41:39.340691 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 18:41:39.346273 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 18:41:39.355235 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 18:41:39.357866 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 18:41:39.399263 udevadm[1150]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 18:41:39.438640 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 18:41:39.441838 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 18:41:39.466531 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 18:41:39.496252 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:41:39.505087 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 18:41:39.511158 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 18:41:39.529068 kernel: loop1: detected capacity change from 0 to 8
Mar 17 18:41:39.554198 kernel: loop2: detected capacity change from 0 to 138176
Mar 17 18:41:39.560341 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Mar 17 18:41:39.560542 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Mar 17 18:41:39.569184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 18:41:39.623063 kernel: loop3: detected capacity change from 0 to 147912
Mar 17 18:41:39.700075 kernel: loop4: detected capacity change from 0 to 205544
Mar 17 18:41:39.757174 kernel: loop5: detected capacity change from 0 to 8
Mar 17 18:41:39.760053 kernel: loop6: detected capacity change from 0 to 138176
Mar 17 18:41:39.829094 kernel: loop7: detected capacity change from 0 to 147912
Mar 17 18:41:39.857685 (sd-merge)[1165]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 17 18:41:39.858797 (sd-merge)[1165]: Merged extensions into '/usr'.
Mar 17 18:41:39.865697 systemd[1]: Reload requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 18:41:39.865786 systemd[1]: Reloading...
Mar 17 18:41:39.968057 zram_generator::config[1189]: No configuration found.
Mar 17 18:41:40.159325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:41:40.244772 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:41:40.244977 systemd[1]: Reloading finished in 378 ms.
Mar 17 18:41:40.260890 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 18:41:40.261817 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 18:41:40.275150 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:41:40.278173 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 18:41:40.287167 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 18:41:40.296211 ldconfig[1135]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:41:40.306410 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 18:41:40.307538 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
Mar 17 18:41:40.307558 systemd[1]: Reloading...
Mar 17 18:41:40.310399 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:41:40.310940 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 18:41:40.311794 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:41:40.312334 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Mar 17 18:41:40.312505 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Mar 17 18:41:40.317459 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 18:41:40.317582 systemd-tmpfiles[1250]: Skipping /boot
Mar 17 18:41:40.328883 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 18:41:40.329116 systemd-tmpfiles[1250]: Skipping /boot
Mar 17 18:41:40.356745 systemd-udevd[1251]: Using default interface naming scheme 'v255'.
Mar 17 18:41:40.387151 zram_generator::config[1278]: No configuration found.
Mar 17 18:41:40.536138 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1303)
Mar 17 18:41:40.559097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 18:41:40.577128 kernel: ACPI: button: Power Button [PWRF]
Mar 17 18:41:40.582052 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 17 18:41:40.611101 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 18:41:40.652635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:41:40.693048 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 18:41:40.693118 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Mar 17 18:41:40.695983 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Mar 17 18:41:40.703042 kernel: Console: switching to colour dummy device 80x25
Mar 17 18:41:40.703077 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 17 18:41:40.703096 kernel: [drm] features: -context_init
Mar 17 18:41:40.705545 kernel: [drm] number of scanouts: 1
Mar 17 18:41:40.708053 kernel: [drm] number of cap sets: 0
Mar 17 18:41:40.711049 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Mar 17 18:41:40.716051 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 17 18:41:40.716087 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 18:41:40.727056 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 17 18:41:40.765385 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 18:41:40.765545 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 18:41:40.768613 systemd[1]: Reloading finished in 460 ms.
Mar 17 18:41:40.787277 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 18:41:40.803830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 18:41:40.849132 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:41:40.867444 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 18:41:40.882881 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:40.889306 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 18:41:40.895328 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 18:41:40.895855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 18:41:40.907367 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 18:41:40.916708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 18:41:40.919237 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 18:41:40.923434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 18:41:40.938234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 18:41:40.938510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 18:41:40.942251 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 18:41:40.942348 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 18:41:40.945210 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 18:41:40.951721 lvm[1377]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:41:40.953685 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 18:41:40.961195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 18:41:40.963232 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 18:41:40.974197 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 18:41:40.982346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 18:41:40.983959 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:40.984897 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:41:40.985153 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 18:41:40.985502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:41:40.985661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 18:41:40.985935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:41:40.988225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 18:41:40.993736 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:41:40.993926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 18:41:40.997872 augenrules[1405]: No rules
Mar 17 18:41:40.997824 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 18:41:40.998594 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 18:41:40.998767 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 18:41:41.003143 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 18:41:41.018155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:41:41.018320 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 18:41:41.026349 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 18:41:41.031953 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 18:41:41.041714 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 18:41:41.047536 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 18:41:41.054601 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 18:41:41.061495 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 18:41:41.071233 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 18:41:41.078114 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:41:41.082925 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 18:41:41.083705 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:41:41.104447 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 18:41:41.107894 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 18:41:41.143926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 18:41:41.189686 systemd-networkd[1397]: lo: Link UP
Mar 17 18:41:41.189697 systemd-networkd[1397]: lo: Gained carrier
Mar 17 18:41:41.191062 systemd-networkd[1397]: Enumeration completed
Mar 17 18:41:41.191215 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 18:41:41.201331 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 18:41:41.204295 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 18:41:41.211463 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 18:41:41.211472 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:41:41.215063 systemd-networkd[1397]: eth0: Link UP
Mar 17 18:41:41.215074 systemd-networkd[1397]: eth0: Gained carrier
Mar 17 18:41:41.215089 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 18:41:41.237095 systemd-networkd[1397]: eth0: DHCPv4 address 172.24.4.236/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 17 18:41:41.239423 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 18:41:41.256523 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 18:41:41.259451 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 18:41:41.259883 systemd-resolved[1399]: Positive Trust Anchors:
Mar 17 18:41:41.259895 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:41:41.259937 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 18:41:41.266372 systemd-resolved[1399]: Using system hostname 'ci-4230-1-0-9-731388c134.novalocal'.
Mar 17 18:41:41.267841 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 18:41:41.268624 systemd[1]: Reached target network.target - Network.
Mar 17 18:41:41.269105 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 18:41:41.269532 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 18:41:41.271105 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 18:41:41.273559 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 18:41:41.276001 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 18:41:41.277711 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 18:41:41.279254 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 18:41:41.280469 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:41:41.280574 systemd[1]: Reached target paths.target - Path Units.
Mar 17 18:41:41.282266 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 18:41:41.285612 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 18:41:41.288508 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 18:41:41.292927 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 18:41:41.299821 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 18:41:41.300612 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 18:41:41.312502 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 18:41:41.316496 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 18:41:41.317925 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 18:41:41.320640 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 18:41:41.321225 systemd[1]: Reached target basic.target - Basic System.
Mar 17 18:41:41.321773 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 18:41:41.321806 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 18:41:41.328123 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 18:41:41.332828 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 18:41:41.337259 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 18:41:41.341114 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 18:41:41.351207 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 18:41:41.351879 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 18:41:41.357297 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 18:41:41.364845 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 18:41:41.374138 jq[1448]: false
Mar 17 18:41:41.372186 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 18:41:42.207282 systemd-timesyncd[1400]: Contacted time server 99.113.165.209:123 (0.flatcar.pool.ntp.org).
Mar 17 18:41:42.207329 systemd-timesyncd[1400]: Initial clock synchronization to Mon 2025-03-17 18:41:42.207186 UTC.
Mar 17 18:41:42.213455 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 18:41:42.217088 systemd-resolved[1399]: Clock change detected. Flushing caches.
Mar 17 18:41:42.220636 extend-filesystems[1451]: Found loop4
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found loop5
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found loop6
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found loop7
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found vda
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found vda1
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found vda2
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found vda3
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found usr
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found vda4
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found vda6
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found vda7
Mar 17 18:41:42.223674 extend-filesystems[1451]: Found vda9
Mar 17 18:41:42.223674 extend-filesystems[1451]: Checking size of /dev/vda9
Mar 17 18:41:42.340954 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Mar 17 18:41:42.341007 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Mar 17 18:41:42.344608 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1308)
Mar 17 18:41:42.238827 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 18:41:42.344824 extend-filesystems[1451]: Resized partition /dev/vda9
Mar 17 18:41:42.228215 dbus-daemon[1447]: [system] SELinux support is enabled
Mar 17 18:41:42.246973 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:41:42.365130 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024)
Mar 17 18:41:42.365130 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 18:41:42.365130 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 18:41:42.365130 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Mar 17 18:41:42.254331 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:41:42.381744 extend-filesystems[1451]: Resized filesystem in /dev/vda9
Mar 17 18:41:42.256187 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 18:41:42.264143 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 18:41:42.285302 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 18:41:42.394564 update_engine[1471]: I20250317 18:41:42.359155 1471 main.cc:92] Flatcar Update Engine starting
Mar 17 18:41:42.394564 update_engine[1471]: I20250317 18:41:42.372005 1471 update_check_scheduler.cc:74] Next update check in 11m6s
Mar 17 18:41:42.297389 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:41:42.394906 jq[1472]: true
Mar 17 18:41:42.297624 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 18:41:42.297901 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:41:42.298240 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 18:41:42.399584 jq[1477]: true
Mar 17 18:41:42.311009 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:41:42.311779 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 18:41:42.332343 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:41:42.332983 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 18:41:42.403565 tar[1476]: linux-amd64/helm
Mar 17 18:41:42.357152 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:41:42.357179 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 18:41:42.358678 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:41:42.358697 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 18:41:42.371311 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 18:41:42.371857 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 18:41:42.388376 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 18:41:42.466088 systemd-logind[1463]: New seat seat0.
Mar 17 18:41:42.505607 systemd-logind[1463]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 18:41:42.505636 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 18:41:42.505849 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 18:41:42.523167 bash[1505]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:41:42.524400 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 18:41:42.540292 systemd[1]: Starting sshkeys.service...
Mar 17 18:41:42.569358 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 18:41:42.582981 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 18:41:42.585595 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 18:41:42.676153 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 18:41:42.700090 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 18:41:42.712373 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 18:41:42.735060 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 18:41:42.735247 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 18:41:42.748659 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 18:41:42.776499 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 18:41:42.792472 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 18:41:42.803481 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 18:41:42.806462 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 18:41:42.815360 containerd[1483]: time="2025-03-17T18:41:42.815271202Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 18:41:42.850570 containerd[1483]: time="2025-03-17T18:41:42.850256798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852302625Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852340937Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852366234Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852529190Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852548907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852614820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852630991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852822841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852841195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852855732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853061 containerd[1483]: time="2025-03-17T18:41:42.852866553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853404 containerd[1483]: time="2025-03-17T18:41:42.852948967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853621 containerd[1483]: time="2025-03-17T18:41:42.853601922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853807 containerd[1483]: time="2025-03-17T18:41:42.853786788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:41:42.853870 containerd[1483]: time="2025-03-17T18:41:42.853856539Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:41:42.854015 containerd[1483]: time="2025-03-17T18:41:42.853990611Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:41:42.854141 containerd[1483]: time="2025-03-17T18:41:42.854125083Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:41:42.863505 containerd[1483]: time="2025-03-17T18:41:42.863484977Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:41:42.863634 containerd[1483]: time="2025-03-17T18:41:42.863618969Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.863712564Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.863736128Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.863751327Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.863857566Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864110010Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864203385Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864221078Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864234904Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864248710Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864262235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864275751Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864289737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864303793Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:41:42.865914 containerd[1483]: time="2025-03-17T18:41:42.864318080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864330764Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864342846Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864363766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864377992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864397288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864412337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864425692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864439427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864451821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864465867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864479232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864496585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864509429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866238 containerd[1483]: time="2025-03-17T18:41:42.864522303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864534546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864550796Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864571255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864584480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864596562Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864639503Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864658518Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864670270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864686140Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864696550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864713401Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864725845Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 18:41:42.866543 containerd[1483]: time="2025-03-17T18:41:42.864737847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 18:41:42.866799 containerd[1483]: time="2025-03-17T18:41:42.865048760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 18:41:42.866799 containerd[1483]: time="2025-03-17T18:41:42.865108222Z" level=info msg="Connect containerd service"
Mar 17 18:41:42.866799 containerd[1483]: time="2025-03-17T18:41:42.865140663Z" level=info msg="using legacy CRI server"
Mar 17 18:41:42.866799 containerd[1483]: time="2025-03-17T18:41:42.865148297Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 18:41:42.866799 containerd[1483]: time="2025-03-17T18:41:42.865266138Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 18:41:42.866799 containerd[1483]: time="2025-03-17T18:41:42.865753582Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:41:42.867126 containerd[1483]: time="2025-03-17T18:41:42.867096781Z" level=info msg="Start subscribing containerd event"
Mar 17 18:41:42.867206 containerd[1483]: time="2025-03-17T18:41:42.867192030Z" level=info msg="Start recovering state"
Mar 17 18:41:42.867300 containerd[1483]: time="2025-03-17T18:41:42.867286317Z" level=info msg="Start event monitor"
Mar 17 18:41:42.867362 containerd[1483]: time="2025-03-17T18:41:42.867349365Z" level=info msg="Start snapshots syncer"
Mar 17 18:41:42.867413 containerd[1483]: time="2025-03-17T18:41:42.867402765Z" level=info msg="Start cni network conf syncer for default"
Mar 17 18:41:42.867462 containerd[1483]: time="2025-03-17T18:41:42.867452068Z" level=info msg="Start streaming server"
Mar 17 18:41:42.867736 containerd[1483]: time="2025-03-17T18:41:42.867719419Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 18:41:42.867911 containerd[1483]: time="2025-03-17T18:41:42.867882084Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 18:41:42.870082 containerd[1483]: time="2025-03-17T18:41:42.870067293Z" level=info msg="containerd successfully booted in 0.055686s"
Mar 17 18:41:42.870167 systemd[1]: Started containerd.service - containerd container runtime.
Mar 17 18:41:43.020565 tar[1476]: linux-amd64/LICENSE
Mar 17 18:41:43.020859 tar[1476]: linux-amd64/README.md
Mar 17 18:41:43.033774 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 18:41:43.161337 systemd-networkd[1397]: eth0: Gained IPv6LL
Mar 17 18:41:43.165940 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 18:41:43.170531 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 18:41:43.181679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 18:41:43.196402 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 18:41:43.271464 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 18:41:45.481337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:41:45.492901 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:41:46.996910 kubelet[1562]: E0317 18:41:46.996810 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:47.001408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:47.001749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:47.002996 systemd[1]: kubelet.service: Consumed 2.307s CPU time, 240M memory peak. Mar 17 18:41:47.888383 login[1535]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 17 18:41:47.889953 login[1533]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:41:47.916234 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 18:41:47.927763 systemd[1]: Started sshd@0-172.24.4.236:22-172.24.4.1:53322.service - OpenSSH per-connection server daemon (172.24.4.1:53322). Mar 17 18:41:47.940985 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 18:41:47.953255 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 18:41:47.985189 systemd-logind[1463]: New session 1 of user core. Mar 17 18:41:47.994057 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 18:41:48.000343 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 17 18:41:48.003948 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:48.006546 systemd-logind[1463]: New session c1 of user core. Mar 17 18:41:48.169916 systemd[1578]: Queued start job for default target default.target. Mar 17 18:41:48.177958 systemd[1578]: Created slice app.slice - User Application Slice. Mar 17 18:41:48.177987 systemd[1578]: Reached target paths.target - Paths. Mar 17 18:41:48.178027 systemd[1578]: Reached target timers.target - Timers. Mar 17 18:41:48.179237 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 18:41:48.209370 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 18:41:48.209612 systemd[1578]: Reached target sockets.target - Sockets. Mar 17 18:41:48.209714 systemd[1578]: Reached target basic.target - Basic System. Mar 17 18:41:48.209805 systemd[1578]: Reached target default.target - Main User Target. Mar 17 18:41:48.209859 systemd[1578]: Startup finished in 198ms. Mar 17 18:41:48.210360 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 18:41:48.219517 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 18:41:48.893605 login[1535]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 17 18:41:48.904866 systemd-logind[1463]: New session 2 of user core. Mar 17 18:41:48.908510 sshd[1575]: Accepted publickey for core from 172.24.4.1 port 53322 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:41:48.910791 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:41:48.913831 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 18:41:48.933509 systemd-logind[1463]: New session 3 of user core. Mar 17 18:41:48.942122 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 17 18:41:49.247789 coreos-metadata[1446]: Mar 17 18:41:49.247 WARN failed to locate config-drive, using the metadata service API instead Mar 17 18:41:49.296832 coreos-metadata[1446]: Mar 17 18:41:49.296 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 17 18:41:49.530317 systemd[1]: Started sshd@1-172.24.4.236:22-172.24.4.1:53332.service - OpenSSH per-connection server daemon (172.24.4.1:53332). Mar 17 18:41:49.564853 coreos-metadata[1446]: Mar 17 18:41:49.564 INFO Fetch successful Mar 17 18:41:49.564853 coreos-metadata[1446]: Mar 17 18:41:49.564 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 18:41:49.578847 coreos-metadata[1446]: Mar 17 18:41:49.578 INFO Fetch successful Mar 17 18:41:49.578847 coreos-metadata[1446]: Mar 17 18:41:49.578 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 17 18:41:49.593341 coreos-metadata[1446]: Mar 17 18:41:49.593 INFO Fetch successful Mar 17 18:41:49.593341 coreos-metadata[1446]: Mar 17 18:41:49.593 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 17 18:41:49.606323 coreos-metadata[1446]: Mar 17 18:41:49.606 INFO Fetch successful Mar 17 18:41:49.606654 coreos-metadata[1446]: Mar 17 18:41:49.606 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 17 18:41:49.622231 coreos-metadata[1446]: Mar 17 18:41:49.622 INFO Fetch successful Mar 17 18:41:49.622231 coreos-metadata[1446]: Mar 17 18:41:49.622 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 17 18:41:49.636456 coreos-metadata[1446]: Mar 17 18:41:49.636 INFO Fetch successful Mar 17 18:41:49.681105 coreos-metadata[1512]: Mar 17 18:41:49.679 WARN failed to locate config-drive, using the metadata service API instead Mar 17 18:41:49.684218 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Mar 17 18:41:49.685443 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 18:41:49.719298 coreos-metadata[1512]: Mar 17 18:41:49.719 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 17 18:41:49.734761 coreos-metadata[1512]: Mar 17 18:41:49.734 INFO Fetch successful Mar 17 18:41:49.734761 coreos-metadata[1512]: Mar 17 18:41:49.734 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 18:41:49.749714 coreos-metadata[1512]: Mar 17 18:41:49.749 INFO Fetch successful Mar 17 18:41:49.754463 unknown[1512]: wrote ssh authorized keys file for user: core Mar 17 18:41:49.783136 update-ssh-keys[1620]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:41:49.784257 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 18:41:49.786915 systemd[1]: Finished sshkeys.service. Mar 17 18:41:49.791535 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 18:41:49.791775 systemd[1]: Startup finished in 1.155s (kernel) + 16.257s (initrd) + 11.031s (userspace) = 28.444s. Mar 17 18:41:50.801466 sshd[1612]: Accepted publickey for core from 172.24.4.1 port 53332 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:41:50.804150 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:41:50.814708 systemd-logind[1463]: New session 4 of user core. Mar 17 18:41:50.822347 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 18:41:51.442668 sshd[1625]: Connection closed by 172.24.4.1 port 53332 Mar 17 18:41:51.444393 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:51.460448 systemd[1]: sshd@1-172.24.4.236:22-172.24.4.1:53332.service: Deactivated successfully. Mar 17 18:41:51.463661 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 17 18:41:51.465815 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:41:51.474604 systemd[1]: Started sshd@2-172.24.4.236:22-172.24.4.1:53346.service - OpenSSH per-connection server daemon (172.24.4.1:53346). Mar 17 18:41:51.477410 systemd-logind[1463]: Removed session 4. Mar 17 18:41:52.762733 sshd[1630]: Accepted publickey for core from 172.24.4.1 port 53346 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:41:52.765572 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:41:52.777336 systemd-logind[1463]: New session 5 of user core. Mar 17 18:41:52.791383 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 18:41:53.403990 sshd[1633]: Connection closed by 172.24.4.1 port 53346 Mar 17 18:41:53.405126 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:53.421366 systemd[1]: sshd@2-172.24.4.236:22-172.24.4.1:53346.service: Deactivated successfully. Mar 17 18:41:53.425322 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:41:53.428351 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:41:53.437700 systemd[1]: Started sshd@3-172.24.4.236:22-172.24.4.1:53360.service - OpenSSH per-connection server daemon (172.24.4.1:53360). Mar 17 18:41:53.440587 systemd-logind[1463]: Removed session 5. Mar 17 18:41:54.780438 sshd[1638]: Accepted publickey for core from 172.24.4.1 port 53360 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:41:54.782570 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:41:54.793340 systemd-logind[1463]: New session 6 of user core. Mar 17 18:41:54.803366 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 18:41:55.422002 sshd[1641]: Connection closed by 172.24.4.1 port 53360 Mar 17 18:41:55.424522 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:55.440953 systemd[1]: sshd@3-172.24.4.236:22-172.24.4.1:53360.service: Deactivated successfully. Mar 17 18:41:55.444727 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:41:55.446991 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:41:55.456642 systemd[1]: Started sshd@4-172.24.4.236:22-172.24.4.1:50944.service - OpenSSH per-connection server daemon (172.24.4.1:50944). Mar 17 18:41:55.459921 systemd-logind[1463]: Removed session 6. Mar 17 18:41:56.768834 sshd[1646]: Accepted publickey for core from 172.24.4.1 port 50944 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:41:56.771700 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:41:56.784777 systemd-logind[1463]: New session 7 of user core. Mar 17 18:41:56.792355 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 18:41:57.252661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:41:57.264254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:41:57.270281 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 18:41:57.271318 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:41:57.303875 sudo[1650]: pam_unix(sudo:session): session closed for user root Mar 17 18:41:57.513078 sshd[1649]: Connection closed by 172.24.4.1 port 50944 Mar 17 18:41:57.510609 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:57.532268 systemd[1]: sshd@4-172.24.4.236:22-172.24.4.1:50944.service: Deactivated successfully. Mar 17 18:41:57.538595 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 17 18:41:57.541320 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:41:57.555211 systemd[1]: Started sshd@5-172.24.4.236:22-172.24.4.1:50948.service - OpenSSH per-connection server daemon (172.24.4.1:50948). Mar 17 18:41:57.562276 systemd-logind[1463]: Removed session 7. Mar 17 18:41:57.596196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:41:57.601270 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:41:57.666913 kubelet[1664]: E0317 18:41:57.666858 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:57.674629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:57.674966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:57.675920 systemd[1]: kubelet.service: Consumed 247ms CPU time, 98.2M memory peak. Mar 17 18:41:58.660854 sshd[1658]: Accepted publickey for core from 172.24.4.1 port 50948 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:41:58.663750 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:41:58.676983 systemd-logind[1463]: New session 8 of user core. Mar 17 18:41:58.685353 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 17 18:41:59.096640 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 18:41:59.097967 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:41:59.105900 sudo[1675]: pam_unix(sudo:session): session closed for user root Mar 17 18:41:59.117808 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 18:41:59.118525 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:41:59.147690 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 18:41:59.210222 augenrules[1697]: No rules Mar 17 18:41:59.212202 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 18:41:59.212675 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 18:41:59.214741 sudo[1674]: pam_unix(sudo:session): session closed for user root Mar 17 18:41:59.364210 sshd[1673]: Connection closed by 172.24.4.1 port 50948 Mar 17 18:41:59.362600 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:59.380158 systemd[1]: sshd@5-172.24.4.236:22-172.24.4.1:50948.service: Deactivated successfully. Mar 17 18:41:59.383627 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:41:59.387526 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:41:59.394616 systemd[1]: Started sshd@6-172.24.4.236:22-172.24.4.1:50960.service - OpenSSH per-connection server daemon (172.24.4.1:50960). Mar 17 18:41:59.397923 systemd-logind[1463]: Removed session 8. 
Mar 17 18:42:00.583465 sshd[1705]: Accepted publickey for core from 172.24.4.1 port 50960 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:42:00.586284 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:42:00.598658 systemd-logind[1463]: New session 9 of user core. Mar 17 18:42:00.613403 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 18:42:01.049554 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:42:01.050249 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 18:42:01.811462 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 18:42:01.812350 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 18:42:02.505018 dockerd[1728]: time="2025-03-17T18:42:02.504720500Z" level=info msg="Starting up" Mar 17 18:42:02.724867 systemd[1]: var-lib-docker-metacopy\x2dcheck3881754942-merged.mount: Deactivated successfully. Mar 17 18:42:02.762445 dockerd[1728]: time="2025-03-17T18:42:02.761786276Z" level=info msg="Loading containers: start." Mar 17 18:42:03.017152 kernel: Initializing XFRM netlink socket Mar 17 18:42:03.148976 systemd-networkd[1397]: docker0: Link UP Mar 17 18:42:03.185871 dockerd[1728]: time="2025-03-17T18:42:03.185346884Z" level=info msg="Loading containers: done." 
Mar 17 18:42:03.209568 dockerd[1728]: time="2025-03-17T18:42:03.209530420Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:42:03.209825 dockerd[1728]: time="2025-03-17T18:42:03.209805186Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 18:42:03.209978 dockerd[1728]: time="2025-03-17T18:42:03.209962611Z" level=info msg="Daemon has completed initialization" Mar 17 18:42:03.257321 dockerd[1728]: time="2025-03-17T18:42:03.257211767Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:42:03.259435 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 18:42:05.091993 containerd[1483]: time="2025-03-17T18:42:05.091894534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 18:42:05.832536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619919981.mount: Deactivated successfully. 
Mar 17 18:42:07.384761 containerd[1483]: time="2025-03-17T18:42:07.384703767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:07.386882 containerd[1483]: time="2025-03-17T18:42:07.386840504Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959276" Mar 17 18:42:07.387149 containerd[1483]: time="2025-03-17T18:42:07.387102756Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:07.390478 containerd[1483]: time="2025-03-17T18:42:07.390437951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:07.391836 containerd[1483]: time="2025-03-17T18:42:07.391657970Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 2.299663158s" Mar 17 18:42:07.391836 containerd[1483]: time="2025-03-17T18:42:07.391697945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 17 18:42:07.394222 containerd[1483]: time="2025-03-17T18:42:07.394167035Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 18:42:07.680941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 17 18:42:07.691419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:42:08.106081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:42:08.121627 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:42:08.255846 kubelet[1976]: E0317 18:42:08.255715 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:42:08.258902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:42:08.259280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:42:08.260204 systemd[1]: kubelet.service: Consumed 233ms CPU time, 98M memory peak. 
Mar 17 18:42:10.132795 containerd[1483]: time="2025-03-17T18:42:10.132683172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:10.149520 containerd[1483]: time="2025-03-17T18:42:10.149267305Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713784" Mar 17 18:42:10.160955 containerd[1483]: time="2025-03-17T18:42:10.160848496Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:10.169806 containerd[1483]: time="2025-03-17T18:42:10.169668959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:10.173832 containerd[1483]: time="2025-03-17T18:42:10.172881063Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 2.778674013s" Mar 17 18:42:10.173832 containerd[1483]: time="2025-03-17T18:42:10.172965181Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\"" Mar 17 18:42:10.174863 containerd[1483]: time="2025-03-17T18:42:10.174475834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 18:42:12.137790 containerd[1483]: time="2025-03-17T18:42:12.137736659Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:12.139372 containerd[1483]: time="2025-03-17T18:42:12.139114874Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780376" Mar 17 18:42:12.140582 containerd[1483]: time="2025-03-17T18:42:12.140517254Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:12.144103 containerd[1483]: time="2025-03-17T18:42:12.144017208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:12.145809 containerd[1483]: time="2025-03-17T18:42:12.145684966Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 1.971161333s" Mar 17 18:42:12.145809 containerd[1483]: time="2025-03-17T18:42:12.145716315Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\"" Mar 17 18:42:12.146616 containerd[1483]: time="2025-03-17T18:42:12.146462885Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:42:13.628556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982572602.mount: Deactivated successfully. 
Mar 17 18:42:14.173916 containerd[1483]: time="2025-03-17T18:42:14.173753979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:14.174861 containerd[1483]: time="2025-03-17T18:42:14.174813777Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354638" Mar 17 18:42:14.176344 containerd[1483]: time="2025-03-17T18:42:14.176279155Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:14.182510 containerd[1483]: time="2025-03-17T18:42:14.182475456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:14.183331 containerd[1483]: time="2025-03-17T18:42:14.183299572Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 2.036810818s" Mar 17 18:42:14.183386 containerd[1483]: time="2025-03-17T18:42:14.183334798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 17 18:42:14.183859 containerd[1483]: time="2025-03-17T18:42:14.183827311Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:42:14.790815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962635916.mount: Deactivated successfully. 
Mar 17 18:42:16.140740 containerd[1483]: time="2025-03-17T18:42:16.140579257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:16.143928 containerd[1483]: time="2025-03-17T18:42:16.143811430Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Mar 17 18:42:16.146834 containerd[1483]: time="2025-03-17T18:42:16.145384439Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:16.155178 containerd[1483]: time="2025-03-17T18:42:16.155107252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:16.158600 containerd[1483]: time="2025-03-17T18:42:16.158542127Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.974565084s" Mar 17 18:42:16.158799 containerd[1483]: time="2025-03-17T18:42:16.158761111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 18:42:16.160226 containerd[1483]: time="2025-03-17T18:42:16.160143271Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 18:42:16.731532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906884642.mount: Deactivated successfully. 
Mar 17 18:42:16.743305 containerd[1483]: time="2025-03-17T18:42:16.741979288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:16.745162 containerd[1483]: time="2025-03-17T18:42:16.745014722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 17 18:42:16.747260 containerd[1483]: time="2025-03-17T18:42:16.747139427Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:16.753748 containerd[1483]: time="2025-03-17T18:42:16.752583483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:16.758014 containerd[1483]: time="2025-03-17T18:42:16.757954149Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 597.722567ms" Mar 17 18:42:16.758278 containerd[1483]: time="2025-03-17T18:42:16.758234861Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 18:42:16.763831 containerd[1483]: time="2025-03-17T18:42:16.763785834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 18:42:17.681224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830721841.mount: Deactivated successfully. Mar 17 18:42:18.431372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 17 18:42:18.438290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:42:18.544215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:42:18.547002 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 18:42:18.852486 kubelet[2105]: E0317 18:42:18.770945 2105 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:42:18.775515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:42:18.775820 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:42:18.776697 systemd[1]: kubelet.service: Consumed 141ms CPU time, 95.7M memory peak. 
Mar 17 18:42:20.534831 containerd[1483]: time="2025-03-17T18:42:20.533471221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:20.540598 containerd[1483]: time="2025-03-17T18:42:20.540475422Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Mar 17 18:42:20.552354 containerd[1483]: time="2025-03-17T18:42:20.550761562Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:20.616239 containerd[1483]: time="2025-03-17T18:42:20.615102103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:20.620229 containerd[1483]: time="2025-03-17T18:42:20.620129601Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.854956898s" Mar 17 18:42:20.620369 containerd[1483]: time="2025-03-17T18:42:20.620224934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 17 18:42:25.015479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:42:25.016319 systemd[1]: kubelet.service: Consumed 141ms CPU time, 95.7M memory peak. Mar 17 18:42:25.027263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:42:25.063933 systemd[1]: Reload requested from client PID 2145 ('systemctl') (unit session-9.scope)... 
Mar 17 18:42:25.063948 systemd[1]: Reloading... Mar 17 18:42:25.169133 zram_generator::config[2188]: No configuration found. Mar 17 18:42:25.331117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:42:25.451423 systemd[1]: Reloading finished in 387 ms. Mar 17 18:42:25.496154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:42:25.499510 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 18:42:25.505872 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:42:25.506214 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:42:25.506445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:42:25.506485 systemd[1]: kubelet.service: Consumed 102ms CPU time, 87.1M memory peak. Mar 17 18:42:25.512300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:42:25.611432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:42:25.619294 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 18:42:25.665824 kubelet[2266]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:42:25.665824 kubelet[2266]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 17 18:42:25.665824 kubelet[2266]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:42:25.849535 kubelet[2266]: I0317 18:42:25.848980 2266 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:42:26.650020 kubelet[2266]: I0317 18:42:26.649912 2266 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:42:26.650020 kubelet[2266]: I0317 18:42:26.649941 2266 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:42:26.650348 kubelet[2266]: I0317 18:42:26.650216 2266 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:42:27.101210 kubelet[2266]: I0317 18:42:27.101129 2266 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:42:27.106017 kubelet[2266]: E0317 18:42:27.105942 2266 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.236:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.236:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:42:27.131685 kubelet[2266]: E0317 18:42:27.131611 2266 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:42:27.132092 kubelet[2266]: I0317 18:42:27.132011 2266 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Mar 17 18:42:27.142455 kubelet[2266]: I0317 18:42:27.142383 2266 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:42:27.142695 kubelet[2266]: I0317 18:42:27.142644 2266 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:42:27.143109 kubelet[2266]: I0317 18:42:27.142994 2266 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:42:27.143521 kubelet[2266]: I0317 18:42:27.143098 2266 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-0-9-731388c134.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Ex
perimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:42:27.143521 kubelet[2266]: I0317 18:42:27.143511 2266 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:42:27.143819 kubelet[2266]: I0317 18:42:27.143539 2266 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:42:27.143819 kubelet[2266]: I0317 18:42:27.143758 2266 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:42:27.148873 kubelet[2266]: I0317 18:42:27.148464 2266 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:42:27.148873 kubelet[2266]: I0317 18:42:27.148515 2266 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:42:27.148873 kubelet[2266]: I0317 18:42:27.148574 2266 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:42:27.148873 kubelet[2266]: I0317 18:42:27.148601 2266 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:42:27.156028 kubelet[2266]: W0317 18:42:27.155563 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-731388c134.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Mar 17 18:42:27.156028 kubelet[2266]: E0317 18:42:27.155715 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-731388c134.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.236:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:42:27.159926 kubelet[2266]: I0317 18:42:27.159666 2266 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 18:42:27.165009 kubelet[2266]: I0317 18:42:27.164116 2266 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:42:27.165009 kubelet[2266]: W0317 18:42:27.164251 2266 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:42:27.165580 kubelet[2266]: I0317 18:42:27.165499 2266 server.go:1269] "Started kubelet" Mar 17 18:42:27.172838 kubelet[2266]: W0317 18:42:27.172754 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Mar 17 18:42:27.173116 kubelet[2266]: E0317 18:42:27.173030 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.236:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:42:27.173546 kubelet[2266]: I0317 18:42:27.173471 2266 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:42:27.177573 kubelet[2266]: I0317 18:42:27.177540 2266 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:42:27.178913 kubelet[2266]: I0317 18:42:27.178790 2266 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:42:27.179413 kubelet[2266]: I0317 18:42:27.179363 2266 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:42:27.185073 kubelet[2266]: E0317 18:42:27.179700 2266 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://172.24.4.236:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.236:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-0-9-731388c134.novalocal.182dab45ce5bb146 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-0-9-731388c134.novalocal,UID:ci-4230-1-0-9-731388c134.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-0-9-731388c134.novalocal,},FirstTimestamp:2025-03-17 18:42:27.165458758 +0000 UTC m=+1.542157218,LastTimestamp:2025-03-17 18:42:27.165458758 +0000 UTC m=+1.542157218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-0-9-731388c134.novalocal,}" Mar 17 18:42:27.187355 kubelet[2266]: I0317 18:42:27.187299 2266 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:42:27.189390 kubelet[2266]: I0317 18:42:27.189333 2266 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:42:27.191395 kubelet[2266]: I0317 18:42:27.191361 2266 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:42:27.192017 kubelet[2266]: E0317 18:42:27.191976 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:27.194208 kubelet[2266]: E0317 18:42:27.194162 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-731388c134.novalocal?timeout=10s\": dial tcp 172.24.4.236:6443: connect: connection refused" interval="200ms" Mar 17 18:42:27.194734 kubelet[2266]: I0317 18:42:27.194689 2266 desired_state_of_world_populator.go:146] "Desired 
state populator starts to run" Mar 17 18:42:27.198320 kubelet[2266]: W0317 18:42:27.198240 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Mar 17 18:42:27.198379 kubelet[2266]: E0317 18:42:27.198351 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.236:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:42:27.199215 kubelet[2266]: I0317 18:42:27.199181 2266 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:42:27.199264 kubelet[2266]: I0317 18:42:27.199219 2266 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:42:27.199388 kubelet[2266]: I0317 18:42:27.199351 2266 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:42:27.199640 kubelet[2266]: I0317 18:42:27.199626 2266 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:42:27.206388 kubelet[2266]: E0317 18:42:27.206358 2266 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:42:27.213110 kubelet[2266]: I0317 18:42:27.213074 2266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:42:27.214239 kubelet[2266]: I0317 18:42:27.214225 2266 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:42:27.214310 kubelet[2266]: I0317 18:42:27.214301 2266 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:42:27.214380 kubelet[2266]: I0317 18:42:27.214371 2266 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:42:27.214481 kubelet[2266]: E0317 18:42:27.214457 2266 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:42:27.224362 kubelet[2266]: W0317 18:42:27.224315 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Mar 17 18:42:27.224707 kubelet[2266]: E0317 18:42:27.224677 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.236:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:42:27.229731 kubelet[2266]: I0317 18:42:27.229716 2266 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:42:27.229859 kubelet[2266]: I0317 18:42:27.229840 2266 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:42:27.230057 kubelet[2266]: I0317 18:42:27.229955 2266 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:42:27.234608 kubelet[2266]: I0317 18:42:27.234562 2266 policy_none.go:49] "None policy: Start" Mar 17 18:42:27.235399 kubelet[2266]: I0317 18:42:27.235175 2266 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:42:27.235399 kubelet[2266]: I0317 18:42:27.235195 2266 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:42:27.246194 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Mar 17 18:42:27.259680 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 18:42:27.262811 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 18:42:27.272839 kubelet[2266]: I0317 18:42:27.272812 2266 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:42:27.273074 kubelet[2266]: I0317 18:42:27.273005 2266 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:42:27.273074 kubelet[2266]: I0317 18:42:27.273028 2266 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:42:27.273432 kubelet[2266]: I0317 18:42:27.273401 2266 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:42:27.275826 kubelet[2266]: E0317 18:42:27.275733 2266 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:27.336545 systemd[1]: Created slice kubepods-burstable-pod881972891938c37b319d50441808a96a.slice - libcontainer container kubepods-burstable-pod881972891938c37b319d50441808a96a.slice. Mar 17 18:42:27.348096 update_engine[1471]: I20250317 18:42:27.347983 1471 update_attempter.cc:509] Updating boot flags... Mar 17 18:42:27.354094 systemd[1]: Created slice kubepods-burstable-podaeafdbf240e0d4410153f1a33b045c51.slice - libcontainer container kubepods-burstable-podaeafdbf240e0d4410153f1a33b045c51.slice. Mar 17 18:42:27.375982 systemd[1]: Created slice kubepods-burstable-pod3dd77e7272a70fc2862d9b7beac7109a.slice - libcontainer container kubepods-burstable-pod3dd77e7272a70fc2862d9b7beac7109a.slice. 
Mar 17 18:42:27.381256 kubelet[2266]: I0317 18:42:27.381200 2266 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.385597 kubelet[2266]: E0317 18:42:27.385538 2266 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.236:6443/api/v1/nodes\": dial tcp 172.24.4.236:6443: connect: connection refused" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.398552 kubelet[2266]: E0317 18:42:27.398473 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-731388c134.novalocal?timeout=10s\": dial tcp 172.24.4.236:6443: connect: connection refused" interval="400ms" Mar 17 18:42:27.400915 kubelet[2266]: I0317 18:42:27.400768 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.400915 kubelet[2266]: I0317 18:42:27.400873 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3dd77e7272a70fc2862d9b7beac7109a-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"3dd77e7272a70fc2862d9b7beac7109a\") " pod="kube-system/kube-scheduler-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.402083 kubelet[2266]: I0317 18:42:27.400926 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.402083 kubelet[2266]: I0317 18:42:27.400975 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.402083 kubelet[2266]: I0317 18:42:27.401030 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.402083 kubelet[2266]: I0317 18:42:27.401923 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/881972891938c37b319d50441808a96a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"881972891938c37b319d50441808a96a\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.402301 kubelet[2266]: I0317 18:42:27.401970 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 
17 18:42:27.402301 kubelet[2266]: I0317 18:42:27.402015 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/881972891938c37b319d50441808a96a-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"881972891938c37b319d50441808a96a\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.403221 kubelet[2266]: I0317 18:42:27.403174 2266 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/881972891938c37b319d50441808a96a-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"881972891938c37b319d50441808a96a\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.409092 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2302) Mar 17 18:42:27.478348 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2301) Mar 17 18:42:27.588216 kubelet[2266]: I0317 18:42:27.588146 2266 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.588543 kubelet[2266]: E0317 18:42:27.588506 2266 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.236:6443/api/v1/nodes\": dial tcp 172.24.4.236:6443: connect: connection refused" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.651369 containerd[1483]: time="2025-03-17T18:42:27.650758205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-9-731388c134.novalocal,Uid:881972891938c37b319d50441808a96a,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:27.672542 containerd[1483]: time="2025-03-17T18:42:27.672294844Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal,Uid:aeafdbf240e0d4410153f1a33b045c51,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:27.692197 containerd[1483]: time="2025-03-17T18:42:27.691558768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-9-731388c134.novalocal,Uid:3dd77e7272a70fc2862d9b7beac7109a,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:27.801005 kubelet[2266]: E0317 18:42:27.800873 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-731388c134.novalocal?timeout=10s\": dial tcp 172.24.4.236:6443: connect: connection refused" interval="800ms" Mar 17 18:42:27.992499 kubelet[2266]: I0317 18:42:27.992423 2266 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:27.993453 kubelet[2266]: E0317 18:42:27.993398 2266 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.236:6443/api/v1/nodes\": dial tcp 172.24.4.236:6443: connect: connection refused" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:28.150192 kubelet[2266]: W0317 18:42:28.149928 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Mar 17 18:42:28.150192 kubelet[2266]: E0317 18:42:28.150113 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.236:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:42:28.242298 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1457070707.mount: Deactivated successfully. Mar 17 18:42:28.249702 containerd[1483]: time="2025-03-17T18:42:28.249453872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:42:28.253088 containerd[1483]: time="2025-03-17T18:42:28.252932858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 17 18:42:28.258371 containerd[1483]: time="2025-03-17T18:42:28.258316415Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:42:28.261926 containerd[1483]: time="2025-03-17T18:42:28.261808064Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:42:28.264101 containerd[1483]: time="2025-03-17T18:42:28.263842292Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:42:28.266075 containerd[1483]: time="2025-03-17T18:42:28.265443396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 18:42:28.266803 containerd[1483]: time="2025-03-17T18:42:28.266740332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 18:42:28.272116 containerd[1483]: time="2025-03-17T18:42:28.271978774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 18:42:28.279580 containerd[1483]: time="2025-03-17T18:42:28.279471710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 628.495732ms" Mar 17 18:42:28.284648 containerd[1483]: time="2025-03-17T18:42:28.284592268Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.075411ms" Mar 17 18:42:28.293215 containerd[1483]: time="2025-03-17T18:42:28.293023469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 601.239333ms" Mar 17 18:42:28.355236 kubelet[2266]: W0317 18:42:28.355168 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Mar 17 18:42:28.355236 kubelet[2266]: E0317 18:42:28.355232 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.236:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.236:6443: 
connect: connection refused" logger="UnhandledError" Mar 17 18:42:28.480388 containerd[1483]: time="2025-03-17T18:42:28.477339941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:28.480388 containerd[1483]: time="2025-03-17T18:42:28.480248021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:28.480388 containerd[1483]: time="2025-03-17T18:42:28.480280802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:28.480968 containerd[1483]: time="2025-03-17T18:42:28.480688458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:28.483284 containerd[1483]: time="2025-03-17T18:42:28.483117025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:28.483396 containerd[1483]: time="2025-03-17T18:42:28.483259406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:28.483591 containerd[1483]: time="2025-03-17T18:42:28.483514892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:28.484083 containerd[1483]: time="2025-03-17T18:42:28.483829681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:28.484207 containerd[1483]: time="2025-03-17T18:42:28.484056602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:28.484207 containerd[1483]: time="2025-03-17T18:42:28.484078103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:28.484490 containerd[1483]: time="2025-03-17T18:42:28.484269697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:28.486397 containerd[1483]: time="2025-03-17T18:42:28.486274289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:28.507222 systemd[1]: Started cri-containerd-793e6859bc692a5efa07be8af97853a410fbc31e9ddb3d887d22e28616b50a6c.scope - libcontainer container 793e6859bc692a5efa07be8af97853a410fbc31e9ddb3d887d22e28616b50a6c. Mar 17 18:42:28.519261 systemd[1]: Started cri-containerd-610fbf96b99df8aa853aa6d20f6ee55e868cc26cd0c55489033e77a0012fa615.scope - libcontainer container 610fbf96b99df8aa853aa6d20f6ee55e868cc26cd0c55489033e77a0012fa615. Mar 17 18:42:28.523558 systemd[1]: Started cri-containerd-b5d8a95c2229f94985b6b9614addd93dd03a5f43013746f028e975f4fadb1571.scope - libcontainer container b5d8a95c2229f94985b6b9614addd93dd03a5f43013746f028e975f4fadb1571. 
Mar 17 18:42:28.533214 kubelet[2266]: W0317 18:42:28.533002 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Mar 17 18:42:28.533214 kubelet[2266]: E0317 18:42:28.533089 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.236:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:42:28.586904 containerd[1483]: time="2025-03-17T18:42:28.586753777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-0-9-731388c134.novalocal,Uid:881972891938c37b319d50441808a96a,Namespace:kube-system,Attempt:0,} returns sandbox id \"610fbf96b99df8aa853aa6d20f6ee55e868cc26cd0c55489033e77a0012fa615\"" Mar 17 18:42:28.591406 containerd[1483]: time="2025-03-17T18:42:28.591281366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal,Uid:aeafdbf240e0d4410153f1a33b045c51,Namespace:kube-system,Attempt:0,} returns sandbox id \"793e6859bc692a5efa07be8af97853a410fbc31e9ddb3d887d22e28616b50a6c\"" Mar 17 18:42:28.592207 containerd[1483]: time="2025-03-17T18:42:28.592107097Z" level=info msg="CreateContainer within sandbox \"610fbf96b99df8aa853aa6d20f6ee55e868cc26cd0c55489033e77a0012fa615\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:42:28.595558 containerd[1483]: time="2025-03-17T18:42:28.595439252Z" level=info msg="CreateContainer within sandbox \"793e6859bc692a5efa07be8af97853a410fbc31e9ddb3d887d22e28616b50a6c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:42:28.602134 kubelet[2266]: E0317 
18:42:28.602084 2266 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-0-9-731388c134.novalocal?timeout=10s\": dial tcp 172.24.4.236:6443: connect: connection refused" interval="1.6s" Mar 17 18:42:28.613971 containerd[1483]: time="2025-03-17T18:42:28.613920965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-0-9-731388c134.novalocal,Uid:3dd77e7272a70fc2862d9b7beac7109a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5d8a95c2229f94985b6b9614addd93dd03a5f43013746f028e975f4fadb1571\"" Mar 17 18:42:28.616775 containerd[1483]: time="2025-03-17T18:42:28.616740155Z" level=info msg="CreateContainer within sandbox \"b5d8a95c2229f94985b6b9614addd93dd03a5f43013746f028e975f4fadb1571\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:42:28.622799 containerd[1483]: time="2025-03-17T18:42:28.622673227Z" level=info msg="CreateContainer within sandbox \"610fbf96b99df8aa853aa6d20f6ee55e868cc26cd0c55489033e77a0012fa615\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb409a98eca950cd3782596c6818499322eea874bbe5505b079f1d8661eaf336\"" Mar 17 18:42:28.624257 containerd[1483]: time="2025-03-17T18:42:28.623231448Z" level=info msg="StartContainer for \"fb409a98eca950cd3782596c6818499322eea874bbe5505b079f1d8661eaf336\"" Mar 17 18:42:28.630241 containerd[1483]: time="2025-03-17T18:42:28.630201823Z" level=info msg="CreateContainer within sandbox \"793e6859bc692a5efa07be8af97853a410fbc31e9ddb3d887d22e28616b50a6c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"41a99e1a39b4a7b02a4c3aa1d72bef0d622ec13ecec796f467b6b33a5ef779f5\"" Mar 17 18:42:28.630947 containerd[1483]: time="2025-03-17T18:42:28.630914587Z" level=info msg="StartContainer for \"41a99e1a39b4a7b02a4c3aa1d72bef0d622ec13ecec796f467b6b33a5ef779f5\"" Mar 17 18:42:28.653221 
systemd[1]: Started cri-containerd-fb409a98eca950cd3782596c6818499322eea874bbe5505b079f1d8661eaf336.scope - libcontainer container fb409a98eca950cd3782596c6818499322eea874bbe5505b079f1d8661eaf336. Mar 17 18:42:28.661258 containerd[1483]: time="2025-03-17T18:42:28.661211666Z" level=info msg="CreateContainer within sandbox \"b5d8a95c2229f94985b6b9614addd93dd03a5f43013746f028e975f4fadb1571\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"69b900c0fe651d70ee70df522b227047a560645b57ebd5176a1bf05fc7549381\"" Mar 17 18:42:28.662432 containerd[1483]: time="2025-03-17T18:42:28.662414283Z" level=info msg="StartContainer for \"69b900c0fe651d70ee70df522b227047a560645b57ebd5176a1bf05fc7549381\"" Mar 17 18:42:28.674309 systemd[1]: Started cri-containerd-41a99e1a39b4a7b02a4c3aa1d72bef0d622ec13ecec796f467b6b33a5ef779f5.scope - libcontainer container 41a99e1a39b4a7b02a4c3aa1d72bef0d622ec13ecec796f467b6b33a5ef779f5. Mar 17 18:42:28.701109 systemd[1]: Started cri-containerd-69b900c0fe651d70ee70df522b227047a560645b57ebd5176a1bf05fc7549381.scope - libcontainer container 69b900c0fe651d70ee70df522b227047a560645b57ebd5176a1bf05fc7549381. 
Mar 17 18:42:28.728105 containerd[1483]: time="2025-03-17T18:42:28.726893859Z" level=info msg="StartContainer for \"fb409a98eca950cd3782596c6818499322eea874bbe5505b079f1d8661eaf336\" returns successfully" Mar 17 18:42:28.730969 kubelet[2266]: W0317 18:42:28.730910 2266 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-731388c134.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Mar 17 18:42:28.731046 kubelet[2266]: E0317 18:42:28.730977 2266 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-0-9-731388c134.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.236:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:42:28.772592 containerd[1483]: time="2025-03-17T18:42:28.772381532Z" level=info msg="StartContainer for \"69b900c0fe651d70ee70df522b227047a560645b57ebd5176a1bf05fc7549381\" returns successfully" Mar 17 18:42:28.772592 containerd[1483]: time="2025-03-17T18:42:28.772382343Z" level=info msg="StartContainer for \"41a99e1a39b4a7b02a4c3aa1d72bef0d622ec13ecec796f467b6b33a5ef779f5\" returns successfully" Mar 17 18:42:28.797287 kubelet[2266]: I0317 18:42:28.796699 2266 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:28.798169 kubelet[2266]: E0317 18:42:28.798143 2266 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.236:6443/api/v1/nodes\": dial tcp 172.24.4.236:6443: connect: connection refused" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:30.400941 kubelet[2266]: I0317 18:42:30.400896 2266 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-731388c134.novalocal" 
Mar 17 18:42:30.994825 kubelet[2266]: E0317 18:42:30.994783 2266 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-0-9-731388c134.novalocal\" not found" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:31.050503 kubelet[2266]: I0317 18:42:31.049848 2266 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:31.050503 kubelet[2266]: E0317 18:42:31.050497 2266 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230-1-0-9-731388c134.novalocal\": node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:31.065242 kubelet[2266]: E0317 18:42:31.065146 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:31.165907 kubelet[2266]: E0317 18:42:31.165853 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:31.266463 kubelet[2266]: E0317 18:42:31.266249 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:31.366809 kubelet[2266]: E0317 18:42:31.366735 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:31.467750 kubelet[2266]: E0317 18:42:31.467668 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:31.568270 kubelet[2266]: E0317 18:42:31.568082 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:31.668877 kubelet[2266]: E0317 18:42:31.668778 2266 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:31.769548 kubelet[2266]: E0317 18:42:31.769472 2266 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:32.177079 kubelet[2266]: I0317 18:42:32.176281 2266 apiserver.go:52] "Watching apiserver" Mar 17 18:42:32.195555 kubelet[2266]: I0317 18:42:32.195485 2266 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:42:33.387553 kubelet[2266]: W0317 18:42:33.387448 2266 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:42:33.887238 systemd[1]: Reload requested from client PID 2552 ('systemctl') (unit session-9.scope)... Mar 17 18:42:33.887263 systemd[1]: Reloading... Mar 17 18:42:34.018096 zram_generator::config[2598]: No configuration found. Mar 17 18:42:34.183420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:42:34.327297 systemd[1]: Reloading finished in 439 ms. Mar 17 18:42:34.361500 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:42:34.375624 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:42:34.375996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 18:42:34.376057 systemd[1]: kubelet.service: Consumed 1.464s CPU time, 118.5M memory peak. Mar 17 18:42:34.381389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 18:42:34.669386 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 18:42:34.669444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 18:42:34.718753 kubelet[2662]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:42:34.718753 kubelet[2662]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:42:34.718753 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:42:34.719565 kubelet[2662]: I0317 18:42:34.718770 2662 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:42:34.726234 kubelet[2662]: I0317 18:42:34.726198 2662 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:42:34.726234 kubelet[2662]: I0317 18:42:34.726222 2662 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:42:34.727330 kubelet[2662]: I0317 18:42:34.726467 2662 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:42:34.728465 kubelet[2662]: I0317 18:42:34.727868 2662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 17 18:42:34.735322 kubelet[2662]: I0317 18:42:34.735101 2662 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:42:34.743891 kubelet[2662]: E0317 18:42:34.743826 2662 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:42:34.744013 kubelet[2662]: I0317 18:42:34.744001 2662 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:42:34.747095 kubelet[2662]: I0317 18:42:34.746965 2662 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:42:34.748164 kubelet[2662]: I0317 18:42:34.748142 2662 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:42:34.748325 kubelet[2662]: I0317 18:42:34.748288 2662 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:42:34.748544 kubelet[2662]: I0317 18:42:34.748320 2662 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230-1-0-9-731388c134.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:42:34.748544 kubelet[2662]: I0317 18:42:34.748543 2662 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:42:34.748746 kubelet[2662]: I0317 18:42:34.748555 2662 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:42:34.748746 kubelet[2662]: I0317 18:42:34.748602 2662 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:42:34.748746 kubelet[2662]: I0317 18:42:34.748708 2662 
kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:42:34.748746 kubelet[2662]: I0317 18:42:34.748721 2662 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:42:34.748746 kubelet[2662]: I0317 18:42:34.748747 2662 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:42:34.748865 kubelet[2662]: I0317 18:42:34.748758 2662 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:42:34.751537 kubelet[2662]: I0317 18:42:34.750233 2662 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 18:42:34.751537 kubelet[2662]: I0317 18:42:34.750645 2662 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:42:34.751537 kubelet[2662]: I0317 18:42:34.751077 2662 server.go:1269] "Started kubelet" Mar 17 18:42:34.753446 kubelet[2662]: I0317 18:42:34.752867 2662 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:42:34.760013 kubelet[2662]: I0317 18:42:34.758105 2662 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:42:34.760013 kubelet[2662]: I0317 18:42:34.758953 2662 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:42:34.760013 kubelet[2662]: I0317 18:42:34.759738 2662 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:42:34.763392 kubelet[2662]: I0317 18:42:34.763205 2662 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:42:34.763453 kubelet[2662]: I0317 18:42:34.763444 2662 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:42:34.769073 kubelet[2662]: I0317 18:42:34.767480 2662 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:42:34.769073 kubelet[2662]: 
E0317 18:42:34.767659 2662 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-1-0-9-731388c134.novalocal\" not found" Mar 17 18:42:34.770396 kubelet[2662]: I0317 18:42:34.770364 2662 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:42:34.770500 kubelet[2662]: I0317 18:42:34.770482 2662 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:42:34.772458 kubelet[2662]: I0317 18:42:34.772414 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:42:34.774456 kubelet[2662]: I0317 18:42:34.774430 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:42:34.774537 kubelet[2662]: I0317 18:42:34.774462 2662 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:42:34.774537 kubelet[2662]: I0317 18:42:34.774478 2662 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:42:34.774537 kubelet[2662]: E0317 18:42:34.774512 2662 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:42:34.789566 kubelet[2662]: E0317 18:42:34.788429 2662 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:42:34.796528 kubelet[2662]: I0317 18:42:34.796496 2662 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:42:34.796528 kubelet[2662]: I0317 18:42:34.796519 2662 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:42:34.796922 kubelet[2662]: I0317 18:42:34.796890 2662 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:42:34.868125 kubelet[2662]: I0317 18:42:34.868095 2662 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:42:34.868125 kubelet[2662]: I0317 18:42:34.868113 2662 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:42:34.868125 kubelet[2662]: I0317 18:42:34.868129 2662 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:42:34.868296 kubelet[2662]: I0317 18:42:34.868271 2662 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:42:34.868296 kubelet[2662]: I0317 18:42:34.868282 2662 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:42:34.868362 kubelet[2662]: I0317 18:42:34.868300 2662 policy_none.go:49] "None policy: Start" Mar 17 18:42:34.868983 kubelet[2662]: I0317 18:42:34.868968 2662 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:42:34.869085 kubelet[2662]: I0317 18:42:34.868988 2662 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:42:34.869357 kubelet[2662]: I0317 18:42:34.869329 2662 state_mem.go:75] "Updated machine memory state" Mar 17 18:42:34.874599 kubelet[2662]: E0317 18:42:34.874578 2662 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:42:34.874848 sudo[2691]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf 
/opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:42:34.875244 sudo[2691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 18:42:34.875723 kubelet[2662]: I0317 18:42:34.875696 2662 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:42:34.875869 kubelet[2662]: I0317 18:42:34.875844 2662 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:42:34.875903 kubelet[2662]: I0317 18:42:34.875861 2662 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:42:34.878918 kubelet[2662]: I0317 18:42:34.878689 2662 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:42:34.986108 kubelet[2662]: I0317 18:42:34.984388 2662 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:34.994068 kubelet[2662]: I0317 18:42:34.994005 2662 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:34.994264 kubelet[2662]: I0317 18:42:34.994095 2662 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.084360 kubelet[2662]: W0317 18:42:35.084279 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:42:35.089045 kubelet[2662]: W0317 18:42:35.088604 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:42:35.091373 kubelet[2662]: W0317 18:42:35.091320 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:42:35.091514 kubelet[2662]: E0317 18:42:35.091468 2662 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.272500 kubelet[2662]: I0317 18:42:35.272387 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-ca-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.272500 kubelet[2662]: I0317 18:42:35.272441 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.272500 kubelet[2662]: I0317 18:42:35.272472 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/881972891938c37b319d50441808a96a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"881972891938c37b319d50441808a96a\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.272769 kubelet[2662]: I0317 18:42:35.272517 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " 
pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.272769 kubelet[2662]: I0317 18:42:35.272546 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.272769 kubelet[2662]: I0317 18:42:35.272567 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aeafdbf240e0d4410153f1a33b045c51-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"aeafdbf240e0d4410153f1a33b045c51\") " pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.272769 kubelet[2662]: I0317 18:42:35.272611 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3dd77e7272a70fc2862d9b7beac7109a-kubeconfig\") pod \"kube-scheduler-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"3dd77e7272a70fc2862d9b7beac7109a\") " pod="kube-system/kube-scheduler-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.272769 kubelet[2662]: I0317 18:42:35.272637 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/881972891938c37b319d50441808a96a-ca-certs\") pod \"kube-apiserver-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"881972891938c37b319d50441808a96a\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.272963 kubelet[2662]: I0317 18:42:35.272663 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/881972891938c37b319d50441808a96a-k8s-certs\") pod \"kube-apiserver-ci-4230-1-0-9-731388c134.novalocal\" (UID: \"881972891938c37b319d50441808a96a\") " pod="kube-system/kube-apiserver-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.454266 sudo[2691]: pam_unix(sudo:session): session closed for user root Mar 17 18:42:35.750030 kubelet[2662]: I0317 18:42:35.749905 2662 apiserver.go:52] "Watching apiserver" Mar 17 18:42:35.771340 kubelet[2662]: I0317 18:42:35.771293 2662 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:42:35.874423 kubelet[2662]: W0317 18:42:35.873817 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 18:42:35.876005 kubelet[2662]: E0317 18:42:35.875324 2662 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230-1-0-9-731388c134.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-0-9-731388c134.novalocal" Mar 17 18:42:35.935173 kubelet[2662]: I0317 18:42:35.934875 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-0-9-731388c134.novalocal" podStartSLOduration=2.93485903 podStartE2EDuration="2.93485903s" podCreationTimestamp="2025-03-17 18:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:35.934683568 +0000 UTC m=+1.260474382" watchObservedRunningTime="2025-03-17 18:42:35.93485903 +0000 UTC m=+1.260649844" Mar 17 18:42:35.935173 kubelet[2662]: I0317 18:42:35.934972 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-0-9-731388c134.novalocal" podStartSLOduration=0.934967416 podStartE2EDuration="934.967416ms" 
podCreationTimestamp="2025-03-17 18:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:35.921252647 +0000 UTC m=+1.247043461" watchObservedRunningTime="2025-03-17 18:42:35.934967416 +0000 UTC m=+1.260758220" Mar 17 18:42:35.961380 kubelet[2662]: I0317 18:42:35.961186 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-0-9-731388c134.novalocal" podStartSLOduration=0.961167828 podStartE2EDuration="961.167828ms" podCreationTimestamp="2025-03-17 18:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:35.946987067 +0000 UTC m=+1.272777891" watchObservedRunningTime="2025-03-17 18:42:35.961167828 +0000 UTC m=+1.286958652" Mar 17 18:42:37.626814 sudo[1709]: pam_unix(sudo:session): session closed for user root Mar 17 18:42:37.642471 kubelet[2662]: I0317 18:42:37.642320 2662 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:42:37.642856 kubelet[2662]: I0317 18:42:37.642737 2662 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:42:37.642890 containerd[1483]: time="2025-03-17T18:42:37.642593596Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:42:37.907125 sshd[1708]: Connection closed by 172.24.4.1 port 50960 Mar 17 18:42:37.907287 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:37.918019 systemd[1]: sshd@6-172.24.4.236:22-172.24.4.1:50960.service: Deactivated successfully. Mar 17 18:42:37.927199 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:42:37.927678 systemd[1]: session-9.scope: Consumed 7.093s CPU time, 260.2M memory peak. 
Mar 17 18:42:37.937126 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:42:37.942235 systemd-logind[1463]: Removed session 9. Mar 17 18:42:38.535840 systemd[1]: Created slice kubepods-besteffort-pod5559b8ae_7fa0_4f8a_9c34_9070c70a778e.slice - libcontainer container kubepods-besteffort-pod5559b8ae_7fa0_4f8a_9c34_9070c70a778e.slice. Mar 17 18:42:38.554071 systemd[1]: Created slice kubepods-burstable-pod9661ec8e_e1bb_4f46_a46e_995b7d287c8b.slice - libcontainer container kubepods-burstable-pod9661ec8e_e1bb_4f46_a46e_995b7d287c8b.slice. Mar 17 18:42:38.598927 kubelet[2662]: I0317 18:42:38.598895 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5559b8ae-7fa0-4f8a-9c34-9070c70a778e-kube-proxy\") pod \"kube-proxy-x9h86\" (UID: \"5559b8ae-7fa0-4f8a-9c34-9070c70a778e\") " pod="kube-system/kube-proxy-x9h86" Mar 17 18:42:38.599303 kubelet[2662]: I0317 18:42:38.599125 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5559b8ae-7fa0-4f8a-9c34-9070c70a778e-lib-modules\") pod \"kube-proxy-x9h86\" (UID: \"5559b8ae-7fa0-4f8a-9c34-9070c70a778e\") " pod="kube-system/kube-proxy-x9h86" Mar 17 18:42:38.599303 kubelet[2662]: I0317 18:42:38.599155 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4hrr\" (UniqueName: \"kubernetes.io/projected/5559b8ae-7fa0-4f8a-9c34-9070c70a778e-kube-api-access-c4hrr\") pod \"kube-proxy-x9h86\" (UID: \"5559b8ae-7fa0-4f8a-9c34-9070c70a778e\") " pod="kube-system/kube-proxy-x9h86" Mar 17 18:42:38.599303 kubelet[2662]: I0317 18:42:38.599197 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-run\") pod 
\"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599303 kubelet[2662]: I0317 18:42:38.599215 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-bpf-maps\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599303 kubelet[2662]: I0317 18:42:38.599281 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-host-proc-sys-net\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599472 kubelet[2662]: I0317 18:42:38.599330 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-host-proc-sys-kernel\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599472 kubelet[2662]: I0317 18:42:38.599379 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cni-path\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599472 kubelet[2662]: I0317 18:42:38.599403 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh7cb\" (UniqueName: \"kubernetes.io/projected/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-kube-api-access-nh7cb\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 
18:42:38.599472 kubelet[2662]: I0317 18:42:38.599429 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-cgroup\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599472 kubelet[2662]: I0317 18:42:38.599453 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-lib-modules\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599472 kubelet[2662]: I0317 18:42:38.599472 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-clustermesh-secrets\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599618 kubelet[2662]: I0317 18:42:38.599496 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-etc-cni-netd\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599618 kubelet[2662]: I0317 18:42:38.599513 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-config-path\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599618 kubelet[2662]: I0317 18:42:38.599530 2662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5559b8ae-7fa0-4f8a-9c34-9070c70a778e-xtables-lock\") pod \"kube-proxy-x9h86\" (UID: \"5559b8ae-7fa0-4f8a-9c34-9070c70a778e\") " pod="kube-system/kube-proxy-x9h86" Mar 17 18:42:38.599618 kubelet[2662]: I0317 18:42:38.599546 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-hostproc\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599618 kubelet[2662]: I0317 18:42:38.599562 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-xtables-lock\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.599618 kubelet[2662]: I0317 18:42:38.599579 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-hubble-tls\") pod \"cilium-dg6jw\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " pod="kube-system/cilium-dg6jw" Mar 17 18:42:38.809183 systemd[1]: Created slice kubepods-besteffort-pod8db7f485_3c3a_4691_ac92_9548382d0f9e.slice - libcontainer container kubepods-besteffort-pod8db7f485_3c3a_4691_ac92_9548382d0f9e.slice. 
Mar 17 18:42:38.851224 containerd[1483]: time="2025-03-17T18:42:38.851165185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x9h86,Uid:5559b8ae-7fa0-4f8a-9c34-9070c70a778e,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:38.861614 containerd[1483]: time="2025-03-17T18:42:38.861575552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dg6jw,Uid:9661ec8e-e1bb-4f46-a46e-995b7d287c8b,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:38.888876 containerd[1483]: time="2025-03-17T18:42:38.888273287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:38.889341 containerd[1483]: time="2025-03-17T18:42:38.889114346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:38.889341 containerd[1483]: time="2025-03-17T18:42:38.889258739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:38.889522 containerd[1483]: time="2025-03-17T18:42:38.889492310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:38.902698 kubelet[2662]: I0317 18:42:38.902656 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66f2v\" (UniqueName: \"kubernetes.io/projected/8db7f485-3c3a-4691-ac92-9548382d0f9e-kube-api-access-66f2v\") pod \"cilium-operator-5d85765b45-pngr9\" (UID: \"8db7f485-3c3a-4691-ac92-9548382d0f9e\") " pod="kube-system/cilium-operator-5d85765b45-pngr9" Mar 17 18:42:38.903021 kubelet[2662]: I0317 18:42:38.902712 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8db7f485-3c3a-4691-ac92-9548382d0f9e-cilium-config-path\") pod \"cilium-operator-5d85765b45-pngr9\" (UID: \"8db7f485-3c3a-4691-ac92-9548382d0f9e\") " pod="kube-system/cilium-operator-5d85765b45-pngr9" Mar 17 18:42:38.903367 containerd[1483]: time="2025-03-17T18:42:38.903284906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:38.903367 containerd[1483]: time="2025-03-17T18:42:38.903344579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:38.903557 containerd[1483]: time="2025-03-17T18:42:38.903492899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:38.903813 containerd[1483]: time="2025-03-17T18:42:38.903763360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:38.914199 systemd[1]: Started cri-containerd-87144c34b48de2e209ec9907d5bd703921032e98964d0931939155cb32420346.scope - libcontainer container 87144c34b48de2e209ec9907d5bd703921032e98964d0931939155cb32420346. 
Mar 17 18:42:38.930185 systemd[1]: Started cri-containerd-c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814.scope - libcontainer container c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814. Mar 17 18:42:38.954431 containerd[1483]: time="2025-03-17T18:42:38.954395449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x9h86,Uid:5559b8ae-7fa0-4f8a-9c34-9070c70a778e,Namespace:kube-system,Attempt:0,} returns sandbox id \"87144c34b48de2e209ec9907d5bd703921032e98964d0931939155cb32420346\"" Mar 17 18:42:38.958354 containerd[1483]: time="2025-03-17T18:42:38.958171543Z" level=info msg="CreateContainer within sandbox \"87144c34b48de2e209ec9907d5bd703921032e98964d0931939155cb32420346\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:42:38.968359 containerd[1483]: time="2025-03-17T18:42:38.968327669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dg6jw,Uid:9661ec8e-e1bb-4f46-a46e-995b7d287c8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\"" Mar 17 18:42:38.971077 containerd[1483]: time="2025-03-17T18:42:38.971056254Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:42:38.989242 containerd[1483]: time="2025-03-17T18:42:38.989177928Z" level=info msg="CreateContainer within sandbox \"87144c34b48de2e209ec9907d5bd703921032e98964d0931939155cb32420346\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a857adf5a49a472bfbfeee2e0552a4be139011af1616ea2739878f77151056a4\"" Mar 17 18:42:38.989934 containerd[1483]: time="2025-03-17T18:42:38.989839558Z" level=info msg="StartContainer for \"a857adf5a49a472bfbfeee2e0552a4be139011af1616ea2739878f77151056a4\"" Mar 17 18:42:39.021230 systemd[1]: Started cri-containerd-a857adf5a49a472bfbfeee2e0552a4be139011af1616ea2739878f77151056a4.scope - libcontainer container 
a857adf5a49a472bfbfeee2e0552a4be139011af1616ea2739878f77151056a4. Mar 17 18:42:39.059086 containerd[1483]: time="2025-03-17T18:42:39.059000672Z" level=info msg="StartContainer for \"a857adf5a49a472bfbfeee2e0552a4be139011af1616ea2739878f77151056a4\" returns successfully" Mar 17 18:42:39.115187 containerd[1483]: time="2025-03-17T18:42:39.114778716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pngr9,Uid:8db7f485-3c3a-4691-ac92-9548382d0f9e,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:39.145313 containerd[1483]: time="2025-03-17T18:42:39.145216861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:39.145685 containerd[1483]: time="2025-03-17T18:42:39.145643616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:39.145851 containerd[1483]: time="2025-03-17T18:42:39.145804300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:39.146114 containerd[1483]: time="2025-03-17T18:42:39.146042711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:39.172573 systemd[1]: Started cri-containerd-8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b.scope - libcontainer container 8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b. 
Mar 17 18:42:39.222084 containerd[1483]: time="2025-03-17T18:42:39.221956485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pngr9,Uid:8db7f485-3c3a-4691-ac92-9548382d0f9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b\"" Mar 17 18:42:39.888096 kubelet[2662]: I0317 18:42:39.887663 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x9h86" podStartSLOduration=1.887629614 podStartE2EDuration="1.887629614s" podCreationTimestamp="2025-03-17 18:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:39.885471629 +0000 UTC m=+5.211262483" watchObservedRunningTime="2025-03-17 18:42:39.887629614 +0000 UTC m=+5.213420468" Mar 17 18:42:45.661694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1773941031.mount: Deactivated successfully. 
Mar 17 18:42:52.819433 containerd[1483]: time="2025-03-17T18:42:52.819354325Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:52.821706 containerd[1483]: time="2025-03-17T18:42:52.821679950Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 17 18:42:52.823320 containerd[1483]: time="2025-03-17T18:42:52.823238150Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:52.825052 containerd[1483]: time="2025-03-17T18:42:52.824917198Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.853580185s" Mar 17 18:42:52.825052 containerd[1483]: time="2025-03-17T18:42:52.824949270Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 18:42:52.827076 containerd[1483]: time="2025-03-17T18:42:52.826557865Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:42:52.827876 containerd[1483]: time="2025-03-17T18:42:52.827852501Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:42:52.854985 containerd[1483]: time="2025-03-17T18:42:52.854926667Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\"" Mar 17 18:42:52.856548 containerd[1483]: time="2025-03-17T18:42:52.856210913Z" level=info msg="StartContainer for \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\"" Mar 17 18:42:52.889220 systemd[1]: Started cri-containerd-e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d.scope - libcontainer container e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d. Mar 17 18:42:52.946507 containerd[1483]: time="2025-03-17T18:42:52.946459472Z" level=info msg="StartContainer for \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\" returns successfully" Mar 17 18:42:52.949415 systemd[1]: cri-containerd-e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d.scope: Deactivated successfully. Mar 17 18:42:52.950193 systemd[1]: cri-containerd-e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d.scope: Consumed 31ms CPU time, 6.5M memory peak, 3.2M written to disk. Mar 17 18:42:53.847799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d-rootfs.mount: Deactivated successfully. 
Mar 17 18:42:54.236169 containerd[1483]: time="2025-03-17T18:42:54.235710765Z" level=info msg="shim disconnected" id=e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d namespace=k8s.io Mar 17 18:42:54.236169 containerd[1483]: time="2025-03-17T18:42:54.235823698Z" level=warning msg="cleaning up after shim disconnected" id=e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d namespace=k8s.io Mar 17 18:42:54.236169 containerd[1483]: time="2025-03-17T18:42:54.235845488Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:42:54.921076 containerd[1483]: time="2025-03-17T18:42:54.920889710Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:42:54.983403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171534575.mount: Deactivated successfully. Mar 17 18:42:55.017843 containerd[1483]: time="2025-03-17T18:42:55.017786756Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\"" Mar 17 18:42:55.018263 containerd[1483]: time="2025-03-17T18:42:55.018230630Z" level=info msg="StartContainer for \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\"" Mar 17 18:42:55.054218 systemd[1]: Started cri-containerd-60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765.scope - libcontainer container 60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765. Mar 17 18:42:55.086290 containerd[1483]: time="2025-03-17T18:42:55.084799904Z" level=info msg="StartContainer for \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\" returns successfully" Mar 17 18:42:55.098263 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 17 18:42:55.098899 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:42:55.099178 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 18:42:55.104351 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 18:42:55.104579 systemd[1]: cri-containerd-60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765.scope: Deactivated successfully. Mar 17 18:42:55.122571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 18:42:55.138513 containerd[1483]: time="2025-03-17T18:42:55.138444387Z" level=info msg="shim disconnected" id=60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765 namespace=k8s.io Mar 17 18:42:55.138513 containerd[1483]: time="2025-03-17T18:42:55.138503209Z" level=warning msg="cleaning up after shim disconnected" id=60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765 namespace=k8s.io Mar 17 18:42:55.138513 containerd[1483]: time="2025-03-17T18:42:55.138513237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:42:55.923122 containerd[1483]: time="2025-03-17T18:42:55.923004998Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:42:55.961883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765-rootfs.mount: Deactivated successfully. 
Mar 17 18:42:55.965504 containerd[1483]: time="2025-03-17T18:42:55.965377690Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\"" Mar 17 18:42:55.966822 containerd[1483]: time="2025-03-17T18:42:55.966442481Z" level=info msg="StartContainer for \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\"" Mar 17 18:42:56.013201 systemd[1]: run-containerd-runc-k8s.io-bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d-runc.92ACWs.mount: Deactivated successfully. Mar 17 18:42:56.025249 systemd[1]: Started cri-containerd-bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d.scope - libcontainer container bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d. Mar 17 18:42:56.078929 containerd[1483]: time="2025-03-17T18:42:56.078876090Z" level=info msg="StartContainer for \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\" returns successfully" Mar 17 18:42:56.080256 systemd[1]: cri-containerd-bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d.scope: Deactivated successfully. 
Mar 17 18:42:56.292615 containerd[1483]: time="2025-03-17T18:42:56.292128694Z" level=info msg="shim disconnected" id=bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d namespace=k8s.io Mar 17 18:42:56.292615 containerd[1483]: time="2025-03-17T18:42:56.292351543Z" level=warning msg="cleaning up after shim disconnected" id=bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d namespace=k8s.io Mar 17 18:42:56.292615 containerd[1483]: time="2025-03-17T18:42:56.292371741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:42:56.322956 containerd[1483]: time="2025-03-17T18:42:56.322844342Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 18:42:56.513446 containerd[1483]: time="2025-03-17T18:42:56.512568989Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:56.513621 containerd[1483]: time="2025-03-17T18:42:56.513588976Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 17 18:42:56.514702 containerd[1483]: time="2025-03-17T18:42:56.514680208Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 18:42:56.516183 containerd[1483]: time="2025-03-17T18:42:56.516145572Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", 
repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.689553393s" Mar 17 18:42:56.516234 containerd[1483]: time="2025-03-17T18:42:56.516184465Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 18:42:56.518829 containerd[1483]: time="2025-03-17T18:42:56.518800062Z" level=info msg="CreateContainer within sandbox \"8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:42:56.537692 containerd[1483]: time="2025-03-17T18:42:56.537658792Z" level=info msg="CreateContainer within sandbox \"8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\"" Mar 17 18:42:56.539224 containerd[1483]: time="2025-03-17T18:42:56.539183698Z" level=info msg="StartContainer for \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\"" Mar 17 18:42:56.563181 systemd[1]: Started cri-containerd-06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b.scope - libcontainer container 06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b. 
Mar 17 18:42:56.594340 containerd[1483]: time="2025-03-17T18:42:56.594231736Z" level=info msg="StartContainer for \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\" returns successfully" Mar 17 18:42:56.927919 containerd[1483]: time="2025-03-17T18:42:56.927704905Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:42:56.953633 containerd[1483]: time="2025-03-17T18:42:56.953583842Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\"" Mar 17 18:42:56.954600 containerd[1483]: time="2025-03-17T18:42:56.954504212Z" level=info msg="StartContainer for \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\"" Mar 17 18:42:56.963570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d-rootfs.mount: Deactivated successfully. Mar 17 18:42:57.004650 systemd[1]: Started cri-containerd-528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa.scope - libcontainer container 528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa. Mar 17 18:42:57.087183 containerd[1483]: time="2025-03-17T18:42:57.087055434Z" level=info msg="StartContainer for \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\" returns successfully" Mar 17 18:42:57.093429 systemd[1]: cri-containerd-528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa.scope: Deactivated successfully. Mar 17 18:42:57.126771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa-rootfs.mount: Deactivated successfully. 
Mar 17 18:42:57.224839 containerd[1483]: time="2025-03-17T18:42:57.224644111Z" level=info msg="shim disconnected" id=528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa namespace=k8s.io
Mar 17 18:42:57.224839 containerd[1483]: time="2025-03-17T18:42:57.224830512Z" level=warning msg="cleaning up after shim disconnected" id=528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa namespace=k8s.io
Mar 17 18:42:57.224839 containerd[1483]: time="2025-03-17T18:42:57.224841222Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:42:57.952467 containerd[1483]: time="2025-03-17T18:42:57.952379206Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:42:58.000064 containerd[1483]: time="2025-03-17T18:42:57.999257603Z" level=info msg="CreateContainer within sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\""
Mar 17 18:42:58.000874 containerd[1483]: time="2025-03-17T18:42:58.000842753Z" level=info msg="StartContainer for \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\""
Mar 17 18:42:58.036312 kubelet[2662]: I0317 18:42:58.036252 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pngr9" podStartSLOduration=2.742337728 podStartE2EDuration="20.036233391s" podCreationTimestamp="2025-03-17 18:42:38 +0000 UTC" firstStartedPulling="2025-03-17 18:42:39.223248606 +0000 UTC m=+4.549039410" lastFinishedPulling="2025-03-17 18:42:56.517144259 +0000 UTC m=+21.842935073" observedRunningTime="2025-03-17 18:42:57.048568358 +0000 UTC m=+22.374359182" watchObservedRunningTime="2025-03-17 18:42:58.036233391 +0000 UTC m=+23.362024205"
Mar 17 18:42:58.051878 systemd[1]: run-containerd-runc-k8s.io-725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839-runc.dg3o33.mount: Deactivated successfully.
Mar 17 18:42:58.065241 systemd[1]: Started cri-containerd-725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839.scope - libcontainer container 725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839.
Mar 17 18:42:58.131311 containerd[1483]: time="2025-03-17T18:42:58.131181271Z" level=info msg="StartContainer for \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\" returns successfully"
Mar 17 18:42:58.247576 kubelet[2662]: I0317 18:42:58.246302 2662 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 17 18:42:58.292386 systemd[1]: Created slice kubepods-burstable-podd9a21df2_0224_4c18_b106_739c3b23a224.slice - libcontainer container kubepods-burstable-podd9a21df2_0224_4c18_b106_739c3b23a224.slice.
Mar 17 18:42:58.303234 systemd[1]: Created slice kubepods-burstable-pod789bbd26_c649_4412_adf9_8493685fd0b4.slice - libcontainer container kubepods-burstable-pod789bbd26_c649_4412_adf9_8493685fd0b4.slice.
Mar 17 18:42:58.343797 kubelet[2662]: I0317 18:42:58.343674 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw8hs\" (UniqueName: \"kubernetes.io/projected/789bbd26-c649-4412-adf9-8493685fd0b4-kube-api-access-jw8hs\") pod \"coredns-6f6b679f8f-gzlgs\" (UID: \"789bbd26-c649-4412-adf9-8493685fd0b4\") " pod="kube-system/coredns-6f6b679f8f-gzlgs"
Mar 17 18:42:58.343797 kubelet[2662]: I0317 18:42:58.343712 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/789bbd26-c649-4412-adf9-8493685fd0b4-config-volume\") pod \"coredns-6f6b679f8f-gzlgs\" (UID: \"789bbd26-c649-4412-adf9-8493685fd0b4\") " pod="kube-system/coredns-6f6b679f8f-gzlgs"
Mar 17 18:42:58.343797 kubelet[2662]: I0317 18:42:58.343731 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6crb\" (UniqueName: \"kubernetes.io/projected/d9a21df2-0224-4c18-b106-739c3b23a224-kube-api-access-m6crb\") pod \"coredns-6f6b679f8f-4slvw\" (UID: \"d9a21df2-0224-4c18-b106-739c3b23a224\") " pod="kube-system/coredns-6f6b679f8f-4slvw"
Mar 17 18:42:58.343797 kubelet[2662]: I0317 18:42:58.343750 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9a21df2-0224-4c18-b106-739c3b23a224-config-volume\") pod \"coredns-6f6b679f8f-4slvw\" (UID: \"d9a21df2-0224-4c18-b106-739c3b23a224\") " pod="kube-system/coredns-6f6b679f8f-4slvw"
Mar 17 18:42:58.601586 containerd[1483]: time="2025-03-17T18:42:58.601054728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4slvw,Uid:d9a21df2-0224-4c18-b106-739c3b23a224,Namespace:kube-system,Attempt:0,}"
Mar 17 18:42:58.610515 containerd[1483]: time="2025-03-17T18:42:58.610283832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gzlgs,Uid:789bbd26-c649-4412-adf9-8493685fd0b4,Namespace:kube-system,Attempt:0,}"
Mar 17 18:42:58.988113 kubelet[2662]: I0317 18:42:58.986845 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dg6jw" podStartSLOduration=7.130489456 podStartE2EDuration="20.986815157s" podCreationTimestamp="2025-03-17 18:42:38 +0000 UTC" firstStartedPulling="2025-03-17 18:42:38.969833484 +0000 UTC m=+4.295624288" lastFinishedPulling="2025-03-17 18:42:52.826159185 +0000 UTC m=+18.151949989" observedRunningTime="2025-03-17 18:42:58.983990879 +0000 UTC m=+24.309781733" watchObservedRunningTime="2025-03-17 18:42:58.986815157 +0000 UTC m=+24.312606021"
Mar 17 18:42:59.013337 systemd[1]: run-containerd-runc-k8s.io-725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839-runc.kKDIje.mount: Deactivated successfully.
Mar 17 18:43:00.382382 systemd-networkd[1397]: cilium_host: Link UP
Mar 17 18:43:00.390184 systemd-networkd[1397]: cilium_net: Link UP
Mar 17 18:43:00.390592 systemd-networkd[1397]: cilium_net: Gained carrier
Mar 17 18:43:00.390906 systemd-networkd[1397]: cilium_host: Gained carrier
Mar 17 18:43:00.391205 systemd-networkd[1397]: cilium_net: Gained IPv6LL
Mar 17 18:43:00.391500 systemd-networkd[1397]: cilium_host: Gained IPv6LL
Mar 17 18:43:00.486946 systemd-networkd[1397]: cilium_vxlan: Link UP
Mar 17 18:43:00.486953 systemd-networkd[1397]: cilium_vxlan: Gained carrier
Mar 17 18:43:00.819093 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:43:01.524635 systemd-networkd[1397]: lxc_health: Link UP
Mar 17 18:43:01.526433 systemd-networkd[1397]: lxc_health: Gained carrier
Mar 17 18:43:01.688246 kernel: eth0: renamed from tmpcaa0b
Mar 17 18:43:01.695562 systemd-networkd[1397]: lxc9b624c907087: Link UP
Mar 17 18:43:01.708119 systemd-networkd[1397]: lxce9a0f4bbd623: Link UP
Mar 17 18:43:01.708453 systemd-networkd[1397]: lxc9b624c907087: Gained carrier
Mar 17 18:43:01.710464 kernel: eth0: renamed from tmpfdb0e
Mar 17 18:43:01.715909 systemd-networkd[1397]: lxce9a0f4bbd623: Gained carrier
Mar 17 18:43:01.881214 systemd-networkd[1397]: cilium_vxlan: Gained IPv6LL
Mar 17 18:43:03.033365 systemd-networkd[1397]: lxc9b624c907087: Gained IPv6LL
Mar 17 18:43:03.235251 kubelet[2662]: I0317 18:43:03.234251 2662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 18:43:03.353241 systemd-networkd[1397]: lxc_health: Gained IPv6LL
Mar 17 18:43:03.673263 systemd-networkd[1397]: lxce9a0f4bbd623: Gained IPv6LL
Mar 17 18:43:06.050525 containerd[1483]: time="2025-03-17T18:43:06.050269070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:06.050525 containerd[1483]: time="2025-03-17T18:43:06.050332860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:06.050525 containerd[1483]: time="2025-03-17T18:43:06.050348269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:06.050525 containerd[1483]: time="2025-03-17T18:43:06.050469968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:06.085228 systemd[1]: Started cri-containerd-fdb0e5f0633d15d470c887c7fbf61f6698f99334040982e44b8e726e3494375f.scope - libcontainer container fdb0e5f0633d15d470c887c7fbf61f6698f99334040982e44b8e726e3494375f.
Mar 17 18:43:06.147520 containerd[1483]: time="2025-03-17T18:43:06.147457374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gzlgs,Uid:789bbd26-c649-4412-adf9-8493685fd0b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdb0e5f0633d15d470c887c7fbf61f6698f99334040982e44b8e726e3494375f\""
Mar 17 18:43:06.156027 containerd[1483]: time="2025-03-17T18:43:06.155900198Z" level=info msg="CreateContainer within sandbox \"fdb0e5f0633d15d470c887c7fbf61f6698f99334040982e44b8e726e3494375f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:43:06.164851 containerd[1483]: time="2025-03-17T18:43:06.164629909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:06.164851 containerd[1483]: time="2025-03-17T18:43:06.164687037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:06.164851 containerd[1483]: time="2025-03-17T18:43:06.164713927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:06.164851 containerd[1483]: time="2025-03-17T18:43:06.164803306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:06.200197 systemd[1]: Started cri-containerd-caa0b2aaea76707b971c6128794a77f68058a039e877228855e37cf521454740.scope - libcontainer container caa0b2aaea76707b971c6128794a77f68058a039e877228855e37cf521454740.
Mar 17 18:43:06.203059 containerd[1483]: time="2025-03-17T18:43:06.202965684Z" level=info msg="CreateContainer within sandbox \"fdb0e5f0633d15d470c887c7fbf61f6698f99334040982e44b8e726e3494375f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0b7132434b10db7650afe5e2106ce44799bdf4a732f0cca88f8b38542199f42\""
Mar 17 18:43:06.208508 containerd[1483]: time="2025-03-17T18:43:06.208374681Z" level=info msg="StartContainer for \"d0b7132434b10db7650afe5e2106ce44799bdf4a732f0cca88f8b38542199f42\""
Mar 17 18:43:06.250203 systemd[1]: Started cri-containerd-d0b7132434b10db7650afe5e2106ce44799bdf4a732f0cca88f8b38542199f42.scope - libcontainer container d0b7132434b10db7650afe5e2106ce44799bdf4a732f0cca88f8b38542199f42.
Mar 17 18:43:06.264983 containerd[1483]: time="2025-03-17T18:43:06.264863171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4slvw,Uid:d9a21df2-0224-4c18-b106-739c3b23a224,Namespace:kube-system,Attempt:0,} returns sandbox id \"caa0b2aaea76707b971c6128794a77f68058a039e877228855e37cf521454740\""
Mar 17 18:43:06.269686 containerd[1483]: time="2025-03-17T18:43:06.269595527Z" level=info msg="CreateContainer within sandbox \"caa0b2aaea76707b971c6128794a77f68058a039e877228855e37cf521454740\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:43:06.299384 containerd[1483]: time="2025-03-17T18:43:06.299265059Z" level=info msg="CreateContainer within sandbox \"caa0b2aaea76707b971c6128794a77f68058a039e877228855e37cf521454740\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1a9c0aba1c7ef7c1d6fbbee76dbf3423a8a38e73a65ec8372009c68467fe6a0\""
Mar 17 18:43:06.301842 containerd[1483]: time="2025-03-17T18:43:06.301222705Z" level=info msg="StartContainer for \"b1a9c0aba1c7ef7c1d6fbbee76dbf3423a8a38e73a65ec8372009c68467fe6a0\""
Mar 17 18:43:06.306411 containerd[1483]: time="2025-03-17T18:43:06.306377254Z" level=info msg="StartContainer for \"d0b7132434b10db7650afe5e2106ce44799bdf4a732f0cca88f8b38542199f42\" returns successfully"
Mar 17 18:43:06.341206 systemd[1]: Started cri-containerd-b1a9c0aba1c7ef7c1d6fbbee76dbf3423a8a38e73a65ec8372009c68467fe6a0.scope - libcontainer container b1a9c0aba1c7ef7c1d6fbbee76dbf3423a8a38e73a65ec8372009c68467fe6a0.
Mar 17 18:43:06.382902 containerd[1483]: time="2025-03-17T18:43:06.382850310Z" level=info msg="StartContainer for \"b1a9c0aba1c7ef7c1d6fbbee76dbf3423a8a38e73a65ec8372009c68467fe6a0\" returns successfully"
Mar 17 18:43:06.997612 kubelet[2662]: I0317 18:43:06.997343 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gzlgs" podStartSLOduration=28.9972127 podStartE2EDuration="28.9972127s" podCreationTimestamp="2025-03-17 18:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:06.995120982 +0000 UTC m=+32.320911836" watchObservedRunningTime="2025-03-17 18:43:06.9972127 +0000 UTC m=+32.323003554"
Mar 17 18:43:07.040424 kubelet[2662]: I0317 18:43:07.040273 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4slvw" podStartSLOduration=29.040240369 podStartE2EDuration="29.040240369s" podCreationTimestamp="2025-03-17 18:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:07.039398658 +0000 UTC m=+32.365189563" watchObservedRunningTime="2025-03-17 18:43:07.040240369 +0000 UTC m=+32.366031223"
Mar 17 18:44:35.323610 systemd[1]: Started sshd@7-172.24.4.236:22-172.24.4.1:37134.service - OpenSSH per-connection server daemon (172.24.4.1:37134).
Mar 17 18:44:36.569437 sshd[4047]: Accepted publickey for core from 172.24.4.1 port 37134 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:44:36.573130 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:44:36.586100 systemd-logind[1463]: New session 10 of user core.
Mar 17 18:44:36.593708 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 18:44:37.363260 sshd[4049]: Connection closed by 172.24.4.1 port 37134
Mar 17 18:44:37.363855 sshd-session[4047]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:37.371432 systemd[1]: sshd@7-172.24.4.236:22-172.24.4.1:37134.service: Deactivated successfully.
Mar 17 18:44:37.374978 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:44:37.376417 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:44:37.378462 systemd-logind[1463]: Removed session 10.
Mar 17 18:44:42.395533 systemd[1]: Started sshd@8-172.24.4.236:22-172.24.4.1:37136.service - OpenSSH per-connection server daemon (172.24.4.1:37136).
Mar 17 18:44:43.830496 sshd[4064]: Accepted publickey for core from 172.24.4.1 port 37136 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:44:43.833346 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:44:43.845179 systemd-logind[1463]: New session 11 of user core.
Mar 17 18:44:43.851990 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 18:44:44.616119 sshd[4066]: Connection closed by 172.24.4.1 port 37136
Mar 17 18:44:44.617289 sshd-session[4064]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:44.623092 systemd[1]: sshd@8-172.24.4.236:22-172.24.4.1:37136.service: Deactivated successfully.
Mar 17 18:44:44.628070 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:44:44.631864 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:44:44.635957 systemd-logind[1463]: Removed session 11.
Mar 17 18:44:49.676898 systemd[1]: Started sshd@9-172.24.4.236:22-172.24.4.1:34696.service - OpenSSH per-connection server daemon (172.24.4.1:34696).
Mar 17 18:44:50.896323 sshd[4079]: Accepted publickey for core from 172.24.4.1 port 34696 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:44:50.900129 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:44:50.914312 systemd-logind[1463]: New session 12 of user core.
Mar 17 18:44:50.927409 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 18:44:51.783322 sshd[4081]: Connection closed by 172.24.4.1 port 34696
Mar 17 18:44:51.783201 sshd-session[4079]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:51.787733 systemd[1]: sshd@9-172.24.4.236:22-172.24.4.1:34696.service: Deactivated successfully.
Mar 17 18:44:51.791698 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:44:51.795565 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:44:51.796888 systemd-logind[1463]: Removed session 12.
Mar 17 18:44:56.811623 systemd[1]: Started sshd@10-172.24.4.236:22-172.24.4.1:57494.service - OpenSSH per-connection server daemon (172.24.4.1:57494).
Mar 17 18:44:58.165897 sshd[4093]: Accepted publickey for core from 172.24.4.1 port 57494 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:44:58.168050 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:44:58.176612 systemd-logind[1463]: New session 13 of user core.
Mar 17 18:44:58.179236 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 18:44:58.951082 sshd[4095]: Connection closed by 172.24.4.1 port 57494
Mar 17 18:44:58.951299 sshd-session[4093]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:58.969926 systemd[1]: sshd@10-172.24.4.236:22-172.24.4.1:57494.service: Deactivated successfully.
Mar 17 18:44:58.974339 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:44:58.977011 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:44:58.985680 systemd[1]: Started sshd@11-172.24.4.236:22-172.24.4.1:57506.service - OpenSSH per-connection server daemon (172.24.4.1:57506).
Mar 17 18:44:58.990184 systemd-logind[1463]: Removed session 13.
Mar 17 18:45:00.542186 sshd[4108]: Accepted publickey for core from 172.24.4.1 port 57506 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:00.545008 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:00.557531 systemd-logind[1463]: New session 14 of user core.
Mar 17 18:45:00.564457 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 18:45:01.546458 sshd[4111]: Connection closed by 172.24.4.1 port 57506
Mar 17 18:45:01.547110 sshd-session[4108]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:01.561307 systemd[1]: sshd@11-172.24.4.236:22-172.24.4.1:57506.service: Deactivated successfully.
Mar 17 18:45:01.565013 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:45:01.568987 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:45:01.578670 systemd[1]: Started sshd@12-172.24.4.236:22-172.24.4.1:57508.service - OpenSSH per-connection server daemon (172.24.4.1:57508).
Mar 17 18:45:01.583009 systemd-logind[1463]: Removed session 14.
Mar 17 18:45:02.682945 sshd[4120]: Accepted publickey for core from 172.24.4.1 port 57508 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:02.687461 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:02.700542 systemd-logind[1463]: New session 15 of user core.
Mar 17 18:45:02.707339 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 18:45:03.313093 sshd[4123]: Connection closed by 172.24.4.1 port 57508
Mar 17 18:45:03.312852 sshd-session[4120]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:03.320370 systemd[1]: sshd@12-172.24.4.236:22-172.24.4.1:57508.service: Deactivated successfully.
Mar 17 18:45:03.329735 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:45:03.335621 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:45:03.338418 systemd-logind[1463]: Removed session 15.
Mar 17 18:45:08.339622 systemd[1]: Started sshd@13-172.24.4.236:22-172.24.4.1:46214.service - OpenSSH per-connection server daemon (172.24.4.1:46214).
Mar 17 18:45:09.681206 sshd[4135]: Accepted publickey for core from 172.24.4.1 port 46214 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:09.683845 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:09.694885 systemd-logind[1463]: New session 16 of user core.
Mar 17 18:45:09.703338 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 18:45:10.409382 sshd[4139]: Connection closed by 172.24.4.1 port 46214
Mar 17 18:45:10.410458 sshd-session[4135]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:10.430194 systemd[1]: sshd@13-172.24.4.236:22-172.24.4.1:46214.service: Deactivated successfully.
Mar 17 18:45:10.434496 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:45:10.437588 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:45:10.444838 systemd[1]: Started sshd@14-172.24.4.236:22-172.24.4.1:46226.service - OpenSSH per-connection server daemon (172.24.4.1:46226).
Mar 17 18:45:10.448715 systemd-logind[1463]: Removed session 16.
Mar 17 18:45:11.803524 sshd[4150]: Accepted publickey for core from 172.24.4.1 port 46226 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:11.806374 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:11.819110 systemd-logind[1463]: New session 17 of user core.
Mar 17 18:45:11.823601 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 18:45:12.565425 sshd[4153]: Connection closed by 172.24.4.1 port 46226
Mar 17 18:45:12.566661 sshd-session[4150]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:12.585742 systemd[1]: sshd@14-172.24.4.236:22-172.24.4.1:46226.service: Deactivated successfully.
Mar 17 18:45:12.590974 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:45:12.593674 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:45:12.604642 systemd[1]: Started sshd@15-172.24.4.236:22-172.24.4.1:46238.service - OpenSSH per-connection server daemon (172.24.4.1:46238).
Mar 17 18:45:12.610594 systemd-logind[1463]: Removed session 17.
Mar 17 18:45:14.036535 sshd[4161]: Accepted publickey for core from 172.24.4.1 port 46238 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:14.043660 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:14.055588 systemd-logind[1463]: New session 18 of user core.
Mar 17 18:45:14.062339 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 18:45:16.929487 sshd[4164]: Connection closed by 172.24.4.1 port 46238
Mar 17 18:45:16.928368 sshd-session[4161]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:16.959943 systemd[1]: sshd@15-172.24.4.236:22-172.24.4.1:46238.service: Deactivated successfully.
Mar 17 18:45:16.963699 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:45:16.966085 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:45:16.978778 systemd[1]: Started sshd@16-172.24.4.236:22-172.24.4.1:58070.service - OpenSSH per-connection server daemon (172.24.4.1:58070).
Mar 17 18:45:16.988227 systemd-logind[1463]: Removed session 18.
Mar 17 18:45:18.551752 sshd[4180]: Accepted publickey for core from 172.24.4.1 port 58070 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:18.554759 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:18.562912 systemd-logind[1463]: New session 19 of user core.
Mar 17 18:45:18.571358 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 18:45:19.419070 sshd[4183]: Connection closed by 172.24.4.1 port 58070
Mar 17 18:45:19.419761 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:19.434432 systemd[1]: sshd@16-172.24.4.236:22-172.24.4.1:58070.service: Deactivated successfully.
Mar 17 18:45:19.437869 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:45:19.441314 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:45:19.452445 systemd[1]: Started sshd@17-172.24.4.236:22-172.24.4.1:58076.service - OpenSSH per-connection server daemon (172.24.4.1:58076).
Mar 17 18:45:19.456377 systemd-logind[1463]: Removed session 19.
Mar 17 18:45:20.627720 sshd[4192]: Accepted publickey for core from 172.24.4.1 port 58076 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:20.630293 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:20.641572 systemd-logind[1463]: New session 20 of user core.
Mar 17 18:45:20.651388 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 18:45:21.316094 sshd[4195]: Connection closed by 172.24.4.1 port 58076
Mar 17 18:45:21.317478 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:21.333432 systemd[1]: sshd@17-172.24.4.236:22-172.24.4.1:58076.service: Deactivated successfully.
Mar 17 18:45:21.339983 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:45:21.344641 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:45:21.347837 systemd-logind[1463]: Removed session 20.
Mar 17 18:45:26.339593 systemd[1]: Started sshd@18-172.24.4.236:22-172.24.4.1:55002.service - OpenSSH per-connection server daemon (172.24.4.1:55002).
Mar 17 18:45:27.701703 sshd[4210]: Accepted publickey for core from 172.24.4.1 port 55002 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:27.704918 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:27.718393 systemd-logind[1463]: New session 21 of user core.
Mar 17 18:45:27.725440 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 18:45:28.531305 sshd[4212]: Connection closed by 172.24.4.1 port 55002
Mar 17 18:45:28.532417 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:28.538407 systemd[1]: sshd@18-172.24.4.236:22-172.24.4.1:55002.service: Deactivated successfully.
Mar 17 18:45:28.545794 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:45:28.551811 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:45:28.554563 systemd-logind[1463]: Removed session 21.
Mar 17 18:45:33.561748 systemd[1]: Started sshd@19-172.24.4.236:22-172.24.4.1:47076.service - OpenSSH per-connection server daemon (172.24.4.1:47076).
Mar 17 18:45:34.847085 sshd[4224]: Accepted publickey for core from 172.24.4.1 port 47076 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:34.849294 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:34.867462 systemd-logind[1463]: New session 22 of user core.
Mar 17 18:45:34.876359 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 18:45:35.724293 sshd[4228]: Connection closed by 172.24.4.1 port 47076
Mar 17 18:45:35.725672 sshd-session[4224]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:35.731922 systemd[1]: sshd@19-172.24.4.236:22-172.24.4.1:47076.service: Deactivated successfully.
Mar 17 18:45:35.736588 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:45:35.742094 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:45:35.745878 systemd-logind[1463]: Removed session 22.
Mar 17 18:45:40.754631 systemd[1]: Started sshd@20-172.24.4.236:22-172.24.4.1:47086.service - OpenSSH per-connection server daemon (172.24.4.1:47086).
Mar 17 18:45:42.136161 sshd[4241]: Accepted publickey for core from 172.24.4.1 port 47086 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:42.138396 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:42.150874 systemd-logind[1463]: New session 23 of user core.
Mar 17 18:45:42.159344 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 18:45:42.864245 sshd[4243]: Connection closed by 172.24.4.1 port 47086
Mar 17 18:45:42.865346 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:42.888012 systemd[1]: sshd@20-172.24.4.236:22-172.24.4.1:47086.service: Deactivated successfully.
Mar 17 18:45:42.891739 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:45:42.895322 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:45:42.903620 systemd[1]: Started sshd@21-172.24.4.236:22-172.24.4.1:47098.service - OpenSSH per-connection server daemon (172.24.4.1:47098).
Mar 17 18:45:42.908098 systemd-logind[1463]: Removed session 23.
Mar 17 18:45:44.366634 sshd[4254]: Accepted publickey for core from 172.24.4.1 port 47098 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4
Mar 17 18:45:44.367852 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 18:45:44.381593 systemd-logind[1463]: New session 24 of user core.
Mar 17 18:45:44.390454 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 18:45:46.626612 containerd[1483]: time="2025-03-17T18:45:46.626573618Z" level=info msg="StopContainer for \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\" with timeout 30 (s)"
Mar 17 18:45:46.632227 systemd[1]: run-containerd-runc-k8s.io-725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839-runc.BF6Jbc.mount: Deactivated successfully.
Mar 17 18:45:46.632568 containerd[1483]: time="2025-03-17T18:45:46.632449940Z" level=info msg="Stop container \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\" with signal terminated"
Mar 17 18:45:46.650833 containerd[1483]: time="2025-03-17T18:45:46.650759504Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:45:46.651268 systemd[1]: cri-containerd-06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b.scope: Deactivated successfully.
Mar 17 18:45:46.665350 containerd[1483]: time="2025-03-17T18:45:46.664665553Z" level=info msg="StopContainer for \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\" with timeout 2 (s)"
Mar 17 18:45:46.665350 containerd[1483]: time="2025-03-17T18:45:46.665283964Z" level=info msg="Stop container \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\" with signal terminated"
Mar 17 18:45:46.675493 systemd-networkd[1397]: lxc_health: Link DOWN
Mar 17 18:45:46.675503 systemd-networkd[1397]: lxc_health: Lost carrier
Mar 17 18:45:46.692794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b-rootfs.mount: Deactivated successfully.
Mar 17 18:45:46.695127 systemd[1]: cri-containerd-725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839.scope: Deactivated successfully.
Mar 17 18:45:46.695618 systemd[1]: cri-containerd-725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839.scope: Consumed 8.653s CPU time, 125.9M memory peak, 144K read from disk, 13.3M written to disk.
Mar 17 18:45:46.708326 containerd[1483]: time="2025-03-17T18:45:46.708122393Z" level=info msg="shim disconnected" id=06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b namespace=k8s.io
Mar 17 18:45:46.708326 containerd[1483]: time="2025-03-17T18:45:46.708316952Z" level=warning msg="cleaning up after shim disconnected" id=06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b namespace=k8s.io
Mar 17 18:45:46.708326 containerd[1483]: time="2025-03-17T18:45:46.708329816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:45:46.732235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839-rootfs.mount: Deactivated successfully.
Mar 17 18:45:46.744531 containerd[1483]: time="2025-03-17T18:45:46.744448047Z" level=info msg="shim disconnected" id=725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839 namespace=k8s.io
Mar 17 18:45:46.744531 containerd[1483]: time="2025-03-17T18:45:46.744525132Z" level=warning msg="cleaning up after shim disconnected" id=725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839 namespace=k8s.io
Mar 17 18:45:46.744531 containerd[1483]: time="2025-03-17T18:45:46.744537636Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 18:45:46.757562 containerd[1483]: time="2025-03-17T18:45:46.757455676Z" level=info msg="StopContainer for \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\" returns successfully"
Mar 17 18:45:46.758517 containerd[1483]: time="2025-03-17T18:45:46.758304723Z" level=info msg="StopPodSandbox for \"8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b\""
Mar 17 18:45:46.758517 containerd[1483]: time="2025-03-17T18:45:46.758367742Z" level=info msg="Container to stop \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:46.764415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b-shm.mount: Deactivated successfully.
Mar 17 18:45:46.771785 systemd[1]: cri-containerd-8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b.scope: Deactivated successfully.
Mar 17 18:45:46.783890 containerd[1483]: time="2025-03-17T18:45:46.783564532Z" level=info msg="StopContainer for \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\" returns successfully"
Mar 17 18:45:46.784068 containerd[1483]: time="2025-03-17T18:45:46.784014063Z" level=info msg="StopPodSandbox for \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\""
Mar 17 18:45:46.784124 containerd[1483]: time="2025-03-17T18:45:46.784082542Z" level=info msg="Container to stop \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:46.784156 containerd[1483]: time="2025-03-17T18:45:46.784122067Z" level=info msg="Container to stop \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:46.784156 containerd[1483]: time="2025-03-17T18:45:46.784133689Z" level=info msg="Container to stop \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:46.784156 containerd[1483]: time="2025-03-17T18:45:46.784145461Z" level=info msg="Container to stop \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:46.784258 containerd[1483]: time="2025-03-17T18:45:46.784155821Z" level=info msg="Container to stop \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:45:46.794852 systemd[1]:
cri-containerd-c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814.scope: Deactivated successfully. Mar 17 18:45:46.820157 containerd[1483]: time="2025-03-17T18:45:46.819815784Z" level=info msg="shim disconnected" id=8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b namespace=k8s.io Mar 17 18:45:46.820157 containerd[1483]: time="2025-03-17T18:45:46.819869135Z" level=warning msg="cleaning up after shim disconnected" id=8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b namespace=k8s.io Mar 17 18:45:46.820157 containerd[1483]: time="2025-03-17T18:45:46.819880797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:45:46.821179 containerd[1483]: time="2025-03-17T18:45:46.820009800Z" level=info msg="shim disconnected" id=c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814 namespace=k8s.io Mar 17 18:45:46.821179 containerd[1483]: time="2025-03-17T18:45:46.820933970Z" level=warning msg="cleaning up after shim disconnected" id=c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814 namespace=k8s.io Mar 17 18:45:46.821179 containerd[1483]: time="2025-03-17T18:45:46.820945502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:45:46.840078 containerd[1483]: time="2025-03-17T18:45:46.839691973Z" level=info msg="TearDown network for sandbox \"8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b\" successfully" Mar 17 18:45:46.840078 containerd[1483]: time="2025-03-17T18:45:46.839739413Z" level=info msg="StopPodSandbox for \"8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b\" returns successfully" Mar 17 18:45:46.843515 containerd[1483]: time="2025-03-17T18:45:46.843426792Z" level=info msg="TearDown network for sandbox \"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" successfully" Mar 17 18:45:46.843621 containerd[1483]: time="2025-03-17T18:45:46.843512744Z" level=info msg="StopPodSandbox for 
\"c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814\" returns successfully" Mar 17 18:45:47.006104 kubelet[2662]: I0317 18:45:47.004388 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-host-proc-sys-net\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.006104 kubelet[2662]: I0317 18:45:47.004491 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-bpf-maps\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.006104 kubelet[2662]: I0317 18:45:47.004533 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-etc-cni-netd\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.006104 kubelet[2662]: I0317 18:45:47.004587 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-config-path\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.006104 kubelet[2662]: I0317 18:45:47.004567 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.006104 kubelet[2662]: I0317 18:45:47.004631 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cni-path\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007283 kubelet[2662]: I0317 18:45:47.004672 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-hostproc\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007283 kubelet[2662]: I0317 18:45:47.004695 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.007283 kubelet[2662]: I0317 18:45:47.004712 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-xtables-lock\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007283 kubelet[2662]: I0317 18:45:47.004741 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.007283 kubelet[2662]: I0317 18:45:47.004753 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-host-proc-sys-kernel\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007597 kubelet[2662]: I0317 18:45:47.004787 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cni-path" (OuterVolumeSpecName: "cni-path") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.007597 kubelet[2662]: I0317 18:45:47.004801 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8db7f485-3c3a-4691-ac92-9548382d0f9e-cilium-config-path\") pod \"8db7f485-3c3a-4691-ac92-9548382d0f9e\" (UID: \"8db7f485-3c3a-4691-ac92-9548382d0f9e\") " Mar 17 18:45:47.007597 kubelet[2662]: I0317 18:45:47.004924 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh7cb\" (UniqueName: \"kubernetes.io/projected/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-kube-api-access-nh7cb\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007597 kubelet[2662]: I0317 18:45:47.004971 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-hubble-tls\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007597 kubelet[2662]: I0317 18:45:47.005019 2662 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-clustermesh-secrets\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007597 kubelet[2662]: I0317 18:45:47.005147 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-run\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007982 kubelet[2662]: I0317 18:45:47.005201 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66f2v\" (UniqueName: \"kubernetes.io/projected/8db7f485-3c3a-4691-ac92-9548382d0f9e-kube-api-access-66f2v\") pod \"8db7f485-3c3a-4691-ac92-9548382d0f9e\" (UID: \"8db7f485-3c3a-4691-ac92-9548382d0f9e\") " Mar 17 18:45:47.007982 kubelet[2662]: I0317 18:45:47.005242 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-cgroup\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007982 kubelet[2662]: I0317 18:45:47.005280 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-lib-modules\") pod \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\" (UID: \"9661ec8e-e1bb-4f46-a46e-995b7d287c8b\") " Mar 17 18:45:47.007982 kubelet[2662]: I0317 18:45:47.005352 2662 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-host-proc-sys-net\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.007982 
kubelet[2662]: I0317 18:45:47.005380 2662 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-bpf-maps\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.007982 kubelet[2662]: I0317 18:45:47.005404 2662 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-etc-cni-netd\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.007982 kubelet[2662]: I0317 18:45:47.005427 2662 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cni-path\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.009017 kubelet[2662]: I0317 18:45:47.005505 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.009017 kubelet[2662]: I0317 18:45:47.005558 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-hostproc" (OuterVolumeSpecName: "hostproc") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.009017 kubelet[2662]: I0317 18:45:47.005594 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.009017 kubelet[2662]: I0317 18:45:47.005629 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.009017 kubelet[2662]: I0317 18:45:47.006883 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.011825 kubelet[2662]: I0317 18:45:47.011002 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:45:47.017998 kubelet[2662]: I0317 18:45:47.017928 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db7f485-3c3a-4691-ac92-9548382d0f9e-kube-api-access-66f2v" (OuterVolumeSpecName: "kube-api-access-66f2v") pod "8db7f485-3c3a-4691-ac92-9548382d0f9e" (UID: "8db7f485-3c3a-4691-ac92-9548382d0f9e"). InnerVolumeSpecName "kube-api-access-66f2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:45:47.019866 kubelet[2662]: I0317 18:45:47.019818 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:45:47.023321 kubelet[2662]: I0317 18:45:47.023256 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:45:47.027729 kubelet[2662]: I0317 18:45:47.027127 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:45:47.028007 kubelet[2662]: I0317 18:45:47.027960 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db7f485-3c3a-4691-ac92-9548382d0f9e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8db7f485-3c3a-4691-ac92-9548382d0f9e" (UID: "8db7f485-3c3a-4691-ac92-9548382d0f9e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:45:47.028254 kubelet[2662]: I0317 18:45:47.028167 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-kube-api-access-nh7cb" (OuterVolumeSpecName: "kube-api-access-nh7cb") pod "9661ec8e-e1bb-4f46-a46e-995b7d287c8b" (UID: "9661ec8e-e1bb-4f46-a46e-995b7d287c8b"). InnerVolumeSpecName "kube-api-access-nh7cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:45:47.105929 kubelet[2662]: I0317 18:45:47.105819 2662 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-run\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.105929 kubelet[2662]: I0317 18:45:47.105900 2662 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-66f2v\" (UniqueName: \"kubernetes.io/projected/8db7f485-3c3a-4691-ac92-9548382d0f9e-kube-api-access-66f2v\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.105929 kubelet[2662]: I0317 18:45:47.105933 2662 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-cgroup\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.106544 kubelet[2662]: I0317 18:45:47.105973 2662 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-lib-modules\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.106544 kubelet[2662]: I0317 18:45:47.106011 2662 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-cilium-config-path\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.106544 kubelet[2662]: I0317 18:45:47.106110 2662 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-hostproc\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.106544 kubelet[2662]: I0317 18:45:47.106139 2662 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-xtables-lock\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.106544 kubelet[2662]: I0317 18:45:47.106164 2662 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-host-proc-sys-kernel\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.106544 kubelet[2662]: I0317 18:45:47.106190 2662 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8db7f485-3c3a-4691-ac92-9548382d0f9e-cilium-config-path\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.106544 kubelet[2662]: I0317 18:45:47.106216 2662 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nh7cb\" (UniqueName: \"kubernetes.io/projected/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-kube-api-access-nh7cb\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.106959 kubelet[2662]: 
I0317 18:45:47.106241 2662 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-hubble-tls\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.106959 kubelet[2662]: I0317 18:45:47.106266 2662 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9661ec8e-e1bb-4f46-a46e-995b7d287c8b-clustermesh-secrets\") on node \"ci-4230-1-0-9-731388c134.novalocal\" DevicePath \"\"" Mar 17 18:45:47.489989 kubelet[2662]: I0317 18:45:47.489792 2662 scope.go:117] "RemoveContainer" containerID="06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b" Mar 17 18:45:47.498453 containerd[1483]: time="2025-03-17T18:45:47.498294616Z" level=info msg="RemoveContainer for \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\"" Mar 17 18:45:47.518111 containerd[1483]: time="2025-03-17T18:45:47.517987343Z" level=info msg="RemoveContainer for \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\" returns successfully" Mar 17 18:45:47.520087 kubelet[2662]: I0317 18:45:47.519352 2662 scope.go:117] "RemoveContainer" containerID="06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b" Mar 17 18:45:47.520245 containerd[1483]: time="2025-03-17T18:45:47.519744699Z" level=error msg="ContainerStatus for \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\": not found" Mar 17 18:45:47.522690 kubelet[2662]: E0317 18:45:47.521380 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\": not found" 
containerID="06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b" Mar 17 18:45:47.522690 kubelet[2662]: I0317 18:45:47.521477 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b"} err="failed to get container status \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\": rpc error: code = NotFound desc = an error occurred when try to find container \"06aa79f10d96f516062bfd2618af9a77412aac7a5119b9df861b6c85fcbf327b\": not found" Mar 17 18:45:47.522690 kubelet[2662]: I0317 18:45:47.521642 2662 scope.go:117] "RemoveContainer" containerID="725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839" Mar 17 18:45:47.526784 systemd[1]: Removed slice kubepods-besteffort-pod8db7f485_3c3a_4691_ac92_9548382d0f9e.slice - libcontainer container kubepods-besteffort-pod8db7f485_3c3a_4691_ac92_9548382d0f9e.slice. Mar 17 18:45:47.528422 containerd[1483]: time="2025-03-17T18:45:47.527268117Z" level=info msg="RemoveContainer for \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\"" Mar 17 18:45:47.545602 containerd[1483]: time="2025-03-17T18:45:47.545467268Z" level=info msg="RemoveContainer for \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\" returns successfully" Mar 17 18:45:47.546559 kubelet[2662]: I0317 18:45:47.546514 2662 scope.go:117] "RemoveContainer" containerID="528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa" Mar 17 18:45:47.549489 systemd[1]: Removed slice kubepods-burstable-pod9661ec8e_e1bb_4f46_a46e_995b7d287c8b.slice - libcontainer container kubepods-burstable-pod9661ec8e_e1bb_4f46_a46e_995b7d287c8b.slice. Mar 17 18:45:47.549764 systemd[1]: kubepods-burstable-pod9661ec8e_e1bb_4f46_a46e_995b7d287c8b.slice: Consumed 8.748s CPU time, 126.3M memory peak, 144K read from disk, 16.6M written to disk. 
Mar 17 18:45:47.554692 containerd[1483]: time="2025-03-17T18:45:47.554349667Z" level=info msg="RemoveContainer for \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\"" Mar 17 18:45:47.563364 containerd[1483]: time="2025-03-17T18:45:47.563296338Z" level=info msg="RemoveContainer for \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\" returns successfully" Mar 17 18:45:47.566132 kubelet[2662]: I0317 18:45:47.565839 2662 scope.go:117] "RemoveContainer" containerID="bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d" Mar 17 18:45:47.568566 containerd[1483]: time="2025-03-17T18:45:47.568529171Z" level=info msg="RemoveContainer for \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\"" Mar 17 18:45:47.577720 containerd[1483]: time="2025-03-17T18:45:47.577610527Z" level=info msg="RemoveContainer for \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\" returns successfully" Mar 17 18:45:47.578097 kubelet[2662]: I0317 18:45:47.577885 2662 scope.go:117] "RemoveContainer" containerID="60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765" Mar 17 18:45:47.579546 containerd[1483]: time="2025-03-17T18:45:47.579487489Z" level=info msg="RemoveContainer for \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\"" Mar 17 18:45:47.584741 containerd[1483]: time="2025-03-17T18:45:47.584690716Z" level=info msg="RemoveContainer for \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\" returns successfully" Mar 17 18:45:47.585247 kubelet[2662]: I0317 18:45:47.585101 2662 scope.go:117] "RemoveContainer" containerID="e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d" Mar 17 18:45:47.589297 containerd[1483]: time="2025-03-17T18:45:47.589161998Z" level=info msg="RemoveContainer for \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\"" Mar 17 18:45:47.593022 containerd[1483]: time="2025-03-17T18:45:47.592983850Z" level=info msg="RemoveContainer 
for \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\" returns successfully" Mar 17 18:45:47.594115 kubelet[2662]: I0317 18:45:47.593318 2662 scope.go:117] "RemoveContainer" containerID="725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839" Mar 17 18:45:47.594549 containerd[1483]: time="2025-03-17T18:45:47.593671101Z" level=error msg="ContainerStatus for \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\": not found" Mar 17 18:45:47.594732 kubelet[2662]: E0317 18:45:47.594692 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\": not found" containerID="725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839" Mar 17 18:45:47.594826 kubelet[2662]: I0317 18:45:47.594801 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839"} err="failed to get container status \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\": rpc error: code = NotFound desc = an error occurred when try to find container \"725c40b27080226c74727d7693c6700381b4f69aaae001ee0c68578ee874b839\": not found" Mar 17 18:45:47.594890 kubelet[2662]: I0317 18:45:47.594880 2662 scope.go:117] "RemoveContainer" containerID="528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa" Mar 17 18:45:47.595267 containerd[1483]: time="2025-03-17T18:45:47.595234980Z" level=error msg="ContainerStatus for \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\": not found" Mar 17 18:45:47.595556 kubelet[2662]: E0317 18:45:47.595492 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\": not found" containerID="528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa" Mar 17 18:45:47.595608 kubelet[2662]: I0317 18:45:47.595558 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa"} err="failed to get container status \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\": rpc error: code = NotFound desc = an error occurred when try to find container \"528f936d2406c3511249cd2894ea82e588c81b2e669eceab35b3400056058ffa\": not found" Mar 17 18:45:47.595640 kubelet[2662]: I0317 18:45:47.595587 2662 scope.go:117] "RemoveContainer" containerID="bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d" Mar 17 18:45:47.595878 containerd[1483]: time="2025-03-17T18:45:47.595811862Z" level=error msg="ContainerStatus for \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\": not found" Mar 17 18:45:47.595963 kubelet[2662]: E0317 18:45:47.595936 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\": not found" containerID="bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d" Mar 17 18:45:47.596002 kubelet[2662]: I0317 18:45:47.595965 2662 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d"} err="failed to get container status \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"bbaab7f6b0197b2c6bfc1e7977bc5422bf182107faba2486372832cfca44da9d\": not found" Mar 17 18:45:47.596002 kubelet[2662]: I0317 18:45:47.595983 2662 scope.go:117] "RemoveContainer" containerID="60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765" Mar 17 18:45:47.596219 containerd[1483]: time="2025-03-17T18:45:47.596170872Z" level=error msg="ContainerStatus for \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\": not found" Mar 17 18:45:47.596333 kubelet[2662]: E0317 18:45:47.596298 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\": not found" containerID="60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765" Mar 17 18:45:47.596416 kubelet[2662]: I0317 18:45:47.596374 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765"} err="failed to get container status \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\": rpc error: code = NotFound desc = an error occurred when try to find container \"60c506aca3b0787ee7dd3058819ed0dc5eb7db5bf725e6dbd5e92cc4d3cee765\": not found" Mar 17 18:45:47.596416 kubelet[2662]: I0317 18:45:47.596394 2662 scope.go:117] "RemoveContainer" containerID="e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d" Mar 17 18:45:47.596692 containerd[1483]: 
time="2025-03-17T18:45:47.596610724Z" level=error msg="ContainerStatus for \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\": not found" Mar 17 18:45:47.596824 kubelet[2662]: E0317 18:45:47.596745 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\": not found" containerID="e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d" Mar 17 18:45:47.596824 kubelet[2662]: I0317 18:45:47.596768 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d"} err="failed to get container status \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e85079ffbdb187f1c3cc0816901f450ec285e4002af7989a487a2b1fb99f136d\": not found" Mar 17 18:45:47.621369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fd2874cc55dcad0e52b2334235d28c9e1b913bfbd90052cb3cf1da6abf9ce1b-rootfs.mount: Deactivated successfully. Mar 17 18:45:47.621513 systemd[1]: var-lib-kubelet-pods-8db7f485\x2d3c3a\x2d4691\x2dac92\x2d9548382d0f9e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d66f2v.mount: Deactivated successfully. Mar 17 18:45:47.621617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814-rootfs.mount: Deactivated successfully. Mar 17 18:45:47.621740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c549e2f5528e3dd0fea922b9f7afe44962eb79606050a611e6c984ac30a01814-shm.mount: Deactivated successfully. 
Mar 17 18:45:47.621862 systemd[1]: var-lib-kubelet-pods-9661ec8e\x2de1bb\x2d4f46\x2da46e\x2d995b7d287c8b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnh7cb.mount: Deactivated successfully. Mar 17 18:45:47.621954 systemd[1]: var-lib-kubelet-pods-9661ec8e\x2de1bb\x2d4f46\x2da46e\x2d995b7d287c8b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:45:47.622061 systemd[1]: var-lib-kubelet-pods-9661ec8e\x2de1bb\x2d4f46\x2da46e\x2d995b7d287c8b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:45:48.742395 sshd[4257]: Connection closed by 172.24.4.1 port 47098 Mar 17 18:45:48.743133 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Mar 17 18:45:48.764708 systemd[1]: sshd@21-172.24.4.236:22-172.24.4.1:47098.service: Deactivated successfully. Mar 17 18:45:48.769595 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:45:48.770009 systemd[1]: session-24.scope: Consumed 1.081s CPU time, 23.8M memory peak. Mar 17 18:45:48.772712 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:45:48.781713 systemd[1]: Started sshd@22-172.24.4.236:22-172.24.4.1:52362.service - OpenSSH per-connection server daemon (172.24.4.1:52362). Mar 17 18:45:48.785801 systemd-logind[1463]: Removed session 24. 
Mar 17 18:45:48.789557 kubelet[2662]: I0317 18:45:48.786428 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db7f485-3c3a-4691-ac92-9548382d0f9e" path="/var/lib/kubelet/pods/8db7f485-3c3a-4691-ac92-9548382d0f9e/volumes" Mar 17 18:45:48.789557 kubelet[2662]: I0317 18:45:48.788943 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9661ec8e-e1bb-4f46-a46e-995b7d287c8b" path="/var/lib/kubelet/pods/9661ec8e-e1bb-4f46-a46e-995b7d287c8b/volumes" Mar 17 18:45:49.949303 kubelet[2662]: E0317 18:45:49.949141 2662 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:45:50.047394 sshd[4414]: Accepted publickey for core from 172.24.4.1 port 52362 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:45:50.050350 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:45:50.063573 systemd-logind[1463]: New session 25 of user core. Mar 17 18:45:50.070381 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 17 18:45:51.589225 kubelet[2662]: E0317 18:45:51.588499 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9661ec8e-e1bb-4f46-a46e-995b7d287c8b" containerName="mount-cgroup" Mar 17 18:45:51.589225 kubelet[2662]: E0317 18:45:51.588556 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9661ec8e-e1bb-4f46-a46e-995b7d287c8b" containerName="mount-bpf-fs" Mar 17 18:45:51.589225 kubelet[2662]: E0317 18:45:51.588576 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9661ec8e-e1bb-4f46-a46e-995b7d287c8b" containerName="cilium-agent" Mar 17 18:45:51.589225 kubelet[2662]: E0317 18:45:51.588595 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9661ec8e-e1bb-4f46-a46e-995b7d287c8b" containerName="apply-sysctl-overwrites" Mar 17 18:45:51.589225 kubelet[2662]: E0317 18:45:51.588610 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8db7f485-3c3a-4691-ac92-9548382d0f9e" containerName="cilium-operator" Mar 17 18:45:51.589225 kubelet[2662]: E0317 18:45:51.588626 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9661ec8e-e1bb-4f46-a46e-995b7d287c8b" containerName="clean-cilium-state" Mar 17 18:45:51.589225 kubelet[2662]: I0317 18:45:51.588682 2662 memory_manager.go:354] "RemoveStaleState removing state" podUID="9661ec8e-e1bb-4f46-a46e-995b7d287c8b" containerName="cilium-agent" Mar 17 18:45:51.589225 kubelet[2662]: I0317 18:45:51.588697 2662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db7f485-3c3a-4691-ac92-9548382d0f9e" containerName="cilium-operator" Mar 17 18:45:51.602239 systemd[1]: Created slice kubepods-burstable-pod2e37a18f_4a0b_4600_be76_432d11d954ae.slice - libcontainer container kubepods-burstable-pod2e37a18f_4a0b_4600_be76_432d11d954ae.slice. 
Mar 17 18:45:51.705082 sshd[4417]: Connection closed by 172.24.4.1 port 52362 Mar 17 18:45:51.706507 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Mar 17 18:45:51.722743 systemd[1]: sshd@22-172.24.4.236:22-172.24.4.1:52362.service: Deactivated successfully. Mar 17 18:45:51.727883 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:45:51.728844 systemd[1]: session-25.scope: Consumed 1.029s CPU time, 23.5M memory peak. Mar 17 18:45:51.730703 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:45:51.734384 kubelet[2662]: I0317 18:45:51.734306 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-etc-cni-netd\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734528 kubelet[2662]: I0317 18:45:51.734392 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2e37a18f-4a0b-4600-be76-432d11d954ae-cilium-ipsec-secrets\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734528 kubelet[2662]: I0317 18:45:51.734445 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-cilium-cgroup\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734528 kubelet[2662]: I0317 18:45:51.734487 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-cni-path\") pod \"cilium-rzf67\" (UID: 
\"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734730 kubelet[2662]: I0317 18:45:51.734532 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-host-proc-sys-kernel\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734730 kubelet[2662]: I0317 18:45:51.734574 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4bd\" (UniqueName: \"kubernetes.io/projected/2e37a18f-4a0b-4600-be76-432d11d954ae-kube-api-access-kn4bd\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734730 kubelet[2662]: I0317 18:45:51.734648 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-hostproc\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734730 kubelet[2662]: I0317 18:45:51.734692 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e37a18f-4a0b-4600-be76-432d11d954ae-clustermesh-secrets\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734961 kubelet[2662]: I0317 18:45:51.734734 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e37a18f-4a0b-4600-be76-432d11d954ae-hubble-tls\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734961 
kubelet[2662]: I0317 18:45:51.734775 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-cilium-run\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734961 kubelet[2662]: I0317 18:45:51.734813 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-bpf-maps\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734961 kubelet[2662]: I0317 18:45:51.734854 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-lib-modules\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734961 kubelet[2662]: I0317 18:45:51.734898 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-xtables-lock\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.734961 kubelet[2662]: I0317 18:45:51.734939 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e37a18f-4a0b-4600-be76-432d11d954ae-cilium-config-path\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.735486 kubelet[2662]: I0317 18:45:51.734980 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e37a18f-4a0b-4600-be76-432d11d954ae-host-proc-sys-net\") pod \"cilium-rzf67\" (UID: \"2e37a18f-4a0b-4600-be76-432d11d954ae\") " pod="kube-system/cilium-rzf67" Mar 17 18:45:51.741275 systemd[1]: Started sshd@23-172.24.4.236:22-172.24.4.1:52366.service - OpenSSH per-connection server daemon (172.24.4.1:52366). Mar 17 18:45:51.746468 systemd-logind[1463]: Removed session 25. Mar 17 18:45:51.913221 containerd[1483]: time="2025-03-17T18:45:51.911917678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rzf67,Uid:2e37a18f-4a0b-4600-be76-432d11d954ae,Namespace:kube-system,Attempt:0,}" Mar 17 18:45:51.943863 containerd[1483]: time="2025-03-17T18:45:51.943621008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:45:51.943863 containerd[1483]: time="2025-03-17T18:45:51.943686041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:45:51.943863 containerd[1483]: time="2025-03-17T18:45:51.943705798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:45:51.943863 containerd[1483]: time="2025-03-17T18:45:51.943807911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:45:51.971321 systemd[1]: Started cri-containerd-62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27.scope - libcontainer container 62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27. 
Mar 17 18:45:51.998587 containerd[1483]: time="2025-03-17T18:45:51.998484707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rzf67,Uid:2e37a18f-4a0b-4600-be76-432d11d954ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\"" Mar 17 18:45:52.002551 containerd[1483]: time="2025-03-17T18:45:52.002477229Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:45:52.020447 containerd[1483]: time="2025-03-17T18:45:52.020377675Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"929042e048193f117dd251a46118e26d400983992c2e2d24511b7e7292ef6c97\"" Mar 17 18:45:52.020972 containerd[1483]: time="2025-03-17T18:45:52.020946411Z" level=info msg="StartContainer for \"929042e048193f117dd251a46118e26d400983992c2e2d24511b7e7292ef6c97\"" Mar 17 18:45:52.053221 systemd[1]: Started cri-containerd-929042e048193f117dd251a46118e26d400983992c2e2d24511b7e7292ef6c97.scope - libcontainer container 929042e048193f117dd251a46118e26d400983992c2e2d24511b7e7292ef6c97. Mar 17 18:45:52.093513 containerd[1483]: time="2025-03-17T18:45:52.093455302Z" level=info msg="StartContainer for \"929042e048193f117dd251a46118e26d400983992c2e2d24511b7e7292ef6c97\" returns successfully" Mar 17 18:45:52.101283 systemd[1]: cri-containerd-929042e048193f117dd251a46118e26d400983992c2e2d24511b7e7292ef6c97.scope: Deactivated successfully. 
Mar 17 18:45:52.149055 containerd[1483]: time="2025-03-17T18:45:52.148950906Z" level=info msg="shim disconnected" id=929042e048193f117dd251a46118e26d400983992c2e2d24511b7e7292ef6c97 namespace=k8s.io Mar 17 18:45:52.149679 containerd[1483]: time="2025-03-17T18:45:52.149399314Z" level=warning msg="cleaning up after shim disconnected" id=929042e048193f117dd251a46118e26d400983992c2e2d24511b7e7292ef6c97 namespace=k8s.io Mar 17 18:45:52.149679 containerd[1483]: time="2025-03-17T18:45:52.149424221Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:45:52.551138 containerd[1483]: time="2025-03-17T18:45:52.550881497Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:45:52.599954 containerd[1483]: time="2025-03-17T18:45:52.599824598Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d8d9c35af1629545abfd7dd6913cbca70856763ff1e1b0e027f3936835d982e0\"" Mar 17 18:45:52.601645 containerd[1483]: time="2025-03-17T18:45:52.601561784Z" level=info msg="StartContainer for \"d8d9c35af1629545abfd7dd6913cbca70856763ff1e1b0e027f3936835d982e0\"" Mar 17 18:45:52.675271 systemd[1]: Started cri-containerd-d8d9c35af1629545abfd7dd6913cbca70856763ff1e1b0e027f3936835d982e0.scope - libcontainer container d8d9c35af1629545abfd7dd6913cbca70856763ff1e1b0e027f3936835d982e0. Mar 17 18:45:52.705822 containerd[1483]: time="2025-03-17T18:45:52.705776664Z" level=info msg="StartContainer for \"d8d9c35af1629545abfd7dd6913cbca70856763ff1e1b0e027f3936835d982e0\" returns successfully" Mar 17 18:45:52.708518 systemd[1]: cri-containerd-d8d9c35af1629545abfd7dd6913cbca70856763ff1e1b0e027f3936835d982e0.scope: Deactivated successfully. 
Mar 17 18:45:52.736371 containerd[1483]: time="2025-03-17T18:45:52.736303841Z" level=info msg="shim disconnected" id=d8d9c35af1629545abfd7dd6913cbca70856763ff1e1b0e027f3936835d982e0 namespace=k8s.io Mar 17 18:45:52.736371 containerd[1483]: time="2025-03-17T18:45:52.736358184Z" level=warning msg="cleaning up after shim disconnected" id=d8d9c35af1629545abfd7dd6913cbca70856763ff1e1b0e027f3936835d982e0 namespace=k8s.io Mar 17 18:45:52.736371 containerd[1483]: time="2025-03-17T18:45:52.736369295Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:45:53.026676 sshd[4426]: Accepted publickey for core from 172.24.4.1 port 52366 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:45:53.028407 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:45:53.047617 systemd-logind[1463]: New session 26 of user core. Mar 17 18:45:53.051772 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 18:45:53.560875 containerd[1483]: time="2025-03-17T18:45:53.560068697Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:45:53.601113 containerd[1483]: time="2025-03-17T18:45:53.599947975Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a\"" Mar 17 18:45:53.602485 containerd[1483]: time="2025-03-17T18:45:53.602410493Z" level=info msg="StartContainer for \"39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a\"" Mar 17 18:45:53.662305 systemd[1]: Started cri-containerd-39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a.scope - libcontainer container 39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a. 
Mar 17 18:45:53.696420 systemd[1]: cri-containerd-39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a.scope: Deactivated successfully. Mar 17 18:45:53.698721 containerd[1483]: time="2025-03-17T18:45:53.698311749Z" level=info msg="StartContainer for \"39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a\" returns successfully" Mar 17 18:45:53.727561 containerd[1483]: time="2025-03-17T18:45:53.727485371Z" level=info msg="shim disconnected" id=39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a namespace=k8s.io Mar 17 18:45:53.727561 containerd[1483]: time="2025-03-17T18:45:53.727556585Z" level=warning msg="cleaning up after shim disconnected" id=39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a namespace=k8s.io Mar 17 18:45:53.727561 containerd[1483]: time="2025-03-17T18:45:53.727566123Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:45:53.741075 containerd[1483]: time="2025-03-17T18:45:53.740433237Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 18:45:53.811207 sshd[4597]: Connection closed by 172.24.4.1 port 52366 Mar 17 18:45:53.811282 sshd-session[4426]: pam_unix(sshd:session): session closed for user core Mar 17 18:45:53.826295 systemd[1]: sshd@23-172.24.4.236:22-172.24.4.1:52366.service: Deactivated successfully. Mar 17 18:45:53.830634 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:45:53.833673 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:45:53.846599 systemd[1]: Started sshd@24-172.24.4.236:22-172.24.4.1:34352.service - OpenSSH per-connection server daemon (172.24.4.1:34352). 
Mar 17 18:45:53.857182 systemd[1]: run-containerd-runc-k8s.io-39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a-runc.cogQAd.mount: Deactivated successfully. Mar 17 18:45:53.857531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39ce26d59f90efc8ba40c830dbca4d8061fc1def23caf216197733c4a776f75a-rootfs.mount: Deactivated successfully. Mar 17 18:45:53.867344 systemd-logind[1463]: Removed session 26. Mar 17 18:45:54.583672 containerd[1483]: time="2025-03-17T18:45:54.583355841Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:45:54.828010 containerd[1483]: time="2025-03-17T18:45:54.827705418Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e\"" Mar 17 18:45:54.829682 containerd[1483]: time="2025-03-17T18:45:54.829497387Z" level=info msg="StartContainer for \"32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e\"" Mar 17 18:45:54.910811 systemd[1]: run-containerd-runc-k8s.io-32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e-runc.lyejDF.mount: Deactivated successfully. Mar 17 18:45:54.926205 systemd[1]: Started cri-containerd-32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e.scope - libcontainer container 32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e. 
Mar 17 18:45:54.950441 kubelet[2662]: E0317 18:45:54.950394 2662 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:45:54.955292 systemd[1]: cri-containerd-32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e.scope: Deactivated successfully. Mar 17 18:45:54.961424 containerd[1483]: time="2025-03-17T18:45:54.961274633Z" level=info msg="StartContainer for \"32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e\" returns successfully" Mar 17 18:45:54.986918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e-rootfs.mount: Deactivated successfully. Mar 17 18:45:54.993810 containerd[1483]: time="2025-03-17T18:45:54.993567930Z" level=info msg="shim disconnected" id=32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e namespace=k8s.io Mar 17 18:45:54.993810 containerd[1483]: time="2025-03-17T18:45:54.993635167Z" level=warning msg="cleaning up after shim disconnected" id=32c01e9e98081459e3de1273c4d6311dd45f62056ae43191e636e69d5281367e namespace=k8s.io Mar 17 18:45:54.993810 containerd[1483]: time="2025-03-17T18:45:54.993652860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 18:45:55.006290 containerd[1483]: time="2025-03-17T18:45:55.006146624Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:45:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 18:45:55.273086 sshd[4660]: Accepted publickey for core from 172.24.4.1 port 34352 ssh2: RSA SHA256:gEYPMllUZLGQM0dbqmCj76cKqE6l7cF6D8vnHTtWCo4 Mar 17 18:45:55.275583 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 18:45:55.286795 systemd-logind[1463]: New session 27 of user 
core. Mar 17 18:45:55.303456 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 17 18:45:55.593608 containerd[1483]: time="2025-03-17T18:45:55.590987203Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:45:55.631487 containerd[1483]: time="2025-03-17T18:45:55.631398622Z" level=info msg="CreateContainer within sandbox \"62bb7eb42e60fe942a85d8bc4e403b97c425ee0ad0ad7510dff448a034d03c27\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ca43b1fff00fc26c5ee4eeb9d3cf6a2ff295e33f5b5077af57c1dc18060601ab\"" Mar 17 18:45:55.633765 containerd[1483]: time="2025-03-17T18:45:55.632572351Z" level=info msg="StartContainer for \"ca43b1fff00fc26c5ee4eeb9d3cf6a2ff295e33f5b5077af57c1dc18060601ab\"" Mar 17 18:45:55.696772 systemd[1]: Started cri-containerd-ca43b1fff00fc26c5ee4eeb9d3cf6a2ff295e33f5b5077af57c1dc18060601ab.scope - libcontainer container ca43b1fff00fc26c5ee4eeb9d3cf6a2ff295e33f5b5077af57c1dc18060601ab. Mar 17 18:45:55.738086 containerd[1483]: time="2025-03-17T18:45:55.738020810Z" level=info msg="StartContainer for \"ca43b1fff00fc26c5ee4eeb9d3cf6a2ff295e33f5b5077af57c1dc18060601ab\" returns successfully" Mar 17 18:45:55.868882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4166941833.mount: Deactivated successfully. 
Mar 17 18:45:56.197109 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:45:56.254188 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Mar 17 18:45:56.668131 kubelet[2662]: I0317 18:45:56.666491 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rzf67" podStartSLOduration=5.666470244 podStartE2EDuration="5.666470244s" podCreationTimestamp="2025-03-17 18:45:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:45:56.661948305 +0000 UTC m=+201.987739139" watchObservedRunningTime="2025-03-17 18:45:56.666470244 +0000 UTC m=+201.992261048" Mar 17 18:45:57.985323 kubelet[2662]: E0317 18:45:57.985284 2662 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47420->127.0.0.1:38955: write tcp 127.0.0.1:47420->127.0.0.1:38955: write: broken pipe Mar 17 18:45:58.870047 kubelet[2662]: I0317 18:45:58.869957 2662 setters.go:600] "Node became not ready" node="ci-4230-1-0-9-731388c134.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:45:58Z","lastTransitionTime":"2025-03-17T18:45:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:45:59.702447 systemd-networkd[1397]: lxc_health: Link UP Mar 17 18:45:59.711949 systemd-networkd[1397]: lxc_health: Gained carrier Mar 17 18:46:01.209218 systemd-networkd[1397]: lxc_health: Gained IPv6LL Mar 17 18:46:07.288734 sshd[4718]: Connection closed by 172.24.4.1 port 34352 Mar 17 18:46:07.288487 sshd-session[4660]: pam_unix(sshd:session): session closed for user core Mar 17 18:46:07.296107 systemd[1]: sshd@24-172.24.4.236:22-172.24.4.1:34352.service: Deactivated successfully. 
Mar 17 18:46:07.302533 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 18:46:07.307726 systemd-logind[1463]: Session 27 logged out. Waiting for processes to exit. Mar 17 18:46:07.310760 systemd-logind[1463]: Removed session 27.