Jan 17 12:08:54.032775 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:08:54.032803 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:08:54.032814 kernel: BIOS-provided physical RAM map: Jan 17 12:08:54.032822 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 12:08:54.032829 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 12:08:54.032840 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 12:08:54.032849 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jan 17 12:08:54.032857 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jan 17 12:08:54.032865 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 12:08:54.032872 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 12:08:54.032880 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jan 17 12:08:54.032888 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 17 12:08:54.032896 kernel: NX (Execute Disable) protection: active Jan 17 12:08:54.032904 kernel: APIC: Static calls initialized Jan 17 12:08:54.032915 kernel: SMBIOS 3.0.0 present. Jan 17 12:08:54.032924 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jan 17 12:08:54.032932 kernel: Hypervisor detected: KVM Jan 17 12:08:54.032940 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:08:54.032948 kernel: kvm-clock: using sched offset of 4582968582 cycles Jan 17 12:08:54.032959 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:08:54.032967 kernel: tsc: Detected 1996.249 MHz processor Jan 17 12:08:54.032976 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:08:54.032985 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:08:54.032993 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jan 17 12:08:54.033002 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:08:54.033011 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:08:54.033019 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jan 17 12:08:54.033027 kernel: ACPI: Early table checksum verification disabled Jan 17 12:08:54.033038 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jan 17 12:08:54.033046 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:54.033055 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:54.033063 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:54.033072 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jan 17 12:08:54.033080 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:54.033088 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:08:54.033097 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jan 17 12:08:54.033105 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jan 17 12:08:54.033116 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jan 17 12:08:54.033124 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jan 17 12:08:54.033132 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jan 17 12:08:54.033144 kernel: No NUMA configuration found Jan 17 12:08:54.033152 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jan 17 12:08:54.033161 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] Jan 17 12:08:54.033172 kernel: Zone ranges: Jan 17 12:08:54.033181 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:08:54.033189 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 12:08:54.033198 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jan 17 12:08:54.033206 kernel: Movable zone start for each node Jan 17 12:08:54.033215 kernel: Early memory node ranges Jan 17 12:08:54.033224 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:08:54.033232 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jan 17 12:08:54.033241 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jan 17 12:08:54.033252 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jan 17 12:08:54.033260 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:08:54.033269 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:08:54.033278 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 17 12:08:54.033286 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:08:54.033295 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:08:54.033304 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:08:54.033313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:08:54.033322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:08:54.033351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:08:54.033360 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:08:54.033369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:08:54.033378 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:08:54.033386 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:08:54.033395 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:08:54.033404 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jan 17 12:08:54.033412 kernel: Booting paravirtualized kernel on KVM Jan 17 12:08:54.033421 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:08:54.033433 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:08:54.033442 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 17 12:08:54.033450 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:08:54.033459 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:08:54.033467 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 17 12:08:54.033478 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:08:54.033487 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:08:54.033498 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:08:54.033507 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:08:54.033516 kernel: Fallback order for Node 0: 0 Jan 17 12:08:54.033524 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 17 12:08:54.033533 kernel: Policy zone: Normal Jan 17 12:08:54.033542 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:08:54.033551 kernel: software IO TLB: area num 2. Jan 17 12:08:54.033560 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 227308K reserved, 0K cma-reserved) Jan 17 12:08:54.033569 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:08:54.033579 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:08:54.033588 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:08:54.033596 kernel: Dynamic Preempt: voluntary Jan 17 12:08:54.033605 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:08:54.033615 kernel: rcu: RCU event tracing is enabled. Jan 17 12:08:54.033624 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:08:54.033633 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:08:54.033642 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:08:54.033650 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:08:54.033659 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:08:54.033670 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:08:54.033678 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:08:54.033687 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:08:54.033696 kernel: Console: colour VGA+ 80x25 Jan 17 12:08:54.033705 kernel: printk: console [tty0] enabled Jan 17 12:08:54.033714 kernel: printk: console [ttyS0] enabled Jan 17 12:08:54.033722 kernel: ACPI: Core revision 20230628 Jan 17 12:08:54.033732 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:08:54.033740 kernel: x2apic enabled Jan 17 12:08:54.033751 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:08:54.033760 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:08:54.033768 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 12:08:54.033777 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Jan 17 12:08:54.033786 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 17 12:08:54.033795 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 17 12:08:54.033804 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:08:54.033812 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:08:54.033821 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:08:54.033832 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:08:54.033840 kernel: Speculative Store Bypass: Vulnerable Jan 17 12:08:54.033849 kernel: x86/fpu: x87 FPU will use FXSAVE Jan 17 12:08:54.033858 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:08:54.033873 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:08:54.033883 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:08:54.033893 kernel: landlock: Up and running. Jan 17 12:08:54.033902 kernel: SELinux: Initializing. Jan 17 12:08:54.033911 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:08:54.033920 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:08:54.033930 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jan 17 12:08:54.033941 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:08:54.033951 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:08:54.033960 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:08:54.033970 kernel: Performance Events: AMD PMU driver. Jan 17 12:08:54.033979 kernel: ... version: 0 Jan 17 12:08:54.033990 kernel: ... bit width: 48 Jan 17 12:08:54.033999 kernel: ... generic registers: 4 Jan 17 12:08:54.034008 kernel: ... value mask: 0000ffffffffffff Jan 17 12:08:54.034018 kernel: ... max period: 00007fffffffffff Jan 17 12:08:54.034027 kernel: ... fixed-purpose events: 0 Jan 17 12:08:54.034036 kernel: ... event mask: 000000000000000f Jan 17 12:08:54.034045 kernel: signal: max sigframe size: 1440 Jan 17 12:08:54.034055 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:08:54.034064 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:08:54.034075 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:08:54.034084 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:08:54.034093 kernel: .... 
node #0, CPUs: #1 Jan 17 12:08:54.034102 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:08:54.034112 kernel: smpboot: Max logical packages: 2 Jan 17 12:08:54.034121 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jan 17 12:08:54.034130 kernel: devtmpfs: initialized Jan 17 12:08:54.034139 kernel: x86/mm: Memory block size: 128MB Jan 17 12:08:54.034149 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:08:54.034159 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:08:54.034169 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:08:54.034179 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:08:54.034188 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:08:54.034197 kernel: audit: type=2000 audit(1737115732.624:1): state=initialized audit_enabled=0 res=1 Jan 17 12:08:54.034206 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:08:54.034216 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:08:54.034225 kernel: cpuidle: using governor menu Jan 17 12:08:54.034234 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:08:54.034244 kernel: dca service started, version 1.12.1 Jan 17 12:08:54.034255 kernel: PCI: Using configuration type 1 for base access Jan 17 12:08:54.034264 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 12:08:54.034274 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:08:54.034283 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:08:54.034292 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:08:54.034302 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:08:54.034311 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:08:54.034320 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:08:54.036376 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:08:54.036393 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:08:54.036403 kernel: ACPI: Interpreter enabled Jan 17 12:08:54.036413 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 12:08:54.036422 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:08:54.036432 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:08:54.036442 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:08:54.036451 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 17 12:08:54.036461 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:08:54.036609 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:08:54.036716 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:08:54.036816 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:08:54.036830 kernel: acpiphp: Slot [3] registered Jan 17 12:08:54.036840 kernel: acpiphp: Slot [4] registered Jan 17 12:08:54.036849 kernel: acpiphp: Slot [5] registered Jan 17 12:08:54.036858 kernel: acpiphp: Slot [6] registered Jan 17 12:08:54.036868 kernel: acpiphp: Slot [7] registered Jan 17 12:08:54.036880 kernel: acpiphp: Slot [8] registered Jan 17 12:08:54.036889 kernel: acpiphp: Slot [9] registered Jan 17 12:08:54.036898 kernel: acpiphp: Slot [10] registered Jan 17 12:08:54.036908 
kernel: acpiphp: Slot [11] registered Jan 17 12:08:54.036917 kernel: acpiphp: Slot [12] registered Jan 17 12:08:54.036926 kernel: acpiphp: Slot [13] registered Jan 17 12:08:54.036935 kernel: acpiphp: Slot [14] registered Jan 17 12:08:54.036944 kernel: acpiphp: Slot [15] registered Jan 17 12:08:54.036953 kernel: acpiphp: Slot [16] registered Jan 17 12:08:54.036964 kernel: acpiphp: Slot [17] registered Jan 17 12:08:54.036973 kernel: acpiphp: Slot [18] registered Jan 17 12:08:54.036982 kernel: acpiphp: Slot [19] registered Jan 17 12:08:54.036991 kernel: acpiphp: Slot [20] registered Jan 17 12:08:54.037000 kernel: acpiphp: Slot [21] registered Jan 17 12:08:54.037009 kernel: acpiphp: Slot [22] registered Jan 17 12:08:54.037018 kernel: acpiphp: Slot [23] registered Jan 17 12:08:54.037027 kernel: acpiphp: Slot [24] registered Jan 17 12:08:54.037037 kernel: acpiphp: Slot [25] registered Jan 17 12:08:54.037359 kernel: acpiphp: Slot [26] registered Jan 17 12:08:54.037374 kernel: acpiphp: Slot [27] registered Jan 17 12:08:54.037383 kernel: acpiphp: Slot [28] registered Jan 17 12:08:54.037392 kernel: acpiphp: Slot [29] registered Jan 17 12:08:54.037402 kernel: acpiphp: Slot [30] registered Jan 17 12:08:54.037411 kernel: acpiphp: Slot [31] registered Jan 17 12:08:54.037420 kernel: PCI host bridge to bus 0000:00 Jan 17 12:08:54.037526 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:08:54.037615 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:08:54.037707 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:08:54.037792 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:08:54.037876 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jan 17 12:08:54.037960 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:08:54.038073 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:08:54.038180 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 17 12:08:54.038284 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 17 12:08:54.039458 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Jan 17 12:08:54.039562 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 17 12:08:54.039659 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 17 12:08:54.039756 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 17 12:08:54.039852 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 17 12:08:54.039955 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 12:08:54.040058 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 17 12:08:54.040148 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 17 12:08:54.040246 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 17 12:08:54.040360 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 17 12:08:54.040457 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Jan 17 12:08:54.040547 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Jan 17 12:08:54.040638 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Jan 17 12:08:54.040734 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:08:54.040838 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:08:54.040930 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jan 17 12:08:54.041021 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jan 17 12:08:54.047042 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jan 17 12:08:54.047220 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jan 17 12:08:54.047380 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:08:54.047566 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:08:54.047685 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jan 17 12:08:54.047801 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jan 17 12:08:54.047926 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jan 17 12:08:54.048034 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jan 17 12:08:54.048137 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jan 17 12:08:54.048245 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:08:54.048463 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jan 17 12:08:54.048570 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jan 17 12:08:54.048674 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jan 17 12:08:54.048689 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:08:54.048700 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:08:54.048710 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:08:54.048720 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:08:54.048730 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 12:08:54.048745 kernel: iommu: Default domain type: Translated
Jan 17 12:08:54.048755 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:08:54.048765 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:08:54.048775 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:08:54.048785 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:08:54.048795 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jan 17 12:08:54.048892 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 17 12:08:54.048990 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 17 12:08:54.049092 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:08:54.049107 kernel: vgaarb: loaded
Jan 17 12:08:54.049119 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:08:54.049130 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:08:54.049141 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:08:54.049151 kernel: pnp: PnP ACPI init
Jan 17 12:08:54.049257 kernel: pnp 00:03: [dma 2]
Jan 17 12:08:54.049274 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 12:08:54.049285 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:08:54.049300 kernel: NET: Registered PF_INET protocol family
Jan 17 12:08:54.049311 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:08:54.049322 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:08:54.052905 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:08:54.052919 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:08:54.052930 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:08:54.052941 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:08:54.052952 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:08:54.052963 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:08:54.052978 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:08:54.052989 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:08:54.053098 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:08:54.053201 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:08:54.053290 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:08:54.053401 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 17 12:08:54.053492 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jan 17 12:08:54.053600 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 17 12:08:54.053712 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 12:08:54.053728 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:08:54.053739 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 12:08:54.053750 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jan 17 12:08:54.053761 kernel: Initialise system trusted keyrings
Jan 17 12:08:54.053773 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:08:54.053783 kernel: Key type asymmetric registered
Jan 17 12:08:54.053794 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:08:54.053808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:08:54.053819 kernel: io scheduler mq-deadline registered
Jan 17 12:08:54.053830 kernel: io scheduler kyber registered
Jan 17 12:08:54.053840 kernel: io scheduler bfq registered
Jan 17 12:08:54.053851 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:08:54.053862 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 17 12:08:54.053873 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 17 12:08:54.053884 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:08:54.053895 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 17 12:08:54.053908 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:08:54.053919 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:08:54.053929 kernel: random: crng init done
Jan 17 12:08:54.053940 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:08:54.053951 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:08:54.053961 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:08:54.054065 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 12:08:54.054081 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:08:54.054171 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 12:08:54.054300 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:08:53 UTC (1737115733)
Jan 17 12:08:54.054509 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 17 12:08:54.054526 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 12:08:54.054537 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:08:54.054547 kernel: Segment Routing with IPv6
Jan 17 12:08:54.054558 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:08:54.054569 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:08:54.054580 kernel: Key type dns_resolver registered
Jan 17 12:08:54.054617 kernel: IPI shorthand broadcast: enabled
Jan 17 12:08:54.054630 kernel: sched_clock: Marking stable (997007812, 172750907)->(1211986050, -42227331)
Jan 17 12:08:54.054640 kernel: registered taskstats version 1
Jan 17 12:08:54.054651 kernel: Loading compiled-in X.509 certificates
Jan 17 12:08:54.054662 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:08:54.054673 kernel: Key type .fscrypt registered
Jan 17 12:08:54.054683 kernel: Key type fscrypt-provisioning registered
Jan 17 12:08:54.054694 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:08:54.054704 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:08:54.054718 kernel: ima: No architecture policies found
Jan 17 12:08:54.054729 kernel: clk: Disabling unused clocks
Jan 17 12:08:54.054739 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:08:54.054762 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:08:54.054777 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:08:54.054792 kernel: Run /init as init process
Jan 17 12:08:54.054806 kernel: with arguments:
Jan 17 12:08:54.054817 kernel: /init
Jan 17 12:08:54.054827 kernel: with environment:
Jan 17 12:08:54.054840 kernel: HOME=/
Jan 17 12:08:54.054850 kernel: TERM=linux
Jan 17 12:08:54.054861 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:08:54.054875 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:08:54.054890 systemd[1]: Detected virtualization kvm.
Jan 17 12:08:54.054902 systemd[1]: Detected architecture x86-64.
Jan 17 12:08:54.054913 systemd[1]: Running in initrd.
Jan 17 12:08:54.054926 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:08:54.054937 systemd[1]: Hostname set to .
Jan 17 12:08:54.054949 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:08:54.054960 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:08:54.054972 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:08:54.054983 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:08:54.054996 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:08:54.055018 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:08:54.055032 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:08:54.055044 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:08:54.055058 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:08:54.055070 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:08:54.055082 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:08:54.055096 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:08:54.055107 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:08:54.055119 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:08:54.055131 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:08:54.055142 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:08:54.055154 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:08:54.055166 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:08:54.055178 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:08:54.055192 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:08:54.055204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:08:54.055216 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:08:54.055228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:08:54.055239 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:08:54.055251 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:08:54.055263 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:08:54.055275 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:08:54.055287 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:08:54.055301 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:08:54.055313 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:08:54.056007 systemd-journald[184]: Collecting audit messages is disabled. Jan 17 12:08:54.056042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:08:54.056059 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:08:54.056071 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:08:54.056083 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:08:54.056100 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:08:54.056113 systemd-journald[184]: Journal started Jan 17 12:08:54.056139 systemd-journald[184]: Runtime Journal (/run/log/journal/63186cfa351c4a8bb554615e91af0186) is 8.0M, max 78.3M, 70.3M free. Jan 17 12:08:54.054640 systemd-modules-load[185]: Inserted module 'overlay' Jan 17 12:08:54.058974 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:08:54.070481 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:08:54.110339 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:08:54.110362 kernel: Bridge firewalling registered Jan 17 12:08:54.093472 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 17 12:08:54.110899 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:08:54.117497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:08:54.118284 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
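The bridge driver's note above means bridged traffic is no longer filtered by arp/ip/ip6tables unless br_netfilter is present; the initrd loads it here via systemd-modules-load. A minimal check-then-load sketch in Python (an illustration, not Flatcar's mechanism; assumes root and modprobe on PATH):

    import subprocess

    def module_loaded(name: str) -> bool:
        # /proc/modules lists one loaded module per line, name in the first column.
        with open("/proc/modules") as f:
            return any(line.split()[0] == name for line in f)

    if not module_loaded("br_netfilter"):
        subprocess.run(["modprobe", "br_netfilter"], check=True)  # needs root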
Jan 17 12:08:54.131507 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:08:54.133490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:08:54.138084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:08:54.140429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:08:54.148273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:08:54.157588 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:08:54.159134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:08:54.159847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:08:54.163446 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:08:54.189750 systemd-resolved[216]: Positive Trust Anchors: Jan 17 12:08:54.189768 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:08:54.189810 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:08:54.199878 dracut-cmdline[220]: dracut-dracut-053 Jan 17 12:08:54.193556 systemd-resolved[216]: Defaulting to hostname 'linux'. Jan 17 12:08:54.201038 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:08:54.194634 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:08:54.196574 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:08:54.293354 kernel: SCSI subsystem initialized Jan 17 12:08:54.304420 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:08:54.317389 kernel: iscsi: registered transport (tcp) Jan 17 12:08:54.340995 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:08:54.341055 kernel: QLogic iSCSI HBA Driver Jan 17 12:08:54.394279 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:08:54.399629 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:08:54.450812 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
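The command line that dracut echoes above is a flat list of space-separated tokens: bare flags such as flatcar.autologin, and key=value pairs, with repeats allowed (rootflags=rw appears twice). A small parsing sketch, assuming no quoted values, which this command line does not use:

    def parse_cmdline(cmdline: str) -> dict:
        # Bare flags map to None; repeated keys keep the last occurrence.
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    with open("/proc/cmdline") as f:
        args = parse_cmdline(f.read())
    # On this boot: args["root"] == "LABEL=ROOT", args["flatcar.oem.id"] == "openstack"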
Jan 17 12:08:54.450903 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:08:54.452869 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:08:54.513408 kernel: raid6: sse2x4 gen() 5213 MB/s Jan 17 12:08:54.531440 kernel: raid6: sse2x2 gen() 6183 MB/s Jan 17 12:08:54.549718 kernel: raid6: sse2x1 gen() 9993 MB/s Jan 17 12:08:54.549777 kernel: raid6: using algorithm sse2x1 gen() 9993 MB/s Jan 17 12:08:54.568787 kernel: raid6: .... xor() 7406 MB/s, rmw enabled Jan 17 12:08:54.568856 kernel: raid6: using ssse3x2 recovery algorithm Jan 17 12:08:54.591115 kernel: xor: measuring software checksum speed Jan 17 12:08:54.591183 kernel: prefetch64-sse : 18495 MB/sec Jan 17 12:08:54.591599 kernel: generic_sse : 16814 MB/sec Jan 17 12:08:54.592711 kernel: xor: using function: prefetch64-sse (18495 MB/sec) Jan 17 12:08:54.780394 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:08:54.796138 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:08:54.805685 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:08:54.819540 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 17 12:08:54.823944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:08:54.833580 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:08:54.851939 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Jan 17 12:08:54.889174 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:08:54.897586 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:08:54.943501 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:08:54.955656 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:08:54.991720 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:08:54.999189 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:08:55.002360 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:08:55.004303 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:08:55.012575 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:08:55.021346 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jan 17 12:08:55.071366 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jan 17 12:08:55.071514 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:08:55.071530 kernel: GPT:17805311 != 20971519 Jan 17 12:08:55.071543 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:08:55.071556 kernel: GPT:17805311 != 20971519 Jan 17 12:08:55.071567 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:08:55.071579 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:08:55.029487 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:08:55.075965 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:08:55.076051 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:08:55.078136 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:08:55.080614 kernel: libata version 3.00 loaded. 
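The GPT warnings above are the classic signature of a disk image grown after it was written: the backup header still sits at the original last sector (17805311) instead of the current one (20971519), which is why the kernel suggests repairing the table. The capacity line is plain arithmetic over the 20971520 reported 512-byte blocks:

    size_bytes = 20971520 * 512
    print(size_bytes / 10**9)   # 10.737... -> the "10.7 GB" (decimal) in the log
    print(size_bytes / 2**30)   # 10.0      -> the "10.0 GiB" (binary) in the log

    last_sector = 20971520 - 1  # GPT keeps its backup header on the last sector
    print(17805311 == last_sector)  # False -> "GPT:17805311 != 20971519"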
Jan 17 12:08:55.079712 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:08:55.079770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:08:55.081247 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:08:55.086980 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 17 12:08:55.094616 kernel: scsi host0: ata_piix Jan 17 12:08:55.094739 kernel: scsi host1: ata_piix Jan 17 12:08:55.094924 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Jan 17 12:08:55.094939 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Jan 17 12:08:55.091087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:08:55.114344 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (465) Jan 17 12:08:55.121360 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) Jan 17 12:08:55.123734 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:08:55.175107 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:08:55.176613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:08:55.188043 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:08:55.196787 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:08:55.197921 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:08:55.208699 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:08:55.215567 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:08:55.252487 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:08:55.257451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:08:55.258032 disk-uuid[502]: Primary Header is updated. Jan 17 12:08:55.258032 disk-uuid[502]: Secondary Entries is updated. Jan 17 12:08:55.258032 disk-uuid[502]: Secondary Header is updated. Jan 17 12:08:56.291675 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:08:56.293883 disk-uuid[511]: The operation has completed successfully. Jan 17 12:08:56.371268 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:08:56.371555 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:08:56.406483 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:08:56.410281 sh[525]: Success Jan 17 12:08:56.437117 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Jan 17 12:08:56.544065 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:08:56.547576 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:08:56.552527 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
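The dev-disk-by\x2dlabel-*.device units found above are systemd's view of udev symlinks under /dev/disk/by-label (the \x2d escapes are just '-' encoded for unit names). A short sketch resolving those links to the underlying partitions, assuming the standard udev layout:

    import glob
    import os

    # e.g. on this machine ROOT and OEM resolve to partitions of /dev/vda.
    for link in sorted(glob.glob("/dev/disk/by-label/*")):
        print(os.path.basename(link), "->", os.path.realpath(link))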
Jan 17 12:08:56.644602 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:08:56.644697 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:08:56.649358 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:08:56.654216 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:08:56.657936 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:08:57.100009 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:08:57.102733 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:08:57.112709 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:08:57.119723 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:08:57.265525 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:08:57.265632 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:08:57.271625 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:08:57.319708 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:08:57.328397 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:08:57.332802 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:08:57.388034 systemd-networkd[698]: lo: Link UP Jan 17 12:08:57.388047 systemd-networkd[698]: lo: Gained carrier Jan 17 12:08:57.389262 systemd-networkd[698]: Enumeration completed Jan 17 12:08:57.389596 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:08:57.390237 systemd[1]: Reached target network.target - Network. Jan 17 12:08:57.390425 systemd-networkd[698]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:08:57.390429 systemd-networkd[698]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:08:57.391275 systemd-networkd[698]: eth0: Link UP Jan 17 12:08:57.391280 systemd-networkd[698]: eth0: Gained carrier Jan 17 12:08:57.391288 systemd-networkd[698]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:08:57.409430 systemd-networkd[698]: eth0: DHCPv4 address 172.24.4.251/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 17 12:08:57.420719 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:08:57.427377 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:08:57.622557 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:08:57.628646 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:08:57.749211 systemd-resolved[216]: Detected conflict on linux IN A 172.24.4.251 Jan 17 12:08:57.749249 systemd-resolved[216]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Jan 17 12:08:58.066678 systemd-resolved[216]: Detected conflict on linux8 IN A 172.24.4.251 Jan 17 12:08:58.066722 systemd-resolved[216]: Hostname conflict, changing published hostname from 'linux8' to 'linux13'. 
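Given the DHCPv4 lease above (172.24.4.251/24 with gateway 172.24.4.1), the derived addressing is mechanical; the stdlib ipaddress module reproduces it:

    import ipaddress

    iface = ipaddress.ip_interface("172.24.4.251/24")
    print(iface.network)                    # 172.24.4.0/24
    print(iface.network.netmask)            # 255.255.255.0
    print(iface.network.broadcast_address)  # 172.24.4.255
    print(ipaddress.ip_address("172.24.4.1") in iface.network)  # True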
Jan 17 12:08:58.341274 ignition[709]: Ignition 2.19.0 Jan 17 12:08:58.341289 ignition[709]: Stage: fetch-offline Jan 17 12:08:58.341351 ignition[709]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:08:58.341363 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:08:58.343387 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:08:58.341466 ignition[709]: parsed url from cmdline: "" Jan 17 12:08:58.341470 ignition[709]: no config URL provided Jan 17 12:08:58.341477 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:08:58.341487 ignition[709]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:08:58.341493 ignition[709]: failed to fetch config: resource requires networking Jan 17 12:08:58.341796 ignition[709]: Ignition finished successfully Jan 17 12:08:58.353816 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 12:08:58.374144 ignition[718]: Ignition 2.19.0 Jan 17 12:08:58.374166 ignition[718]: Stage: fetch Jan 17 12:08:58.374518 ignition[718]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:08:58.374539 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:08:58.374706 ignition[718]: parsed url from cmdline: "" Jan 17 12:08:58.374713 ignition[718]: no config URL provided Jan 17 12:08:58.374724 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:08:58.374740 ignition[718]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:08:58.374976 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 17 12:08:58.375041 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 17 12:08:58.376763 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 17 12:08:58.556947 ignition[718]: GET result: OK Jan 17 12:08:58.557135 ignition[718]: parsing config with SHA512: 8a0a44b3101aae0262a8ce748edc88dd2b7ce3eb8165b7dcf9027857f111632c0ba567e62ef3d852beb1758449e85fdaab816dc871b99fd6599d198fb5ec189d Jan 17 12:08:58.568938 unknown[718]: fetched base config from "system" Jan 17 12:08:58.568963 unknown[718]: fetched base config from "system" Jan 17 12:08:58.569958 ignition[718]: fetch: fetch complete Jan 17 12:08:58.568975 unknown[718]: fetched user config from "openstack" Jan 17 12:08:58.569968 ignition[718]: fetch: fetch passed Jan 17 12:08:58.573265 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:08:58.570038 ignition[718]: Ignition finished successfully Jan 17 12:08:58.584118 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:08:58.618908 ignition[724]: Ignition 2.19.0 Jan 17 12:08:58.618927 ignition[724]: Stage: kargs Jan 17 12:08:58.619384 ignition[724]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:08:58.619414 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:08:58.621924 ignition[724]: kargs: kargs passed Jan 17 12:08:58.624033 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:08:58.622025 ignition[724]: Ignition finished successfully Jan 17 12:08:58.638913 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:08:58.661032 ignition[730]: Ignition 2.19.0 Jan 17 12:08:58.661051 ignition[730]: Stage: disks Jan 17 12:08:58.661304 ignition[730]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:08:58.665587 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 17 12:08:58.661317 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:08:58.668127 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:08:58.664088 ignition[730]: disks: disks passed Jan 17 12:08:58.669068 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:08:58.664281 ignition[730]: Ignition finished successfully Jan 17 12:08:58.671071 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:08:58.673379 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:08:58.675665 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:08:58.684485 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:08:58.710723 systemd-fsck[739]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:08:58.720617 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:08:58.730550 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:08:58.902566 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:08:58.903125 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:08:58.904243 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:08:58.912502 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:08:58.916561 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:08:58.918226 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:08:58.923550 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 17 12:08:58.936457 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (747) Jan 17 12:08:58.936486 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:08:58.936500 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:08:58.936513 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:08:58.933463 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:08:58.933502 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:08:58.941860 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:08:58.946872 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:08:58.956824 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:08:58.961969 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:08:59.089379 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:08:59.090474 systemd-networkd[698]: eth0: Gained IPv6LL Jan 17 12:08:59.097300 initrd-setup-root[782]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:08:59.103262 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:08:59.108884 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:08:59.229931 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:08:59.236554 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
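The fetch stage above found no config drive and pulled user_data from OpenStack's link-local metadata endpoint, then logged a SHA512 over the parsed config. A rough stdlib-only sketch of the same fetch-and-fingerprint flow (reachable only from inside the guest; this hashes the raw bytes, whereas Ignition hashes the config it parsed, so the digests need not match):

    import hashlib
    import urllib.request

    URL = "http://169.254.169.254/openstack/latest/user_data"
    with urllib.request.urlopen(URL, timeout=5) as resp:
        data = resp.read()
    print(hashlib.sha512(data).hexdigest())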
Jan 17 12:08:59.244390 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:08:59.241645 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:08:59.251514 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:08:59.290479 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:08:59.299740 ignition[863]: INFO : Ignition 2.19.0 Jan 17 12:08:59.299740 ignition[863]: INFO : Stage: mount Jan 17 12:08:59.301006 ignition[863]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:08:59.301006 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:08:59.302476 ignition[863]: INFO : mount: mount passed Jan 17 12:08:59.302476 ignition[863]: INFO : Ignition finished successfully Jan 17 12:08:59.304285 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:09:06.176631 coreos-metadata[749]: Jan 17 12:09:06.174 WARN failed to locate config-drive, using the metadata service API instead Jan 17 12:09:06.226074 coreos-metadata[749]: Jan 17 12:09:06.225 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 17 12:09:06.243622 coreos-metadata[749]: Jan 17 12:09:06.243 INFO Fetch successful Jan 17 12:09:06.243622 coreos-metadata[749]: Jan 17 12:09:06.243 INFO wrote hostname ci-4081-3-0-0-25eb0cd39e.novalocal to /sysroot/etc/hostname Jan 17 12:09:06.246996 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 17 12:09:06.247199 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 17 12:09:06.260527 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:09:06.300985 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:09:06.318407 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (881) Jan 17 12:09:06.325930 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:09:06.326017 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:09:06.330348 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:09:06.346389 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:09:06.351298 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
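In the files stage that follows, each op is a literal filesystem action under /sysroot; op(b), for example, is nothing more than a symlink whose target is expressed relative to the final root rather than the initrd's /sysroot prefix. A sketch of that one op in plain Python (illustrative only, paths taken from the log below):

    import os

    root = "/sysroot"
    target = "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
    link = os.path.join(root, "etc/extensions/kubernetes.raw")
    os.makedirs(os.path.dirname(link), exist_ok=True)
    os.symlink(target, link)  # link content names the path as seen after pivot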
Jan 17 12:09:06.398594 ignition[899]: INFO : Ignition 2.19.0 Jan 17 12:09:06.398594 ignition[899]: INFO : Stage: files Jan 17 12:09:06.402046 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:06.402046 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:09:06.402046 ignition[899]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:09:06.407155 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:09:06.407155 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:09:06.410981 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:09:06.410981 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:09:06.410981 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:09:06.410981 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:09:06.408213 unknown[899]: wrote ssh authorized keys file for user: core Jan 17 12:09:06.419784 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:09:06.419784 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:09:06.419784 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:09:06.476249 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:09:06.799456 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:09:06.799456 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:09:06.799456 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 12:09:07.396238 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 17 12:09:07.815762 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:09:07.815762 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:09:07.820296 
ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:09:07.820296 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:09:08.245233 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 17 12:09:11.246963 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:09:11.246963 ignition[899]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 17 12:09:11.277606 ignition[899]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:09:11.283190 ignition[899]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:09:11.283190 ignition[899]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 17 12:09:11.283190 ignition[899]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 17 12:09:11.283190 ignition[899]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:11.283190 ignition[899]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:09:11.283190 ignition[899]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 17 12:09:11.283190 ignition[899]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:11.283190 ignition[899]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:09:11.283190 ignition[899]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:11.283190 ignition[899]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:09:11.283190 ignition[899]: INFO : files: files passed Jan 17 12:09:11.283190 ignition[899]: INFO : 
Ignition finished successfully Jan 17 12:09:11.284707 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:09:11.297005 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:09:11.311543 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:09:11.339402 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:09:11.339525 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:09:11.352148 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:11.352148 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:11.354164 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:09:11.355816 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:11.359060 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:09:11.366651 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:09:11.401551 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:09:11.401800 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:09:11.405952 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:09:11.407314 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:09:11.409538 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:09:11.419577 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:09:11.445989 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:11.451566 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:09:11.500656 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:11.505229 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:11.507176 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:09:11.510165 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:09:11.510523 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:09:11.513832 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:09:11.515825 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:09:11.518843 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:09:11.521425 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:09:11.524019 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:09:11.527249 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:09:11.530016 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:09:11.533173 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:09:11.536126 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:09:11.539124 systemd[1]: Stopped target swap.target - Swaps. 
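The file, link, and unit writes logged during the files stage correspond to an Ignition config of roughly the following shape. This is a minimal sketch reconstructed from the paths and URLs in the log; the spec version and overall structure are assumptions, and elided unit contents are left out:

    # Hypothetical fragment of the Ignition config whose effects are logged
    # in the files stage above (paths/URLs from the log, the rest assumed).
    cat > config.ign <<'EOF'
    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
          }
        ],
        "links": [
          {
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true }
        ]
      }
    }
    EOF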
Jan 17 12:09:11.541903 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:09:11.542262 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:09:11.545258 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:11.547177 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:11.549649 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:09:11.549920 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:11.553556 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:09:11.554209 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:09:11.558116 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:09:11.558525 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:09:11.561492 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:09:11.561767 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:09:11.572921 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:09:11.574455 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:09:11.575615 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:11.585554 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:09:11.586089 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:09:11.586258 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:11.587512 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:09:11.587625 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:09:11.597859 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:09:11.598145 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:09:11.609347 ignition[951]: INFO : Ignition 2.19.0 Jan 17 12:09:11.609347 ignition[951]: INFO : Stage: umount Jan 17 12:09:11.609347 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:09:11.609347 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 12:09:11.613669 ignition[951]: INFO : umount: umount passed Jan 17 12:09:11.615202 ignition[951]: INFO : Ignition finished successfully Jan 17 12:09:11.615967 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:09:11.616076 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:09:11.618170 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:09:11.618236 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:09:11.618872 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:09:11.618913 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:09:11.619925 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:09:11.619964 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:09:11.620955 systemd[1]: Stopped target network.target - Network. Jan 17 12:09:11.621898 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:09:11.621941 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
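The 10-use-cgroupfs.conf drop-in written during the files stage above is not shown in the log. On Flatcar the conventional content points containerd at the cgroupfs variant of its stock configuration; the following is an assumed sketch, not the logged file:

    # Assumed contents of the containerd drop-in written above; Flatcar ships
    # a cgroupfs config variant that this environment variable selects.
    mkdir -p /etc/systemd/system/containerd.service.d
    cat > /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf <<'EOF'
    [Service]
    Environment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml
    EOF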
Jan 17 12:09:11.622973 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:09:11.623902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:09:11.627370 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:11.628072 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:09:11.630658 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:09:11.631236 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:09:11.631272 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:09:11.631837 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:09:11.631869 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:09:11.634457 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:09:11.634550 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:09:11.635662 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:09:11.635732 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:09:11.637298 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:09:11.638515 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:09:11.641375 systemd-networkd[698]: eth0: DHCPv6 lease lost Jan 17 12:09:11.644255 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:09:11.644397 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:09:11.646179 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:09:11.646235 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:11.653502 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:09:11.654115 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:09:11.654174 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:09:11.654878 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:11.657961 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:09:11.658059 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:09:11.660690 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:09:11.660859 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:11.668253 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:09:11.668344 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:11.672304 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:09:11.672375 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:11.673591 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:09:11.673650 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:09:11.676196 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:09:11.676246 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:09:11.677370 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:09:11.677417 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:09:11.687551 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:09:11.691198 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:09:11.691273 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:11.691926 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:09:11.691975 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:11.694407 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:09:11.694455 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:11.695311 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:09:11.695422 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:11.696055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:11.696100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:11.698383 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:09:11.699013 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:09:11.699137 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:09:11.700543 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:09:11.700648 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:09:11.702038 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:09:11.702137 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:09:11.704244 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:09:11.705533 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:09:11.705593 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:09:11.713909 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:09:11.724624 systemd[1]: Switching root. Jan 17 12:09:11.762535 systemd-journald[184]: Journal stopped Jan 17 12:09:14.059821 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 17 12:09:14.059890 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:09:14.059906 kernel: SELinux: policy capability open_perms=1 Jan 17 12:09:14.059923 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:09:14.059935 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:09:14.059947 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:09:14.059963 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:09:14.059980 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:09:14.059992 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:09:14.060004 kernel: audit: type=1403 audit(1737115752.786:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:09:14.060022 systemd[1]: Successfully loaded SELinux policy in 60.697ms. Jan 17 12:09:14.060048 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.068ms. 
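The SELinux policy capabilities printed at switch-root above come from the kernel; once the system is up they can be pulled back out of the journal:

    # Re-list the SELinux policy capability lines recorded above.
    journalctl -k | grep 'SELinux: policy capability'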
Jan 17 12:09:14.060063 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:09:14.060077 systemd[1]: Detected virtualization kvm. Jan 17 12:09:14.060090 systemd[1]: Detected architecture x86-64. Jan 17 12:09:14.060106 systemd[1]: Detected first boot. Jan 17 12:09:14.060118 systemd[1]: Hostname set to <ci-4081-3-0-0-25eb0cd39e.novalocal>. Jan 17 12:09:14.060131 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:09:14.060144 zram_generator::config[1010]: No configuration found. Jan 17 12:09:14.060158 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:09:14.060170 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:09:14.060183 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:09:14.060197 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:09:14.060212 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:09:14.060226 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:09:14.060239 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:09:14.060252 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:09:14.060265 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:09:14.060278 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:09:14.060292 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:09:14.060304 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:09:14.060317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:09:14.064900 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:09:14.064922 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:09:14.064936 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:09:14.064950 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:09:14.064963 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:09:14.064976 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:09:14.064989 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:09:14.065002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:09:14.065019 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:09:14.065034 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:09:14.065050 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:09:14.065062 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:09:14.065076 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:09:14.065089 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
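The long +PAM +AUDIT ... feature string above is systemd's compile-time configuration (the -ACL in this build is consistent with the "ACLs are not supported" tmpfiles warnings later in the log). The same string can be printed on demand:

    # Print systemd's version and the compile-time feature flags
    # shown in the boot line above.
    systemctl --version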
Jan 17 12:09:14.065102 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:09:14.065116 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:09:14.065129 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:09:14.065142 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:09:14.065154 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:09:14.065168 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:09:14.065181 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:09:14.065193 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:09:14.065207 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:14.065220 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:09:14.065235 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:09:14.065247 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:09:14.065261 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:09:14.065274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:14.065287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:09:14.065299 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:09:14.065312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:14.069101 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:09:14.069128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:14.069146 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:09:14.069159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:14.069173 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:09:14.069186 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:09:14.069199 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:09:14.069211 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:09:14.069224 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:09:14.069237 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:09:14.069251 kernel: loop: module loaded Jan 17 12:09:14.069264 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:09:14.069277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:09:14.069290 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:14.069302 kernel: fuse: init (API version 7.39) Jan 17 12:09:14.069358 systemd-journald[1114]: Collecting audit messages is disabled. 
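The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units above are all instances of one template unit that runs modprobe on the instance name. The equivalent manual operations:

    # Load the fuse module either through the systemd template unit
    # instanced above, or directly with modprobe.
    systemctl start modprobe@fuse.service
    modprobe fuse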
Jan 17 12:09:14.069392 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:09:14.069408 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:09:14.069422 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:09:14.069435 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:09:14.069449 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:09:14.069462 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:09:14.069475 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:09:14.069487 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:09:14.069499 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:09:14.069511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:14.069525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:14.069537 kernel: ACPI: bus type drm_connector registered Jan 17 12:09:14.069549 systemd-journald[1114]: Journal started Jan 17 12:09:14.069573 systemd-journald[1114]: Runtime Journal (/run/log/journal/63186cfa351c4a8bb554615e91af0186) is 8.0M, max 78.3M, 70.3M free. Jan 17 12:09:14.073930 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:09:14.075807 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:14.076194 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:14.077043 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:09:14.077252 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:09:14.078523 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:14.078669 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:14.080800 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:09:14.080956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:09:14.083795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:09:14.084628 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:09:14.087854 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:09:14.102852 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:09:14.105904 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:09:14.112551 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:09:14.114412 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:09:14.115446 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:09:14.132575 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:09:14.137487 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:09:14.138202 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:09:14.139425 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
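The runtime journal sized above lives on a tmpfs under /run until systemd-journal-flush.service moves it to persistent storage in /var/log/journal. Both sides can be inspected after boot:

    # Show current journal disk usage, then force the runtime journal
    # to be flushed to /var/log/journal (what the flush service does).
    journalctl --disk-usage
    journalctl --flush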
Jan 17 12:09:14.141036 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:09:14.144460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:09:14.155516 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:09:14.161146 systemd-journald[1114]: Time spent on flushing to /var/log/journal/63186cfa351c4a8bb554615e91af0186 is 19.654ms for 937 entries. Jan 17 12:09:14.161146 systemd-journald[1114]: System Journal (/var/log/journal/63186cfa351c4a8bb554615e91af0186) is 8.0M, max 584.8M, 576.8M free. Jan 17 12:09:14.218653 systemd-journald[1114]: Received client request to flush runtime journal. Jan 17 12:09:14.160222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:09:14.163600 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:09:14.164243 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:09:14.173582 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:09:14.182657 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:09:14.183527 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:09:14.194766 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:09:14.220179 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:09:14.221104 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:09:14.238129 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 17 12:09:14.238149 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 17 12:09:14.243589 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:09:14.250979 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:09:14.464062 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:09:14.477635 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:09:14.516624 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 17 12:09:14.516675 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jan 17 12:09:14.525936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:09:15.629632 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:09:15.636744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:09:15.660186 systemd-udevd[1195]: Using default interface naming scheme 'v255'. Jan 17 12:09:15.690794 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:09:15.704253 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:09:15.750014 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:09:15.781352 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1214) Jan 17 12:09:15.777735 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
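The "ACLs are not supported, ignoring" warnings above come from tmpfiles.d entries of the ACL types, which are skipped when systemd is built without libacl (see -ACL in the feature string earlier). A hypothetical entry of that type, only to show the shape being ignored:

    # Hypothetical tmpfiles.d ACL entry ("a+" appends a POSIX ACL); on this
    # build it would be skipped with the warning seen above.
    cat > /etc/tmpfiles.d/journal-acl.conf <<'EOF'
    a+ /var/log/journal - - - - group:adm:r-x
    EOF
    systemd-tmpfiles --create journal-acl.conf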
Jan 17 12:09:15.833031 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:09:15.876487 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:09:15.877367 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:09:15.886417 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:09:15.898430 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 17 12:09:15.908249 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:09:15.960858 systemd-networkd[1206]: lo: Link UP Jan 17 12:09:15.961182 systemd-networkd[1206]: lo: Gained carrier Jan 17 12:09:15.962753 systemd-networkd[1206]: Enumeration completed Jan 17 12:09:15.962937 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:09:15.965181 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:15.965256 systemd-networkd[1206]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:09:15.968367 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 17 12:09:15.968417 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 17 12:09:15.969051 systemd-networkd[1206]: eth0: Link UP Jan 17 12:09:15.969058 systemd-networkd[1206]: eth0: Gained carrier Jan 17 12:09:15.969077 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:09:15.972667 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:09:15.972652 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:09:15.977865 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 12:09:15.977900 kernel: [drm] features: -context_init Jan 17 12:09:15.979355 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:09:15.980218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:15.984730 kernel: [drm] number of scanouts: 1 Jan 17 12:09:15.984768 kernel: [drm] number of cap sets: 0 Jan 17 12:09:15.986384 systemd-networkd[1206]: eth0: DHCPv4 address 172.24.4.251/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jan 17 12:09:15.987512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:15.987842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:15.993350 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 17 12:09:15.996761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:16.002374 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 17 12:09:16.002428 kernel: Console: switching to colour frame buffer device 160x50 Jan 17 12:09:16.016371 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 12:09:16.018292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:09:16.018703 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:16.032475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:09:16.032845 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
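The zz-default.network match above is Flatcar's catch-all DHCP policy for otherwise unconfigured links. A minimal .network file with the same effect; the contents are assumed, not read from the image:

    # Assumed shape of a catch-all DHCP unit like the zz-default.network
    # that eth0 matched above.
    cat > /etc/systemd/network/zz-default.network <<'EOF'
    [Match]
    Name=*

    [Network]
    DHCP=yes
    EOF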
Jan 17 12:09:16.035490 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:09:16.065470 lvm[1243]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:09:16.095843 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:09:16.097848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:09:16.110483 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:09:16.123352 lvm[1248]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:09:16.129311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:09:16.157101 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:09:16.157935 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:09:16.158045 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:09:16.158069 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:09:16.158152 systemd[1]: Reached target machines.target - Containers. Jan 17 12:09:16.160428 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:09:16.168647 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:09:16.172666 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:09:16.176178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:16.178656 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:09:16.189588 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:09:16.197507 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:09:16.204702 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:09:16.231630 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:09:16.243104 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 12:09:16.275605 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:09:16.278408 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:09:16.331407 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:09:16.362420 kernel: loop1: detected capacity change from 0 to 211296 Jan 17 12:09:16.429313 kernel: loop2: detected capacity change from 0 to 8 Jan 17 12:09:16.459692 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 12:09:16.609646 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 12:09:16.639170 kernel: loop5: detected capacity change from 0 to 211296 Jan 17 12:09:16.682902 kernel: loop6: detected capacity change from 0 to 8 Jan 17 12:09:16.690419 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 12:09:16.739231 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 17 12:09:16.740298 (sd-merge)[1275]: Merged extensions into '/usr'. 
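The loop devices and sd-merge lines above are systemd-sysext overlaying the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-openstack) onto /usr. The merge state can be inspected or redone at runtime:

    # Inspect and re-apply the system extensions merged above.
    systemd-sysext status
    systemd-sysext refresh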
Jan 17 12:09:16.778434 systemd[1]: Reloading requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:09:16.779080 systemd[1]: Reloading... Jan 17 12:09:16.918898 zram_generator::config[1312]: No configuration found. Jan 17 12:09:17.093653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:17.189899 systemd[1]: Reloading finished in 409 ms. Jan 17 12:09:17.208142 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:09:17.215577 systemd[1]: Starting ensure-sysext.service... Jan 17 12:09:17.229710 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:09:17.248354 systemd[1]: Reloading requested from client PID 1364 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:09:17.248376 systemd[1]: Reloading... Jan 17 12:09:17.268072 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:09:17.268443 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:09:17.269298 systemd-tmpfiles[1365]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:09:17.269659 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 17 12:09:17.269726 systemd-tmpfiles[1365]: ACLs are not supported, ignoring. Jan 17 12:09:17.273357 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:09:17.273366 systemd-tmpfiles[1365]: Skipping /boot Jan 17 12:09:17.280475 systemd-tmpfiles[1365]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:09:17.280487 systemd-tmpfiles[1365]: Skipping /boot Jan 17 12:09:17.365367 zram_generator::config[1396]: No configuration found. Jan 17 12:09:17.376353 ldconfig[1256]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:09:17.394658 systemd-networkd[1206]: eth0: Gained IPv6LL Jan 17 12:09:17.535574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:09:17.610629 systemd[1]: Reloading finished in 361 ms. Jan 17 12:09:17.631052 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:09:17.632313 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:09:17.643638 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:09:17.682431 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:09:17.700887 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:09:17.717832 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:09:17.733501 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:09:17.752682 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:09:17.761266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
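The docker.socket warning above is about a literal /var/run path in ListenStream=. A drop-in that removes the warning by listing the socket under /run directly might look like this (a sketch; the stock unit is otherwise assumed unchanged):

    # Reset ListenStream= (an empty assignment clears the list), then point
    # the socket at /run, avoiding the legacy-path warning above.
    mkdir -p /etc/systemd/system/docker.socket.d
    cat > /etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF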
Jan 17 12:09:17.761475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:17.768797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:17.784541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:17.796561 augenrules[1485]: No rules Jan 17 12:09:17.800665 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:17.804545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:17.804717 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:17.807410 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:17.813029 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:09:17.821540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:17.821741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:17.822810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:17.822959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:17.826484 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:17.826661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:17.842639 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:17.843016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:17.847636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:17.861620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:17.874484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:17.876748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:17.887687 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:09:17.890050 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:17.895179 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:09:17.896662 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:17.899243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:17.900373 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:17.900518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:17.902865 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:17.903185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:17.906855 systemd-resolved[1473]: Positive Trust Anchors: Jan 17 12:09:17.906877 systemd-resolved[1473]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:09:17.906921 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:09:17.918914 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:17.919521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:09:17.922222 systemd-resolved[1473]: Using system hostname 'ci-4081-3-0-0-25eb0cd39e.novalocal'. Jan 17 12:09:17.926583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:09:17.931020 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:09:17.937713 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:09:17.950712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:09:17.954072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:09:17.954247 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:09:17.955152 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:09:17.961025 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:09:17.965167 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:09:17.967486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:09:17.967718 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:09:17.970042 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:09:17.970250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:09:17.972685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:09:17.972833 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:09:17.975482 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:09:17.975746 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:09:17.986977 systemd[1]: Finished ensure-sysext.service. Jan 17 12:09:17.991752 systemd[1]: Reached target network.target - Network. Jan 17 12:09:17.993386 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:09:17.995263 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:09:17.997082 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:09:17.997219 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
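The ". IN DS 20326 8 2 ..." record above is the root-zone DNSSEC trust anchor built into systemd-resolved. The same record, placed in a .positive file, acts as a local trust anchor; the file name here is an assumption:

    # Express the root DS record logged above as a local positive trust
    # anchor; resolved reads *.positive files from this directory.
    mkdir -p /etc/dnssec-trust-anchors.d
    cat > /etc/dnssec-trust-anchors.d/root.positive <<'EOF'
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
    EOF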
Jan 17 12:09:18.007569 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:09:18.011309 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:09:18.069805 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:09:18.070888 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:09:18.071610 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:09:18.072214 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:09:18.073952 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:09:18.074864 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:09:18.074935 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:09:18.078815 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:09:18.081131 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:09:18.083478 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:09:18.085675 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:09:18.089552 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:09:18.093462 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:09:18.096947 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:09:18.100259 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:09:18.102315 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:09:18.104283 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:09:18.105132 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:09:18.105169 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:09:18.105189 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:09:18.111451 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:09:18.121437 systemd-timesyncd[1535]: Contacted time server 51.68.180.114:123 (0.flatcar.pool.ntp.org). Jan 17 12:09:18.121481 systemd-timesyncd[1535]: Initial clock synchronization to Fri 2025-01-17 12:09:18.168667 UTC. Jan 17 12:09:18.121938 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:09:18.137630 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:09:18.145439 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:09:18.157572 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:09:18.160840 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:09:18.162800 jq[1545]: false Jan 17 12:09:18.166000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:18.174555 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
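The time server contact and initial clock synchronization logged above can be re-queried at any point:

    # Show the peer, stratum, and offset for the synchronization
    # performed by systemd-timesyncd above.
    timedatectl timesync-status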
Jan 17 12:09:18.183514 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:09:18.197541 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:09:18.210184 dbus-daemon[1542]: [system] SELinux support is enabled Jan 17 12:09:18.216564 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:09:18.217950 extend-filesystems[1546]: Found loop4 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found loop5 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found loop6 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found loop7 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found vda Jan 17 12:09:18.217950 extend-filesystems[1546]: Found vda1 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found vda2 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found vda3 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found usr Jan 17 12:09:18.217950 extend-filesystems[1546]: Found vda4 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found vda6 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found vda7 Jan 17 12:09:18.217950 extend-filesystems[1546]: Found vda9 Jan 17 12:09:18.217950 extend-filesystems[1546]: Checking size of /dev/vda9 Jan 17 12:09:18.375048 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jan 17 12:09:18.375086 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jan 17 12:09:18.375113 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1204) Jan 17 12:09:18.375211 extend-filesystems[1546]: Resized partition /dev/vda9 Jan 17 12:09:18.237054 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:09:18.397889 extend-filesystems[1579]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:09:18.397889 extend-filesystems[1579]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:09:18.397889 extend-filesystems[1579]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:09:18.397889 extend-filesystems[1579]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jan 17 12:09:18.260584 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:09:18.412179 extend-filesystems[1546]: Resized filesystem in /dev/vda9 Jan 17 12:09:18.265825 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:09:18.274499 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:09:18.292994 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:09:18.412985 update_engine[1577]: I20250117 12:09:18.406924 1577 main.cc:92] Flatcar Update Engine starting Jan 17 12:09:18.308196 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:09:18.433639 jq[1580]: true Jan 17 12:09:18.433807 update_engine[1577]: I20250117 12:09:18.425288 1577 update_check_scheduler.cc:74] Next update check in 6m42s Jan 17 12:09:18.329798 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:09:18.330468 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:09:18.340924 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:09:18.341225 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:09:18.347878 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
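The extend-filesystems output above is an online ext4 grow of /dev/vda9 from 1617920 to 2014203 blocks. The equivalent manual steps, with the device name taken from the log:

    # Check the block device, then grow the mounted ext4 filesystem to
    # fill it, as extend-filesystems did above.
    lsblk /dev/vda9
    resize2fs /dev/vda9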
Jan 17 12:09:18.361852 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:09:18.362817 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:09:18.385036 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:09:18.385302 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:09:18.428059 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:09:18.450299 jq[1589]: true Jan 17 12:09:18.468046 tar[1587]: linux-amd64/helm Jan 17 12:09:18.474427 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:09:18.478632 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:09:18.478663 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:09:18.479253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:09:18.479271 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:09:18.483244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:09:18.488482 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:09:18.496107 systemd-logind[1572]: New seat seat0. Jan 17 12:09:18.502215 systemd-logind[1572]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 12:09:18.502241 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:09:18.503069 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:09:18.630806 bash[1620]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:09:18.632322 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:09:18.646245 systemd[1]: Starting sshkeys.service... Jan 17 12:09:18.675683 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:09:18.689650 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:09:18.788430 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:09:18.843266 containerd[1591]: time="2025-01-17T12:09:18.843179146Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:09:18.914892 containerd[1591]: time="2025-01-17T12:09:18.914028814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917414675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917448117Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917465861Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917625330Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917645457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917704548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917719717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917933067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917952293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917968483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918609 containerd[1591]: time="2025-01-17T12:09:18.917979704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918909 containerd[1591]: time="2025-01-17T12:09:18.918056448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918909 containerd[1591]: time="2025-01-17T12:09:18.918255612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918909 containerd[1591]: time="2025-01-17T12:09:18.918427795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:09:18.918909 containerd[1591]: time="2025-01-17T12:09:18.918445999Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:09:18.918909 containerd[1591]: time="2025-01-17T12:09:18.918528624Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 12:09:18.918909 containerd[1591]: time="2025-01-17T12:09:18.918579018Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:09:18.929246 containerd[1591]: time="2025-01-17T12:09:18.929208103Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:09:18.930163 containerd[1591]: time="2025-01-17T12:09:18.930097972Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:09:18.930281 containerd[1591]: time="2025-01-17T12:09:18.930259936Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:09:18.930450 containerd[1591]: time="2025-01-17T12:09:18.930390190Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:09:18.930450 containerd[1591]: time="2025-01-17T12:09:18.930427700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:09:18.931631 containerd[1591]: time="2025-01-17T12:09:18.931188167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:09:18.932439 containerd[1591]: time="2025-01-17T12:09:18.932253274Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:09:18.933732 containerd[1591]: time="2025-01-17T12:09:18.933578099Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:09:18.933732 containerd[1591]: time="2025-01-17T12:09:18.933605280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:09:18.933732 containerd[1591]: time="2025-01-17T12:09:18.933623103Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:09:18.933732 containerd[1591]: time="2025-01-17T12:09:18.933658600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:09:18.933732 containerd[1591]: time="2025-01-17T12:09:18.933679098Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:09:18.933732 containerd[1591]: time="2025-01-17T12:09:18.933694417Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:09:18.933732 containerd[1591]: time="2025-01-17T12:09:18.933711559Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:09:18.934086 containerd[1591]: time="2025-01-17T12:09:18.933939777Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:09:18.934086 containerd[1591]: time="2025-01-17T12:09:18.933964804Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:09:18.934086 containerd[1591]: time="2025-01-17T12:09:18.933980824Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:09:18.934086 containerd[1591]: time="2025-01-17T12:09:18.934015199Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 12:09:18.934086 containerd[1591]: time="2025-01-17T12:09:18.934039494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934086 containerd[1591]: time="2025-01-17T12:09:18.934055024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934459 containerd[1591]: time="2025-01-17T12:09:18.934070503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934459 containerd[1591]: time="2025-01-17T12:09:18.934281087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934459 containerd[1591]: time="2025-01-17T12:09:18.934300304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934459 containerd[1591]: time="2025-01-17T12:09:18.934384622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934459 containerd[1591]: time="2025-01-17T12:09:18.934406292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934459 containerd[1591]: time="2025-01-17T12:09:18.934421761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934709 containerd[1591]: time="2025-01-17T12:09:18.934437090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934709 containerd[1591]: time="2025-01-17T12:09:18.934652314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934709 containerd[1591]: time="2025-01-17T12:09:18.934669716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934709 containerd[1591]: time="2025-01-17T12:09:18.934686277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934925 containerd[1591]: time="2025-01-17T12:09:18.934853822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.934925 containerd[1591]: time="2025-01-17T12:09:18.934882445Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:09:18.935021 containerd[1591]: time="2025-01-17T12:09:18.935006328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.936348 containerd[1591]: time="2025-01-17T12:09:18.936289254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.936348 containerd[1591]: time="2025-01-17T12:09:18.936311806Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:09:18.936728 containerd[1591]: time="2025-01-17T12:09:18.936606819Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:09:18.936728 containerd[1591]: time="2025-01-17T12:09:18.936659528Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:09:18.936728 containerd[1591]: time="2025-01-17T12:09:18.936675438Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:09:18.936728 containerd[1591]: time="2025-01-17T12:09:18.936692109Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:09:18.936879 containerd[1591]: time="2025-01-17T12:09:18.936706576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.937196 containerd[1591]: time="2025-01-17T12:09:18.936966354Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:09:18.937196 containerd[1591]: time="2025-01-17T12:09:18.936986311Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:09:18.937196 containerd[1591]: time="2025-01-17T12:09:18.936999085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:09:18.939050 containerd[1591]: time="2025-01-17T12:09:18.938950866Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:09:18.939251 containerd[1591]: time="2025-01-17T12:09:18.939235359Z" level=info msg="Connect containerd service" Jan 17 12:09:18.939382 containerd[1591]: time="2025-01-17T12:09:18.939366335Z" level=info msg="using legacy CRI server" Jan 17 12:09:18.939461 containerd[1591]: time="2025-01-17T12:09:18.939447217Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:09:18.939662 containerd[1591]: time="2025-01-17T12:09:18.939644947Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:09:18.941449 containerd[1591]: time="2025-01-17T12:09:18.940320615Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:09:18.941744 containerd[1591]: time="2025-01-17T12:09:18.941660858Z" level=info msg="Start subscribing containerd event" Jan 17 12:09:18.941806 containerd[1591]: time="2025-01-17T12:09:18.941765815Z" level=info msg="Start recovering state" Jan 17 12:09:18.941876 containerd[1591]: time="2025-01-17T12:09:18.941854031Z" level=info msg="Start event monitor" Jan 17 12:09:18.941927 containerd[1591]: time="2025-01-17T12:09:18.941883015Z" level=info msg="Start snapshots syncer" Jan 17 12:09:18.941927 containerd[1591]: time="2025-01-17T12:09:18.941896400Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:09:18.941927 containerd[1591]: time="2025-01-17T12:09:18.941905477Z" level=info msg="Start streaming server" Jan 17 12:09:18.943629 containerd[1591]: time="2025-01-17T12:09:18.943594174Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:09:18.943779 containerd[1591]: time="2025-01-17T12:09:18.943740378Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:09:18.945301 containerd[1591]: time="2025-01-17T12:09:18.945281789Z" level=info msg="containerd successfully booted in 0.103683s" Jan 17 12:09:18.945414 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:09:19.273174 sshd_keygen[1581]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:09:19.300720 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:09:19.316702 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:09:19.330780 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:09:19.331093 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:09:19.342033 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:09:19.364182 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:09:19.378013 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:09:19.392865 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:09:19.395914 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:09:19.448456 tar[1587]: linux-amd64/LICENSE Jan 17 12:09:19.448675 tar[1587]: linux-amd64/README.md Jan 17 12:09:19.460564 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
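
containerd starts cleanly but logs "failed to load cni during init ... no network config found in /etc/cni/net.d". That is expected on a node where no CNI plugin has been installed yet, and the CRI plugin keeps re-checking ("Start cni network conf syncer for default") so pod networking comes up once a config file appears. A minimal sketch of the same startup check, assuming only the conf directory named in the log:

# Sketch: mimic the CRI plugin's check that produced the
# "no network config found in /etc/cni/net.d" error above.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")  # NetworkPluginConfDir in the dumped config

confs = (sorted(CNI_CONF_DIR.glob("*.conf")) + sorted(CNI_CONF_DIR.glob("*.conflist"))
         if CNI_CONF_DIR.is_dir() else [])
if not confs:
    print(f"cni plugin not initialized: no network config found in {CNI_CONF_DIR}")
else:
    print("CNI configs:", ", ".join(c.name for c in confs))
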
Jan 17 12:09:20.106705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:20.128115 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:21.665975 kubelet[1673]: E0117 12:09:21.665806 1673 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:21.669061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:21.669467 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:09:24.462255 login[1658]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 17 12:09:24.463582 login[1657]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:09:24.488531 systemd-logind[1572]: New session 1 of user core. Jan 17 12:09:24.494736 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:09:24.505201 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:09:24.535989 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:09:24.548028 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:09:24.562759 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:09:24.690312 systemd[1692]: Queued start job for default target default.target. Jan 17 12:09:24.691151 systemd[1692]: Created slice app.slice - User Application Slice. Jan 17 12:09:24.691193 systemd[1692]: Reached target paths.target - Paths. Jan 17 12:09:24.691215 systemd[1692]: Reached target timers.target - Timers. Jan 17 12:09:24.695428 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:09:24.717809 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:09:24.717866 systemd[1692]: Reached target sockets.target - Sockets. Jan 17 12:09:24.717881 systemd[1692]: Reached target basic.target - Basic System. Jan 17 12:09:24.717920 systemd[1692]: Reached target default.target - Main User Target. Jan 17 12:09:24.717946 systemd[1692]: Startup finished in 145ms. Jan 17 12:09:24.718580 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:09:24.728967 systemd[1]: Started session-1.scope - Session 1 of User core. 
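
The kubelet's first start dies immediately because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style bootstrap that file is only written by kubeadm init or kubeadm join, so the exit-and-restart loop recorded here (and again at restart counters 1 through 4 further down) is the expected idle state, not a fault. The fatal condition reduces to a file-existence check, sketched here with the path taken from the error message:

# Sketch: the kubelet's fatal startup condition, reduced to a file check.
import os
import sys

CONFIG = "/var/lib/kubelet/config.yaml"

if not os.path.exists(CONFIG):
    # Mirrors: "open /var/lib/kubelet/config.yaml: no such file or directory"
    sys.exit(f"failed to load Kubelet config file {CONFIG}: no such file or directory")
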
Jan 17 12:09:25.205393 coreos-metadata[1540]: Jan 17 12:09:25.205 WARN failed to locate config-drive, using the metadata service API instead Jan 17 12:09:25.253618 coreos-metadata[1540]: Jan 17 12:09:25.253 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 17 12:09:25.448796 coreos-metadata[1540]: Jan 17 12:09:25.448 INFO Fetch successful Jan 17 12:09:25.448796 coreos-metadata[1540]: Jan 17 12:09:25.448 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 17 12:09:25.461691 coreos-metadata[1540]: Jan 17 12:09:25.461 INFO Fetch successful Jan 17 12:09:25.461691 coreos-metadata[1540]: Jan 17 12:09:25.461 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 17 12:09:25.467507 login[1658]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:09:25.473726 coreos-metadata[1540]: Jan 17 12:09:25.473 INFO Fetch successful Jan 17 12:09:25.473726 coreos-metadata[1540]: Jan 17 12:09:25.473 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 17 12:09:25.479567 systemd-logind[1572]: New session 2 of user core. Jan 17 12:09:25.485707 coreos-metadata[1540]: Jan 17 12:09:25.485 INFO Fetch successful Jan 17 12:09:25.485923 coreos-metadata[1540]: Jan 17 12:09:25.485 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 17 12:09:25.487989 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:09:25.505235 coreos-metadata[1540]: Jan 17 12:09:25.505 INFO Fetch successful Jan 17 12:09:25.505235 coreos-metadata[1540]: Jan 17 12:09:25.505 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 17 12:09:25.521386 coreos-metadata[1540]: Jan 17 12:09:25.520 INFO Fetch successful Jan 17 12:09:25.580512 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:09:25.581657 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:09:25.801008 coreos-metadata[1627]: Jan 17 12:09:25.800 WARN failed to locate config-drive, using the metadata service API instead Jan 17 12:09:25.842950 coreos-metadata[1627]: Jan 17 12:09:25.842 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 17 12:09:25.856484 coreos-metadata[1627]: Jan 17 12:09:25.856 INFO Fetch successful Jan 17 12:09:25.856484 coreos-metadata[1627]: Jan 17 12:09:25.856 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 12:09:25.871016 coreos-metadata[1627]: Jan 17 12:09:25.870 INFO Fetch successful Jan 17 12:09:25.876481 unknown[1627]: wrote ssh authorized keys file for user: core Jan 17 12:09:25.911576 update-ssh-keys[1737]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:09:25.912731 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:09:25.921699 systemd[1]: Finished sshkeys.service. Jan 17 12:09:25.931007 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:09:25.931270 systemd[1]: Startup finished in 20.200s (kernel) + 13.203s (userspace) = 33.403s. Jan 17 12:09:27.558706 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:09:27.567789 systemd[1]: Started sshd@0-172.24.4.251:22-172.24.4.1:36458.service - OpenSSH per-connection server daemon (172.24.4.1:36458). 
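
Since no config drive is attached, coreos-metadata falls back to the link-local metadata service at 169.254.169.254 and fetches hostname, instance-id, instance-type, addresses, and SSH keys over plain HTTP. A sketch of the same fetch sequence, using the EC2-compatible endpoints shown in the log (this only answers from inside a cloud instance; the timeout is illustrative):

# Sketch: query the same metadata endpoints coreos-metadata fetched above.
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data"
for path in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
    with urllib.request.urlopen(f"{BASE}/{path}", timeout=5) as resp:
        print(path, "=", resp.read().decode().strip())
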
Jan 17 12:09:28.791584 sshd[1743]: Accepted publickey for core from 172.24.4.1 port 36458 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:09:28.794801 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:28.808419 systemd-logind[1572]: New session 3 of user core. Jan 17 12:09:28.813859 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:09:29.280822 systemd[1]: Started sshd@1-172.24.4.251:22-172.24.4.1:36462.service - OpenSSH per-connection server daemon (172.24.4.1:36462). Jan 17 12:09:30.781220 sshd[1748]: Accepted publickey for core from 172.24.4.1 port 36462 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:09:30.784371 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:30.797814 systemd-logind[1572]: New session 4 of user core. Jan 17 12:09:30.804941 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:09:31.275691 sshd[1748]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:31.286922 systemd[1]: Started sshd@2-172.24.4.251:22-172.24.4.1:36466.service - OpenSSH per-connection server daemon (172.24.4.1:36466). Jan 17 12:09:31.288303 systemd[1]: sshd@1-172.24.4.251:22-172.24.4.1:36462.service: Deactivated successfully. Jan 17 12:09:31.297134 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:09:31.298962 systemd-logind[1572]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:09:31.304162 systemd-logind[1572]: Removed session 4. Jan 17 12:09:31.836720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:09:31.847690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:32.160675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:32.177045 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:32.269112 kubelet[1770]: E0117 12:09:32.269022 1770 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:32.278837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:32.279267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:09:32.637558 sshd[1753]: Accepted publickey for core from 172.24.4.1 port 36466 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:09:32.640804 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:32.651917 systemd-logind[1572]: New session 5 of user core. Jan 17 12:09:32.658983 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:09:33.232697 sshd[1753]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:33.242961 systemd[1]: Started sshd@3-172.24.4.251:22-172.24.4.1:36482.service - OpenSSH per-connection server daemon (172.24.4.1:36482). Jan 17 12:09:33.244100 systemd[1]: sshd@2-172.24.4.251:22-172.24.4.1:36466.service: Deactivated successfully. Jan 17 12:09:33.252931 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:09:33.255055 systemd-logind[1572]: Session 5 logged out. 
Waiting for processes to exit. Jan 17 12:09:33.259923 systemd-logind[1572]: Removed session 5. Jan 17 12:09:34.703430 sshd[1782]: Accepted publickey for core from 172.24.4.1 port 36482 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:09:34.706400 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:34.718462 systemd-logind[1572]: New session 6 of user core. Jan 17 12:09:34.723945 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:09:35.451724 sshd[1782]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:35.473035 systemd[1]: Started sshd@4-172.24.4.251:22-172.24.4.1:44702.service - OpenSSH per-connection server daemon (172.24.4.1:44702). Jan 17 12:09:35.476514 systemd[1]: sshd@3-172.24.4.251:22-172.24.4.1:36482.service: Deactivated successfully. Jan 17 12:09:35.484441 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:09:35.486323 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:09:35.491539 systemd-logind[1572]: Removed session 6. Jan 17 12:09:36.742234 sshd[1790]: Accepted publickey for core from 172.24.4.1 port 44702 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:09:36.745173 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:36.755144 systemd-logind[1572]: New session 7 of user core. Jan 17 12:09:36.767927 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:09:37.270972 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:09:37.272306 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:37.292021 sudo[1797]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:37.443724 sshd[1790]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:37.453870 systemd[1]: Started sshd@5-172.24.4.251:22-172.24.4.1:44708.service - OpenSSH per-connection server daemon (172.24.4.1:44708). Jan 17 12:09:37.454790 systemd[1]: sshd@4-172.24.4.251:22-172.24.4.1:44702.service: Deactivated successfully. Jan 17 12:09:37.464938 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:09:37.469260 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:09:37.472203 systemd-logind[1572]: Removed session 7. Jan 17 12:09:38.632584 sshd[1799]: Accepted publickey for core from 172.24.4.1 port 44708 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:09:38.635601 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:38.646218 systemd-logind[1572]: New session 8 of user core. Jan 17 12:09:38.658843 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:09:39.145406 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:09:39.146052 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:39.154735 sudo[1807]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:39.170142 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:09:39.171151 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:39.201578 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 17 12:09:39.206384 auditctl[1810]: No rules Jan 17 12:09:39.207171 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:09:39.207762 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:39.224476 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:09:39.272409 augenrules[1829]: No rules Jan 17 12:09:39.274919 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:09:39.279827 sudo[1806]: pam_unix(sudo:session): session closed for user root Jan 17 12:09:39.529120 sshd[1799]: pam_unix(sshd:session): session closed for user core Jan 17 12:09:39.550136 systemd[1]: Started sshd@6-172.24.4.251:22-172.24.4.1:44720.service - OpenSSH per-connection server daemon (172.24.4.1:44720). Jan 17 12:09:39.551692 systemd[1]: sshd@5-172.24.4.251:22-172.24.4.1:44708.service: Deactivated successfully. Jan 17 12:09:39.565740 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:09:39.568640 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:09:39.571500 systemd-logind[1572]: Removed session 8. Jan 17 12:09:40.776304 sshd[1835]: Accepted publickey for core from 172.24.4.1 port 44720 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:09:40.779868 sshd[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:09:40.792541 systemd-logind[1572]: New session 9 of user core. Jan 17 12:09:40.811060 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:09:41.244551 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:09:41.245194 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:09:41.864182 (dockerd)[1858]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:09:41.864578 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:09:42.335911 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:09:42.345635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:42.436515 dockerd[1858]: time="2025-01-17T12:09:42.436086993Z" level=info msg="Starting up" Jan 17 12:09:42.756503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:42.779658 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:43.086048 dockerd[1858]: time="2025-01-17T12:09:43.085881872Z" level=info msg="Loading containers: start." Jan 17 12:09:43.113795 kubelet[1889]: E0117 12:09:43.113663 1889 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:43.116680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:43.116984 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
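
The repeated "Referenced but unset environment variable evaluates to an empty string" notices (containerd, kubelet, dockerd) just mean the unit files reference optional variables from an EnvironmentFile= that nothing has populated; systemd substitutes empty strings and the command lines lose those arguments. A simplified emulation of that expansion (the command line here is a stand-in, not the actual ExecStart=):

# Sketch: systemd expands unset variables in ExecStart= to empty strings,
# which is all those "Referenced but unset" notices mean.
import os
import re

cmdline = "dockerd --host=fd:// ${DOCKER_OPTS} ${DOCKER_OPT_BIP} ${DOCKER_OPT_MTU}"
expanded = re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), cmdline)
print(expanded.split())  # unset variables simply vanish from the argv
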
Jan 17 12:09:43.231412 kernel: Initializing XFRM netlink socket Jan 17 12:09:43.353530 systemd-networkd[1206]: docker0: Link UP Jan 17 12:09:43.372974 dockerd[1858]: time="2025-01-17T12:09:43.372789338Z" level=info msg="Loading containers: done." Jan 17 12:09:43.406443 dockerd[1858]: time="2025-01-17T12:09:43.405989398Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:09:43.406443 dockerd[1858]: time="2025-01-17T12:09:43.406098020Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:09:43.406443 dockerd[1858]: time="2025-01-17T12:09:43.406193860Z" level=info msg="Daemon has completed initialization" Jan 17 12:09:43.483935 dockerd[1858]: time="2025-01-17T12:09:43.483818400Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:09:43.484453 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:09:45.676546 containerd[1591]: time="2025-01-17T12:09:45.676457758Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:09:46.531105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1191069585.mount: Deactivated successfully. Jan 17 12:09:49.425481 containerd[1591]: time="2025-01-17T12:09:49.424941980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:49.447000 containerd[1591]: time="2025-01-17T12:09:49.446875152Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140738" Jan 17 12:09:49.485825 containerd[1591]: time="2025-01-17T12:09:49.485712745Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:49.522308 containerd[1591]: time="2025-01-17T12:09:49.522179175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:49.527181 containerd[1591]: time="2025-01-17T12:09:49.525549250Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 3.849013408s" Jan 17 12:09:49.527181 containerd[1591]: time="2025-01-17T12:09:49.525627290Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:09:49.583583 containerd[1591]: time="2025-01-17T12:09:49.583406783Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:09:52.024999 containerd[1591]: time="2025-01-17T12:09:52.022888295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:52.024999 containerd[1591]: time="2025-01-17T12:09:52.025158359Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216649" Jan 17 12:09:52.029014 containerd[1591]: time="2025-01-17T12:09:52.026834160Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:52.030414 containerd[1591]: time="2025-01-17T12:09:52.030323488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:52.031772 containerd[1591]: time="2025-01-17T12:09:52.031725690Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 2.448218546s" Jan 17 12:09:52.031833 containerd[1591]: time="2025-01-17T12:09:52.031775411Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:09:52.059341 containerd[1591]: time="2025-01-17T12:09:52.059090292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:09:53.335895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:09:53.348744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:09:53.475022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:09:53.488161 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:09:53.674517 kubelet[2104]: E0117 12:09:53.672703 2104 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:09:53.675552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:09:53.675737 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
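
The pull records carry enough data to estimate effective registry throughput: kube-apiserver is 35,137,530 bytes in 3.849013408s and kube-controller-manager is 33,663,223 bytes in 2.448218546s, roughly 9 and 13 MiB/s. A worked check using only the numbers containerd logged above:

# Sketch: effective pull throughput from the sizes and durations in the log.
pulls = {
    "kube-apiserver:v1.29.13": (35_137_530, 3.849013408),
    "kube-controller-manager:v1.29.13": (33_663_223, 2.448218546),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")
# kube-apiserver ~8.7 MiB/s, kube-controller-manager ~13.1 MiB/s
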
Jan 17 12:09:53.780457 containerd[1591]: time="2025-01-17T12:09:53.780351237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:53.782350 containerd[1591]: time="2025-01-17T12:09:53.782095723Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332849" Jan 17 12:09:53.783381 containerd[1591]: time="2025-01-17T12:09:53.783345518Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:53.789995 containerd[1591]: time="2025-01-17T12:09:53.789400535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:53.789995 containerd[1591]: time="2025-01-17T12:09:53.789826804Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.730687464s" Jan 17 12:09:53.789995 containerd[1591]: time="2025-01-17T12:09:53.789875652Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:09:53.812968 containerd[1591]: time="2025-01-17T12:09:53.812939414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:09:55.183468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485587028.mount: Deactivated successfully. 
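
Each pulled image is logged twice, once by mutable repo tag and once by repo digest; the digest is simply "sha256:" plus the SHA-256 of the manifest bytes, which is what lets a client verify content after download. A small illustration of that content-addressing idea (the manifest bytes here are a stand-in, not a real OCI manifest):

# Sketch: a registry digest is content-addressed, so re-hashing what was
# received and comparing against the pinned digest detects any substitution.
import hashlib

def verify(blob: bytes, pinned_digest: str) -> bool:
    """Re-hash received bytes and compare with the pinned digest."""
    return "sha256:" + hashlib.sha256(blob).hexdigest() == pinned_digest

blob = b'{"schemaVersion": 2}'  # stand-in for real manifest bytes
pinned = "sha256:" + hashlib.sha256(blob).hexdigest()  # advertised by the registry
print(verify(blob, pinned))         # True
print(verify(blob + b" ", pinned))  # False: any tampering changes the digest
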
Jan 17 12:09:56.123440 containerd[1591]: time="2025-01-17T12:09:56.123174825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:56.126777 containerd[1591]: time="2025-01-17T12:09:56.126244284Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620949" Jan 17 12:09:56.129115 containerd[1591]: time="2025-01-17T12:09:56.128389138Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:56.133520 containerd[1591]: time="2025-01-17T12:09:56.133449870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:56.135623 containerd[1591]: time="2025-01-17T12:09:56.135548145Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.322407488s" Jan 17 12:09:56.135743 containerd[1591]: time="2025-01-17T12:09:56.135621843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:09:56.191800 containerd[1591]: time="2025-01-17T12:09:56.191708995Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:09:56.834370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185501879.mount: Deactivated successfully. 
Jan 17 12:09:58.060818 containerd[1591]: time="2025-01-17T12:09:58.060287528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:58.060818 containerd[1591]: time="2025-01-17T12:09:58.063368280Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 17 12:09:58.066422 containerd[1591]: time="2025-01-17T12:09:58.065894129Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:58.070105 containerd[1591]: time="2025-01-17T12:09:58.070049943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:58.071239 containerd[1591]: time="2025-01-17T12:09:58.071210536Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.879129383s" Jan 17 12:09:58.071414 containerd[1591]: time="2025-01-17T12:09:58.071305256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:09:58.095087 containerd[1591]: time="2025-01-17T12:09:58.095056359Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:09:59.098948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724887330.mount: Deactivated successfully. 
Jan 17 12:09:59.113399 containerd[1591]: time="2025-01-17T12:09:59.113141552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:59.116531 containerd[1591]: time="2025-01-17T12:09:59.116443433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 17 12:09:59.117785 containerd[1591]: time="2025-01-17T12:09:59.117664836Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:59.136193 containerd[1591]: time="2025-01-17T12:09:59.136082141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:09:59.138213 containerd[1591]: time="2025-01-17T12:09:59.138140602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.043024646s" Jan 17 12:09:59.139078 containerd[1591]: time="2025-01-17T12:09:59.138211941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:09:59.189625 containerd[1591]: time="2025-01-17T12:09:59.189573770Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:09:59.974929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599449509.mount: Deactivated successfully. Jan 17 12:10:03.290695 containerd[1591]: time="2025-01-17T12:10:03.290616118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:03.292361 containerd[1591]: time="2025-01-17T12:10:03.292232017Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jan 17 12:10:03.293557 containerd[1591]: time="2025-01-17T12:10:03.293515216Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:03.297229 containerd[1591]: time="2025-01-17T12:10:03.297185362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:03.298643 containerd[1591]: time="2025-01-17T12:10:03.298530507Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.108916022s" Jan 17 12:10:03.298643 containerd[1591]: time="2025-01-17T12:10:03.298562492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:10:03.546581 update_engine[1577]: I20250117 12:10:03.545511 1577 update_attempter.cc:509] Updating boot flags... 
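
One detail worth noticing across these pulls: the CRI config dumped earlier advertises SandboxImage registry.k8s.io/pause:3.8, while pause:3.9 is what gets pulled here (the version this Kubernetes release expects). The skew is harmless, though kubeadm typically flags it with a warning. A sketch that surfaces the configured sandbox image for comparison, assuming containerd's usual config path and TOML key:

# Sketch: read sandbox_image out of a containerd config to compare against
# the pause tag that was pre-pulled. Path and key are containerd defaults.
import re
from pathlib import Path

CONFIG = Path("/etc/containerd/config.toml")
text = (CONFIG.read_text() if CONFIG.exists()
        else 'sandbox_image = "registry.k8s.io/pause:3.8"')  # fallback mirrors the log
m = re.search(r'sandbox_image\s*=\s*"([^"]+)"', text)
print("configured:", m.group(1) if m else "<unset>")
print("pre-pulled: registry.k8s.io/pause:3.9")  # from the pull record above
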
Jan 17 12:10:03.595445 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2250) Jan 17 12:10:03.661665 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2253) Jan 17 12:10:03.678090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 12:10:03.687508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:10:04.030068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:04.035227 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:10:04.093855 kubelet[2275]: E0117 12:10:04.093796 2275 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:10:04.096404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:10:04.096587 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:10:08.084246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:08.106770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:10:08.138406 systemd[1]: Reloading requested from client PID 2333 ('systemctl') (unit session-9.scope)... Jan 17 12:10:08.138424 systemd[1]: Reloading... Jan 17 12:10:08.232369 zram_generator::config[2370]: No configuration found. Jan 17 12:10:08.412844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:10:08.492957 systemd[1]: Reloading finished in 354 ms. Jan 17 12:10:08.537501 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:10:08.537586 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:10:08.537861 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:08.539964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:10:08.666842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:08.682633 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:10:08.755041 kubelet[2448]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:10:08.755041 kubelet[2448]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:10:08.755041 kubelet[2448]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
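
The three deprecation warnings are the kubelet asking for --container-runtime-endpoint and --volume-plugin-dir to move into its KubeletConfiguration file (--pod-infra-container-image has no config equivalent; the sandbox image now comes from CRI). A hedged sketch of a minimal config carrying those two settings, with field names from the v1beta1 KubeletConfiguration schema and values implied by this host's logs (the containerd socket and the Flexvolume directory seen below):

# Sketch: a minimal KubeletConfiguration replacing the two deprecated flags.
# Printed rather than written, to avoid clobbering the kubeadm-managed file.
CONFIG_YAML = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""

print(CONFIG_YAML)  # merge into /var/lib/kubelet/config.yaml by hand
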
Jan 17 12:10:08.755477 kubelet[2448]: I0117 12:10:08.755406 2448 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:10:09.339241 kubelet[2448]: I0117 12:10:09.339182 2448 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:10:09.339241 kubelet[2448]: I0117 12:10:09.339212 2448 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:10:09.339527 kubelet[2448]: I0117 12:10:09.339481 2448 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:10:09.371409 kubelet[2448]: I0117 12:10:09.370230 2448 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:10:09.371409 kubelet[2448]: E0117 12:10:09.370543 2448 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.251:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:09.390124 kubelet[2448]: I0117 12:10:09.390040 2448 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:10:09.391582 kubelet[2448]: I0117 12:10:09.391549 2448 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:10:09.392170 kubelet[2448]: I0117 12:10:09.392104 2448 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:10:09.392907 kubelet[2448]: I0117 12:10:09.392508 2448 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:10:09.392907 kubelet[2448]: I0117 12:10:09.392550 2448 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:10:09.396911 kubelet[2448]: I0117 12:10:09.396674 2448 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:10:09.396911 kubelet[2448]: I0117 12:10:09.396892 2448 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:10:09.397065 
kubelet[2448]: I0117 12:10:09.396941 2448 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:10:09.397065 kubelet[2448]: I0117 12:10:09.396998 2448 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:10:09.397065 kubelet[2448]: I0117 12:10:09.397039 2448 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:10:09.401460 kubelet[2448]: W0117 12:10:09.400530 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.251:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:09.401460 kubelet[2448]: E0117 12:10:09.400634 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.251:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:09.401460 kubelet[2448]: W0117 12:10:09.401228 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-25eb0cd39e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:09.401460 kubelet[2448]: E0117 12:10:09.401324 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-25eb0cd39e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:09.401647 kubelet[2448]: I0117 12:10:09.401616 2448 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:10:09.409028 kubelet[2448]: I0117 12:10:09.408952 2448 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:10:09.414226 kubelet[2448]: W0117 12:10:09.414148 2448 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
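
The reflector errors ("dial tcp 172.24.4.251:6443: connect: connection refused", repeated below) are the normal chicken-and-egg phase of control-plane bootstrap: this kubelet is about to launch the kube-apiserver itself as a static pod from /etc/kubernetes/manifests, so its API clients simply retry until port 6443 starts answering. An equivalent wait-for-apiserver probe, sketched with the address from the log (the fixed backoff is illustrative; the real clients use jittered backoff):

# Sketch: poll the endpoint the kubelet keeps dialing until it accepts
# connections, the condition all of those reflector retries are waiting on.
import socket
import time

def wait_for_apiserver(host="172.24.4.251", port=6443, timeout=120.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:      # connection refused, unreachable, timed out, ...
            time.sleep(1.0)  # fixed 1 s retry; illustrative only
    return False

print("apiserver up:", wait_for_apiserver())
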
Jan 17 12:10:09.415608 kubelet[2448]: I0117 12:10:09.415471 2448 server.go:1256] "Started kubelet" Jan 17 12:10:09.418530 kubelet[2448]: I0117 12:10:09.418467 2448 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:10:09.431068 kubelet[2448]: I0117 12:10:09.430562 2448 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:10:09.433830 kubelet[2448]: I0117 12:10:09.433103 2448 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:10:09.436378 kubelet[2448]: E0117 12:10:09.434153 2448 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.251:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.251:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-0-25eb0cd39e.novalocal.181b79a001eba171 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-0-25eb0cd39e.novalocal,UID:ci-4081-3-0-0-25eb0cd39e.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-0-25eb0cd39e.novalocal,},FirstTimestamp:2025-01-17 12:10:09.415405937 +0000 UTC m=+0.725270261,LastTimestamp:2025-01-17 12:10:09.415405937 +0000 UTC m=+0.725270261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-0-25eb0cd39e.novalocal,}" Jan 17 12:10:09.436378 kubelet[2448]: I0117 12:10:09.434739 2448 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:10:09.436378 kubelet[2448]: I0117 12:10:09.435015 2448 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:10:09.436378 kubelet[2448]: I0117 12:10:09.436181 2448 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:10:09.439548 kubelet[2448]: I0117 12:10:09.438714 2448 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:10:09.439548 kubelet[2448]: I0117 12:10:09.438927 2448 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:10:09.439548 kubelet[2448]: E0117 12:10:09.439311 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-25eb0cd39e.novalocal?timeout=10s\": dial tcp 172.24.4.251:6443: connect: connection refused" interval="200ms" Jan 17 12:10:09.439877 kubelet[2448]: E0117 12:10:09.439835 2448 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:10:09.440919 kubelet[2448]: W0117 12:10:09.440827 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:09.441006 kubelet[2448]: E0117 12:10:09.440929 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:09.441360 kubelet[2448]: I0117 12:10:09.441288 2448 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:10:09.441575 kubelet[2448]: I0117 12:10:09.441528 2448 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:10:09.443954 kubelet[2448]: I0117 12:10:09.443913 2448 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:10:09.465246 kubelet[2448]: I0117 12:10:09.465179 2448 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:10:09.466323 kubelet[2448]: I0117 12:10:09.466293 2448 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:10:09.466323 kubelet[2448]: I0117 12:10:09.466317 2448 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:10:09.466482 kubelet[2448]: I0117 12:10:09.466376 2448 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:10:09.466482 kubelet[2448]: E0117 12:10:09.466420 2448 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:10:09.494046 kubelet[2448]: W0117 12:10:09.493985 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:09.494046 kubelet[2448]: E0117 12:10:09.494046 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:09.498556 kubelet[2448]: I0117 12:10:09.498411 2448 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:10:09.498556 kubelet[2448]: I0117 12:10:09.498430 2448 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:10:09.498556 kubelet[2448]: I0117 12:10:09.498443 2448 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:10:09.509628 kubelet[2448]: I0117 12:10:09.509575 2448 policy_none.go:49] "None policy: Start" Jan 17 12:10:09.510911 kubelet[2448]: I0117 12:10:09.510403 2448 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:10:09.510911 kubelet[2448]: I0117 12:10:09.510449 2448 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:10:09.521811 kubelet[2448]: I0117 12:10:09.521778 2448 manager.go:479] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:10:09.522268 kubelet[2448]: I0117 12:10:09.522246 2448 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:10:09.527045 kubelet[2448]: E0117 12:10:09.527008 2448 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" not found" Jan 17 12:10:09.540138 kubelet[2448]: I0117 12:10:09.540080 2448 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.540845 kubelet[2448]: E0117 12:10:09.540811 2448 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.251:6443/api/v1/nodes\": dial tcp 172.24.4.251:6443: connect: connection refused" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.567741 kubelet[2448]: I0117 12:10:09.567484 2448 topology_manager.go:215] "Topology Admit Handler" podUID="129c0057bcfa6160c0ea0e17d7668067" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.570424 kubelet[2448]: I0117 12:10:09.570200 2448 topology_manager.go:215] "Topology Admit Handler" podUID="29fa83747a74ac0c6d5cfc6bdc5a6e4f" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.573904 kubelet[2448]: I0117 12:10:09.573481 2448 topology_manager.go:215] "Topology Admit Handler" podUID="669122d89258157e68faa0b42bb1b8d5" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.640072 kubelet[2448]: E0117 12:10:09.639916 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-25eb0cd39e.novalocal?timeout=10s\": dial tcp 172.24.4.251:6443: connect: connection refused" interval="400ms" Jan 17 12:10:09.740644 kubelet[2448]: I0117 12:10:09.740527 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/129c0057bcfa6160c0ea0e17d7668067-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"129c0057bcfa6160c0ea0e17d7668067\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.740851 kubelet[2448]: I0117 12:10:09.740665 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.740851 kubelet[2448]: I0117 12:10:09.740732 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/129c0057bcfa6160c0ea0e17d7668067-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"129c0057bcfa6160c0ea0e17d7668067\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.740851 kubelet[2448]: I0117 12:10:09.740798 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/129c0057bcfa6160c0ea0e17d7668067-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"129c0057bcfa6160c0ea0e17d7668067\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.741061 kubelet[2448]: I0117 12:10:09.740877 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.741061 kubelet[2448]: I0117 12:10:09.740943 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.741061 kubelet[2448]: I0117 12:10:09.740998 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.741061 kubelet[2448]: I0117 12:10:09.741056 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.741300 kubelet[2448]: I0117 12:10:09.741117 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/669122d89258157e68faa0b42bb1b8d5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"669122d89258157e68faa0b42bb1b8d5\") " pod="kube-system/kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.745058 kubelet[2448]: I0117 12:10:09.744529 2448 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.745228 kubelet[2448]: E0117 12:10:09.745194 2448 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.251:6443/api/v1/nodes\": dial tcp 172.24.4.251:6443: connect: connection refused" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:09.879323 containerd[1591]: time="2025-01-17T12:10:09.879220842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal,Uid:129c0057bcfa6160c0ea0e17d7668067,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:09.885273 containerd[1591]: time="2025-01-17T12:10:09.885219535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal,Uid:29fa83747a74ac0c6d5cfc6bdc5a6e4f,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:09.902818 containerd[1591]: 
time="2025-01-17T12:10:09.902216894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal,Uid:669122d89258157e68faa0b42bb1b8d5,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:10.041153 kubelet[2448]: E0117 12:10:10.041098 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-25eb0cd39e.novalocal?timeout=10s\": dial tcp 172.24.4.251:6443: connect: connection refused" interval="800ms" Jan 17 12:10:10.149606 kubelet[2448]: I0117 12:10:10.149471 2448 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:10.150227 kubelet[2448]: E0117 12:10:10.150170 2448 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.251:6443/api/v1/nodes\": dial tcp 172.24.4.251:6443: connect: connection refused" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:10.253405 kubelet[2448]: W0117 12:10:10.253116 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:10.253405 kubelet[2448]: E0117 12:10:10.253237 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:10.368986 kubelet[2448]: W0117 12:10:10.368858 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.251:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:10.368986 kubelet[2448]: E0117 12:10:10.368974 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.251:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:10.597726 kubelet[2448]: W0117 12:10:10.597551 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-25eb0cd39e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:10.597726 kubelet[2448]: E0117 12:10:10.597694 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-25eb0cd39e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:10.643962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103789586.mount: Deactivated successfully. 
Jan 17 12:10:10.731451 containerd[1591]: time="2025-01-17T12:10:10.731155320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:10:10.737369 containerd[1591]: time="2025-01-17T12:10:10.737270200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 17 12:10:10.739464 containerd[1591]: time="2025-01-17T12:10:10.739069171Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:10:10.743948 containerd[1591]: time="2025-01-17T12:10:10.743891846Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:10:10.746821 containerd[1591]: time="2025-01-17T12:10:10.746760130Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:10:10.748929 containerd[1591]: time="2025-01-17T12:10:10.748735152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:10:10.752369 containerd[1591]: time="2025-01-17T12:10:10.751523267Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:10:10.759776 containerd[1591]: time="2025-01-17T12:10:10.759716082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:10:10.763822 containerd[1591]: time="2025-01-17T12:10:10.763739601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 861.342086ms" Jan 17 12:10:10.765891 containerd[1591]: time="2025-01-17T12:10:10.765816233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 886.487096ms" Jan 17 12:10:10.768025 containerd[1591]: time="2025-01-17T12:10:10.767946361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 882.640284ms" Jan 17 12:10:10.828368 kubelet[2448]: W0117 12:10:10.826085 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 
12:10:10.828368 kubelet[2448]: E0117 12:10:10.826175 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.251:6443: connect: connection refused Jan 17 12:10:10.841922 kubelet[2448]: E0117 12:10:10.841867 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-25eb0cd39e.novalocal?timeout=10s\": dial tcp 172.24.4.251:6443: connect: connection refused" interval="1.6s" Jan 17 12:10:10.953050 kubelet[2448]: I0117 12:10:10.952661 2448 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:10.953050 kubelet[2448]: E0117 12:10:10.952972 2448 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.251:6443/api/v1/nodes\": dial tcp 172.24.4.251:6443: connect: connection refused" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:11.074820 containerd[1591]: time="2025-01-17T12:10:11.072643672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:11.074820 containerd[1591]: time="2025-01-17T12:10:11.072755052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:11.074820 containerd[1591]: time="2025-01-17T12:10:11.072816875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:11.074820 containerd[1591]: time="2025-01-17T12:10:11.073021600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:11.078469 containerd[1591]: time="2025-01-17T12:10:11.078123443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:11.078469 containerd[1591]: time="2025-01-17T12:10:11.078296596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:11.078469 containerd[1591]: time="2025-01-17T12:10:11.078396053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:11.079033 containerd[1591]: time="2025-01-17T12:10:11.078716346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:11.085739 containerd[1591]: time="2025-01-17T12:10:11.085444287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:11.085739 containerd[1591]: time="2025-01-17T12:10:11.085511801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:11.085739 containerd[1591]: time="2025-01-17T12:10:11.085526850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:11.085739 containerd[1591]: time="2025-01-17T12:10:11.085675264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:11.163643 containerd[1591]: time="2025-01-17T12:10:11.163047572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal,Uid:669122d89258157e68faa0b42bb1b8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e9453fc257ccb6b2f867a0a4b24a57fe746ae39134f7437ccef5eaba745f637\"" Jan 17 12:10:11.171730 containerd[1591]: time="2025-01-17T12:10:11.171512750Z" level=info msg="CreateContainer within sandbox \"2e9453fc257ccb6b2f867a0a4b24a57fe746ae39134f7437ccef5eaba745f637\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:10:11.193412 containerd[1591]: time="2025-01-17T12:10:11.193243192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal,Uid:129c0057bcfa6160c0ea0e17d7668067,Namespace:kube-system,Attempt:0,} returns sandbox id \"d43730fc121ea5de68987232ce295adc8aac9abf9ceb6383f8a361fac11ea228\"" Jan 17 12:10:11.199738 containerd[1591]: time="2025-01-17T12:10:11.199670168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal,Uid:29fa83747a74ac0c6d5cfc6bdc5a6e4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ffe060b52daba6f0eefa49e874f20b7aa2816d2304cf10024941840438a4239\"" Jan 17 12:10:11.207365 containerd[1591]: time="2025-01-17T12:10:11.207259243Z" level=info msg="CreateContainer within sandbox \"d43730fc121ea5de68987232ce295adc8aac9abf9ceb6383f8a361fac11ea228\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:10:11.208404 containerd[1591]: time="2025-01-17T12:10:11.208297767Z" level=info msg="CreateContainer within sandbox \"7ffe060b52daba6f0eefa49e874f20b7aa2816d2304cf10024941840438a4239\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:10:11.216518 containerd[1591]: time="2025-01-17T12:10:11.216382533Z" level=info msg="CreateContainer within sandbox \"2e9453fc257ccb6b2f867a0a4b24a57fe746ae39134f7437ccef5eaba745f637\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"face5db40878ac287f2b3b2e865d911810243ee0b8fb106b3c5229cf5e40416c\"" Jan 17 12:10:11.217494 containerd[1591]: time="2025-01-17T12:10:11.217010836Z" level=info msg="StartContainer for \"face5db40878ac287f2b3b2e865d911810243ee0b8fb106b3c5229cf5e40416c\"" Jan 17 12:10:11.251980 containerd[1591]: time="2025-01-17T12:10:11.251901355Z" level=info msg="CreateContainer within sandbox \"d43730fc121ea5de68987232ce295adc8aac9abf9ceb6383f8a361fac11ea228\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b9774cd13ca3c90efb64a7ce0f394ff5ab6aa47d4da9f621102f45f27783c2da\"" Jan 17 12:10:11.253062 containerd[1591]: time="2025-01-17T12:10:11.253012754Z" level=info msg="StartContainer for \"b9774cd13ca3c90efb64a7ce0f394ff5ab6aa47d4da9f621102f45f27783c2da\"" Jan 17 12:10:11.260744 containerd[1591]: time="2025-01-17T12:10:11.260320382Z" level=info msg="CreateContainer within sandbox \"7ffe060b52daba6f0eefa49e874f20b7aa2816d2304cf10024941840438a4239\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1aafb147a2e01874d8b2525a135a1864bf3c89863562e69a1e072184b59e3f71\"" Jan 17 12:10:11.264927 containerd[1591]: 
time="2025-01-17T12:10:11.264804703Z" level=info msg="StartContainer for \"1aafb147a2e01874d8b2525a135a1864bf3c89863562e69a1e072184b59e3f71\"" Jan 17 12:10:11.347650 containerd[1591]: time="2025-01-17T12:10:11.345790460Z" level=info msg="StartContainer for \"face5db40878ac287f2b3b2e865d911810243ee0b8fb106b3c5229cf5e40416c\" returns successfully" Jan 17 12:10:11.374678 containerd[1591]: time="2025-01-17T12:10:11.374630729Z" level=info msg="StartContainer for \"b9774cd13ca3c90efb64a7ce0f394ff5ab6aa47d4da9f621102f45f27783c2da\" returns successfully" Jan 17 12:10:11.393889 containerd[1591]: time="2025-01-17T12:10:11.393509450Z" level=info msg="StartContainer for \"1aafb147a2e01874d8b2525a135a1864bf3c89863562e69a1e072184b59e3f71\" returns successfully" Jan 17 12:10:12.557406 kubelet[2448]: I0117 12:10:12.555735 2448 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:12.979748 kubelet[2448]: I0117 12:10:12.979711 2448 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:13.052352 kubelet[2448]: E0117 12:10:13.050427 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 17 12:10:13.401338 kubelet[2448]: I0117 12:10:13.401295 2448 apiserver.go:52] "Watching apiserver" Jan 17 12:10:13.440340 kubelet[2448]: I0117 12:10:13.439867 2448 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:10:13.530928 kubelet[2448]: E0117 12:10:13.530887 2448 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:14.699562 kubelet[2448]: W0117 12:10:14.698856 2448 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:10:16.276234 systemd[1]: Reloading requested from client PID 2719 ('systemctl') (unit session-9.scope)... Jan 17 12:10:16.276251 systemd[1]: Reloading... Jan 17 12:10:16.367374 zram_generator::config[2756]: No configuration found. Jan 17 12:10:16.526431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:10:16.620320 systemd[1]: Reloading finished in 343 ms. Jan 17 12:10:16.656406 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:10:16.668155 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:10:16.668474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:16.674718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:10:16.832388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:10:16.839692 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:10:16.895786 kubelet[2831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:10:16.895786 kubelet[2831]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:10:16.895786 kubelet[2831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:10:16.896348 kubelet[2831]: I0117 12:10:16.895753 2831 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:10:16.904381 kubelet[2831]: I0117 12:10:16.903071 2831 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:10:16.904381 kubelet[2831]: I0117 12:10:16.903116 2831 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:10:16.904807 kubelet[2831]: I0117 12:10:16.904677 2831 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:10:16.908011 kubelet[2831]: I0117 12:10:16.907975 2831 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:10:16.915480 kubelet[2831]: I0117 12:10:16.915447 2831 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:10:16.927282 kubelet[2831]: I0117 12:10:16.927254 2831 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:10:16.927773 kubelet[2831]: I0117 12:10:16.927748 2831 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:10:16.928105 kubelet[2831]: I0117 12:10:16.928056 2831 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:10:16.929211 kubelet[2831]: I0117 12:10:16.929190 2831 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:10:16.929211 kubelet[2831]: I0117 12:10:16.929216 2831 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:10:16.929296 
kubelet[2831]: I0117 12:10:16.929254 2831 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:10:16.929414 kubelet[2831]: I0117 12:10:16.929400 2831 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:10:16.929468 kubelet[2831]: I0117 12:10:16.929420 2831 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:10:16.929468 kubelet[2831]: I0117 12:10:16.929445 2831 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:10:16.929468 kubelet[2831]: I0117 12:10:16.929460 2831 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:10:16.935791 kubelet[2831]: I0117 12:10:16.935775 2831 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:10:16.936197 kubelet[2831]: I0117 12:10:16.936116 2831 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:10:16.936661 kubelet[2831]: I0117 12:10:16.936650 2831 server.go:1256] "Started kubelet" Jan 17 12:10:16.939129 kubelet[2831]: I0117 12:10:16.939066 2831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:10:16.944739 kubelet[2831]: I0117 12:10:16.944599 2831 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:10:16.945868 kubelet[2831]: I0117 12:10:16.945853 2831 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:10:16.947359 kubelet[2831]: I0117 12:10:16.947071 2831 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:10:16.947359 kubelet[2831]: I0117 12:10:16.947239 2831 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:10:16.948883 kubelet[2831]: I0117 12:10:16.948859 2831 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:10:16.950927 kubelet[2831]: I0117 12:10:16.950913 2831 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:10:16.951213 kubelet[2831]: I0117 12:10:16.951183 2831 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:10:16.954409 kubelet[2831]: I0117 12:10:16.954395 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:10:16.956148 kubelet[2831]: I0117 12:10:16.955877 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:10:16.956148 kubelet[2831]: I0117 12:10:16.955904 2831 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:10:16.956148 kubelet[2831]: I0117 12:10:16.955919 2831 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:10:16.956148 kubelet[2831]: E0117 12:10:16.955961 2831 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:10:16.963520 kubelet[2831]: I0117 12:10:16.963500 2831 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:10:16.964478 kubelet[2831]: I0117 12:10:16.964427 2831 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:10:16.967236 kubelet[2831]: E0117 12:10:16.967220 2831 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:10:16.973527 kubelet[2831]: I0117 12:10:16.972962 2831 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:10:17.055732 kubelet[2831]: I0117 12:10:17.055701 2831 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.059236 kubelet[2831]: E0117 12:10:17.056788 2831 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:10:17.060525 kubelet[2831]: I0117 12:10:17.060424 2831 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:10:17.060525 kubelet[2831]: I0117 12:10:17.060449 2831 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:10:17.060525 kubelet[2831]: I0117 12:10:17.060465 2831 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:10:17.060641 kubelet[2831]: I0117 12:10:17.060617 2831 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:10:17.060641 kubelet[2831]: I0117 12:10:17.060639 2831 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:10:17.060699 kubelet[2831]: I0117 12:10:17.060646 2831 policy_none.go:49] "None policy: Start" Jan 17 12:10:17.061592 kubelet[2831]: I0117 12:10:17.061500 2831 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:10:17.061592 kubelet[2831]: I0117 12:10:17.061526 2831 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:10:17.062014 kubelet[2831]: I0117 12:10:17.061964 2831 state_mem.go:75] "Updated machine memory state" Jan 17 12:10:17.063170 sudo[2862]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 12:10:17.063568 sudo[2862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 12:10:17.067312 kubelet[2831]: I0117 12:10:17.064819 2831 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:10:17.068345 kubelet[2831]: I0117 12:10:17.068304 2831 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:10:17.069123 kubelet[2831]: I0117 12:10:17.068992 2831 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.069123 kubelet[2831]: I0117 12:10:17.069057 2831 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.258896 kubelet[2831]: I0117 12:10:17.258801 2831 topology_manager.go:215] "Topology Admit Handler" podUID="129c0057bcfa6160c0ea0e17d7668067" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.261370 kubelet[2831]: I0117 12:10:17.259111 2831 topology_manager.go:215] "Topology Admit Handler" podUID="29fa83747a74ac0c6d5cfc6bdc5a6e4f" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.261370 kubelet[2831]: I0117 12:10:17.259202 2831 topology_manager.go:215] "Topology Admit Handler" podUID="669122d89258157e68faa0b42bb1b8d5" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.274517 kubelet[2831]: W0117 12:10:17.274128 2831 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:10:17.274843 kubelet[2831]: W0117 
12:10:17.274624 2831 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:10:17.274843 kubelet[2831]: W0117 12:10:17.274623 2831 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:10:17.274843 kubelet[2831]: E0117 12:10:17.274675 2831 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.356393 kubelet[2831]: I0117 12:10:17.356359 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.356510 kubelet[2831]: I0117 12:10:17.356408 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.356510 kubelet[2831]: I0117 12:10:17.356433 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/129c0057bcfa6160c0ea0e17d7668067-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"129c0057bcfa6160c0ea0e17d7668067\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.356510 kubelet[2831]: I0117 12:10:17.356458 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/129c0057bcfa6160c0ea0e17d7668067-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"129c0057bcfa6160c0ea0e17d7668067\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.356510 kubelet[2831]: I0117 12:10:17.356494 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/129c0057bcfa6160c0ea0e17d7668067-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"129c0057bcfa6160c0ea0e17d7668067\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.356614 kubelet[2831]: I0117 12:10:17.356520 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.356614 kubelet[2831]: I0117 12:10:17.356543 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.356614 kubelet[2831]: I0117 12:10:17.356569 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29fa83747a74ac0c6d5cfc6bdc5a6e4f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"29fa83747a74ac0c6d5cfc6bdc5a6e4f\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.356614 kubelet[2831]: I0117 12:10:17.356593 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/669122d89258157e68faa0b42bb1b8d5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal\" (UID: \"669122d89258157e68faa0b42bb1b8d5\") " pod="kube-system/kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:17.623461 sudo[2862]: pam_unix(sudo:session): session closed for user root Jan 17 12:10:17.930434 kubelet[2831]: I0117 12:10:17.930349 2831 apiserver.go:52] "Watching apiserver" Jan 17 12:10:17.951910 kubelet[2831]: I0117 12:10:17.951812 2831 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:10:18.022481 kubelet[2831]: W0117 12:10:18.022088 2831 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:10:18.022481 kubelet[2831]: E0117 12:10:18.022144 2831 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" Jan 17 12:10:18.044672 kubelet[2831]: I0117 12:10:18.044646 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-0-25eb0cd39e.novalocal" podStartSLOduration=4.044603247 podStartE2EDuration="4.044603247s" podCreationTimestamp="2025-01-17 12:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:18.044407998 +0000 UTC m=+1.198707325" watchObservedRunningTime="2025-01-17 12:10:18.044603247 +0000 UTC m=+1.198902564" Jan 17 12:10:18.066549 kubelet[2831]: I0117 12:10:18.066006 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-0-25eb0cd39e.novalocal" podStartSLOduration=1.065765333 podStartE2EDuration="1.065765333s" podCreationTimestamp="2025-01-17 12:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:18.054771975 +0000 UTC m=+1.209071292" watchObservedRunningTime="2025-01-17 12:10:18.065765333 +0000 UTC m=+1.220064660" Jan 17 12:10:18.067122 kubelet[2831]: I0117 12:10:18.066310 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-0-25eb0cd39e.novalocal" podStartSLOduration=1.066285153 podStartE2EDuration="1.066285153s" podCreationTimestamp="2025-01-17 12:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:18.064346499 +0000 UTC m=+1.218645836" watchObservedRunningTime="2025-01-17 12:10:18.066285153 +0000 UTC m=+1.220584480" Jan 17 12:10:20.669600 sudo[1842]: pam_unix(sudo:session): session closed for user root Jan 17 12:10:20.951800 sshd[1835]: pam_unix(sshd:session): session closed for user core Jan 17 12:10:20.958032 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:10:20.959169 systemd[1]: sshd@6-172.24.4.251:22-172.24.4.1:44720.service: Deactivated successfully. Jan 17 12:10:20.969219 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:10:20.974062 systemd-logind[1572]: Removed session 9. Jan 17 12:10:29.094938 kubelet[2831]: I0117 12:10:29.094896 2831 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:10:29.098471 containerd[1591]: time="2025-01-17T12:10:29.096311563Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:10:29.099146 kubelet[2831]: I0117 12:10:29.096649 2831 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:10:29.932418 kubelet[2831]: I0117 12:10:29.928433 2831 topology_manager.go:215] "Topology Admit Handler" podUID="f0603dc9-e71a-4ddb-aca2-203fcba6a641" podNamespace="kube-system" podName="kube-proxy-n88zw" Jan 17 12:10:29.955640 kubelet[2831]: I0117 12:10:29.954446 2831 topology_manager.go:215] "Topology Admit Handler" podUID="37103937-d9e7-479c-862b-5bcc5bd85c19" podNamespace="kube-system" podName="cilium-76pbp" Jan 17 12:10:30.033343 kubelet[2831]: I0117 12:10:30.033277 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0603dc9-e71a-4ddb-aca2-203fcba6a641-lib-modules\") pod \"kube-proxy-n88zw\" (UID: \"f0603dc9-e71a-4ddb-aca2-203fcba6a641\") " pod="kube-system/kube-proxy-n88zw" Jan 17 12:10:30.033343 kubelet[2831]: I0117 12:10:30.033344 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0603dc9-e71a-4ddb-aca2-203fcba6a641-kube-proxy\") pod \"kube-proxy-n88zw\" (UID: \"f0603dc9-e71a-4ddb-aca2-203fcba6a641\") " pod="kube-system/kube-proxy-n88zw" Jan 17 12:10:30.033532 kubelet[2831]: I0117 12:10:30.033377 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5ztg\" (UniqueName: \"kubernetes.io/projected/f0603dc9-e71a-4ddb-aca2-203fcba6a641-kube-api-access-h5ztg\") pod \"kube-proxy-n88zw\" (UID: \"f0603dc9-e71a-4ddb-aca2-203fcba6a641\") " pod="kube-system/kube-proxy-n88zw" Jan 17 12:10:30.033532 kubelet[2831]: I0117 12:10:30.033404 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0603dc9-e71a-4ddb-aca2-203fcba6a641-xtables-lock\") pod \"kube-proxy-n88zw\" (UID: \"f0603dc9-e71a-4ddb-aca2-203fcba6a641\") " pod="kube-system/kube-proxy-n88zw" Jan 17 12:10:30.134137 kubelet[2831]: I0117 12:10:30.133968 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-config-path\") pod \"cilium-76pbp\" (UID: 
\"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.134137 kubelet[2831]: I0117 12:10:30.134109 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-host-proc-sys-kernel\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.135452 kubelet[2831]: I0117 12:10:30.134201 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-bpf-maps\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.135452 kubelet[2831]: I0117 12:10:30.134742 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37103937-d9e7-479c-862b-5bcc5bd85c19-hubble-tls\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.135452 kubelet[2831]: I0117 12:10:30.134955 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-lib-modules\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.135452 kubelet[2831]: I0117 12:10:30.135022 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4nv7\" (UniqueName: \"kubernetes.io/projected/37103937-d9e7-479c-862b-5bcc5bd85c19-kube-api-access-l4nv7\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.135452 kubelet[2831]: I0117 12:10:30.135103 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-run\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.135452 kubelet[2831]: I0117 12:10:30.135170 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-xtables-lock\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.137411 kubelet[2831]: I0117 12:10:30.135227 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-host-proc-sys-net\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.137411 kubelet[2831]: I0117 12:10:30.135282 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37103937-d9e7-479c-862b-5bcc5bd85c19-clustermesh-secrets\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.137411 kubelet[2831]: I0117 12:10:30.135375 2831 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-etc-cni-netd\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.137411 kubelet[2831]: I0117 12:10:30.135434 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-cgroup\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.137411 kubelet[2831]: I0117 12:10:30.135488 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cni-path\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.137411 kubelet[2831]: I0117 12:10:30.135554 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-hostproc\") pod \"cilium-76pbp\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " pod="kube-system/cilium-76pbp" Jan 17 12:10:30.216737 kubelet[2831]: I0117 12:10:30.216621 2831 topology_manager.go:215] "Topology Admit Handler" podUID="408f15eb-f99b-4824-abbd-82aeb2cb028a" podNamespace="kube-system" podName="cilium-operator-5cc964979-k4xxk" Jan 17 12:10:30.271249 containerd[1591]: time="2025-01-17T12:10:30.271185307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n88zw,Uid:f0603dc9-e71a-4ddb-aca2-203fcba6a641,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:30.281374 containerd[1591]: time="2025-01-17T12:10:30.281209733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-76pbp,Uid:37103937-d9e7-479c-862b-5bcc5bd85c19,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:30.321292 containerd[1591]: time="2025-01-17T12:10:30.321204084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:30.321292 containerd[1591]: time="2025-01-17T12:10:30.321259438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:30.321858 containerd[1591]: time="2025-01-17T12:10:30.320849197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:30.321858 containerd[1591]: time="2025-01-17T12:10:30.321560171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:30.321858 containerd[1591]: time="2025-01-17T12:10:30.321581041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:30.321858 containerd[1591]: time="2025-01-17T12:10:30.321686873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:30.321975 containerd[1591]: time="2025-01-17T12:10:30.321832710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:30.322107 containerd[1591]: time="2025-01-17T12:10:30.322068590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:30.337669 kubelet[2831]: I0117 12:10:30.337460 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tlnh\" (UniqueName: \"kubernetes.io/projected/408f15eb-f99b-4824-abbd-82aeb2cb028a-kube-api-access-7tlnh\") pod \"cilium-operator-5cc964979-k4xxk\" (UID: \"408f15eb-f99b-4824-abbd-82aeb2cb028a\") " pod="kube-system/cilium-operator-5cc964979-k4xxk" Jan 17 12:10:30.337927 kubelet[2831]: I0117 12:10:30.337914 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/408f15eb-f99b-4824-abbd-82aeb2cb028a-cilium-config-path\") pod \"cilium-operator-5cc964979-k4xxk\" (UID: \"408f15eb-f99b-4824-abbd-82aeb2cb028a\") " pod="kube-system/cilium-operator-5cc964979-k4xxk" Jan 17 12:10:30.377842 containerd[1591]: time="2025-01-17T12:10:30.377802634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n88zw,Uid:f0603dc9-e71a-4ddb-aca2-203fcba6a641,Namespace:kube-system,Attempt:0,} returns sandbox id \"19c308cfcf67406472e72583f71843423163b084b5dee30b899369bbfde4c9bd\"" Jan 17 12:10:30.380385 containerd[1591]: time="2025-01-17T12:10:30.380191004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-76pbp,Uid:37103937-d9e7-479c-862b-5bcc5bd85c19,Namespace:kube-system,Attempt:0,} returns sandbox id \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\"" Jan 17 12:10:30.382298 containerd[1591]: time="2025-01-17T12:10:30.382268301Z" level=info msg="CreateContainer within sandbox \"19c308cfcf67406472e72583f71843423163b084b5dee30b899369bbfde4c9bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:10:30.383036 containerd[1591]: time="2025-01-17T12:10:30.383004915Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:10:30.405262 containerd[1591]: time="2025-01-17T12:10:30.405222670Z" level=info msg="CreateContainer within sandbox \"19c308cfcf67406472e72583f71843423163b084b5dee30b899369bbfde4c9bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b350755dbc63dbebedfc92f1c3c556efc17e0823ed42fe4b7fcc32378b98fc32\"" Jan 17 12:10:30.406742 containerd[1591]: time="2025-01-17T12:10:30.406717758Z" level=info msg="StartContainer for \"b350755dbc63dbebedfc92f1c3c556efc17e0823ed42fe4b7fcc32378b98fc32\"" Jan 17 12:10:30.468146 containerd[1591]: time="2025-01-17T12:10:30.468038574Z" level=info msg="StartContainer for \"b350755dbc63dbebedfc92f1c3c556efc17e0823ed42fe4b7fcc32378b98fc32\" returns successfully" Jan 17 12:10:30.522745 containerd[1591]: time="2025-01-17T12:10:30.522706728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-k4xxk,Uid:408f15eb-f99b-4824-abbd-82aeb2cb028a,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:30.556850 containerd[1591]: time="2025-01-17T12:10:30.556724372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:30.556850 containerd[1591]: time="2025-01-17T12:10:30.556778415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:30.556850 containerd[1591]: time="2025-01-17T12:10:30.556797812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:30.557200 containerd[1591]: time="2025-01-17T12:10:30.556887122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:30.612021 containerd[1591]: time="2025-01-17T12:10:30.611896235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-k4xxk,Uid:408f15eb-f99b-4824-abbd-82aeb2cb028a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2\"" Jan 17 12:10:31.067236 kubelet[2831]: I0117 12:10:31.066908 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n88zw" podStartSLOduration=2.066839605 podStartE2EDuration="2.066839605s" podCreationTimestamp="2025-01-17 12:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:31.065902411 +0000 UTC m=+14.220201758" watchObservedRunningTime="2025-01-17 12:10:31.066839605 +0000 UTC m=+14.221138953" Jan 17 12:10:36.810501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845458743.mount: Deactivated successfully. Jan 17 12:10:39.160599 containerd[1591]: time="2025-01-17T12:10:39.159617919Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:39.161154 containerd[1591]: time="2025-01-17T12:10:39.161119240Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735335" Jan 17 12:10:39.162064 containerd[1591]: time="2025-01-17T12:10:39.162010362Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:39.163623 containerd[1591]: time="2025-01-17T12:10:39.163558922Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.780515364s" Jan 17 12:10:39.163623 containerd[1591]: time="2025-01-17T12:10:39.163597595Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 12:10:39.166836 containerd[1591]: time="2025-01-17T12:10:39.166698773Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:10:39.168433 containerd[1591]: time="2025-01-17T12:10:39.168090937Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:10:39.265112 
containerd[1591]: time="2025-01-17T12:10:39.265009964Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\"" Jan 17 12:10:39.266427 containerd[1591]: time="2025-01-17T12:10:39.265722817Z" level=info msg="StartContainer for \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\"" Jan 17 12:10:39.361600 containerd[1591]: time="2025-01-17T12:10:39.361537979Z" level=info msg="StartContainer for \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\" returns successfully" Jan 17 12:10:40.188397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9-rootfs.mount: Deactivated successfully. Jan 17 12:10:40.642381 containerd[1591]: time="2025-01-17T12:10:40.642072191Z" level=info msg="shim disconnected" id=107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9 namespace=k8s.io Jan 17 12:10:40.642381 containerd[1591]: time="2025-01-17T12:10:40.642180848Z" level=warning msg="cleaning up after shim disconnected" id=107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9 namespace=k8s.io Jan 17 12:10:40.642381 containerd[1591]: time="2025-01-17T12:10:40.642203621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:41.103311 containerd[1591]: time="2025-01-17T12:10:41.101046368Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:10:41.156980 containerd[1591]: time="2025-01-17T12:10:41.156500310Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\"" Jan 17 12:10:41.157421 containerd[1591]: time="2025-01-17T12:10:41.157378267Z" level=info msg="StartContainer for \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\"" Jan 17 12:10:41.216260 containerd[1591]: time="2025-01-17T12:10:41.216163380Z" level=info msg="StartContainer for \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\" returns successfully" Jan 17 12:10:41.225465 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:10:41.226419 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:10:41.226493 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:10:41.233985 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:10:41.252605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8-rootfs.mount: Deactivated successfully. Jan 17 12:10:41.254511 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 12:10:41.261778 containerd[1591]: time="2025-01-17T12:10:41.261623971Z" level=info msg="shim disconnected" id=48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8 namespace=k8s.io Jan 17 12:10:41.261898 containerd[1591]: time="2025-01-17T12:10:41.261790638Z" level=warning msg="cleaning up after shim disconnected" id=48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8 namespace=k8s.io Jan 17 12:10:41.261898 containerd[1591]: time="2025-01-17T12:10:41.261808862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:41.791143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499942340.mount: Deactivated successfully. Jan 17 12:10:42.102847 containerd[1591]: time="2025-01-17T12:10:42.102783744Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:10:42.157883 containerd[1591]: time="2025-01-17T12:10:42.157234783Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\"" Jan 17 12:10:42.160369 containerd[1591]: time="2025-01-17T12:10:42.159157951Z" level=info msg="StartContainer for \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\"" Jan 17 12:10:42.198361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276357819.mount: Deactivated successfully. Jan 17 12:10:42.239986 containerd[1591]: time="2025-01-17T12:10:42.239954686Z" level=info msg="StartContainer for \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\" returns successfully" Jan 17 12:10:42.258215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561-rootfs.mount: Deactivated successfully. Jan 17 12:10:42.284001 containerd[1591]: time="2025-01-17T12:10:42.283915928Z" level=info msg="shim disconnected" id=f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561 namespace=k8s.io Jan 17 12:10:42.284001 containerd[1591]: time="2025-01-17T12:10:42.283994917Z" level=warning msg="cleaning up after shim disconnected" id=f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561 namespace=k8s.io Jan 17 12:10:42.284237 containerd[1591]: time="2025-01-17T12:10:42.284015006Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:43.106632 containerd[1591]: time="2025-01-17T12:10:43.106560959Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:10:43.139843 containerd[1591]: time="2025-01-17T12:10:43.139780657Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\"" Jan 17 12:10:43.144502 containerd[1591]: time="2025-01-17T12:10:43.140848281Z" level=info msg="StartContainer for \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\"" Jan 17 12:10:43.158207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3208425092.mount: Deactivated successfully. 
Jan 17 12:10:43.227112 systemd[1]: run-containerd-runc-k8s.io-ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70-runc.GocMgG.mount: Deactivated successfully. Jan 17 12:10:43.323805 containerd[1591]: time="2025-01-17T12:10:43.323688856Z" level=info msg="StartContainer for \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\" returns successfully" Jan 17 12:10:43.346677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70-rootfs.mount: Deactivated successfully. Jan 17 12:10:43.369446 containerd[1591]: time="2025-01-17T12:10:43.369204843Z" level=info msg="shim disconnected" id=ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70 namespace=k8s.io Jan 17 12:10:43.369446 containerd[1591]: time="2025-01-17T12:10:43.369324860Z" level=warning msg="cleaning up after shim disconnected" id=ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70 namespace=k8s.io Jan 17 12:10:43.369446 containerd[1591]: time="2025-01-17T12:10:43.369371438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:10:43.395287 containerd[1591]: time="2025-01-17T12:10:43.395153109Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:10:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:10:44.123508 containerd[1591]: time="2025-01-17T12:10:44.121298624Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:10:44.162162 containerd[1591]: time="2025-01-17T12:10:44.161710048Z" level=info msg="CreateContainer within sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\"" Jan 17 12:10:44.164176 containerd[1591]: time="2025-01-17T12:10:44.164110519Z" level=info msg="StartContainer for \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\"" Jan 17 12:10:44.198961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780094221.mount: Deactivated successfully. 
Jan 17 12:10:44.248689 containerd[1591]: time="2025-01-17T12:10:44.248634925Z" level=info msg="StartContainer for \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\" returns successfully" Jan 17 12:10:44.338056 kubelet[2831]: I0117 12:10:44.337807 2831 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:10:44.372870 kubelet[2831]: I0117 12:10:44.369249 2831 topology_manager.go:215] "Topology Admit Handler" podUID="97999bf8-c036-4cfd-8489-8a0e575f378b" podNamespace="kube-system" podName="coredns-76f75df574-9jbsr" Jan 17 12:10:44.374842 kubelet[2831]: I0117 12:10:44.374103 2831 topology_manager.go:215] "Topology Admit Handler" podUID="3cbae8b3-1ccf-41c6-8304-ea6d88d0300e" podNamespace="kube-system" podName="coredns-76f75df574-rwrnx" Jan 17 12:10:44.547365 kubelet[2831]: I0117 12:10:44.547298 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cbae8b3-1ccf-41c6-8304-ea6d88d0300e-config-volume\") pod \"coredns-76f75df574-rwrnx\" (UID: \"3cbae8b3-1ccf-41c6-8304-ea6d88d0300e\") " pod="kube-system/coredns-76f75df574-rwrnx" Jan 17 12:10:44.547804 kubelet[2831]: I0117 12:10:44.547756 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljj4\" (UniqueName: \"kubernetes.io/projected/97999bf8-c036-4cfd-8489-8a0e575f378b-kube-api-access-wljj4\") pod \"coredns-76f75df574-9jbsr\" (UID: \"97999bf8-c036-4cfd-8489-8a0e575f378b\") " pod="kube-system/coredns-76f75df574-9jbsr" Jan 17 12:10:44.548080 kubelet[2831]: I0117 12:10:44.548041 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgsv\" (UniqueName: \"kubernetes.io/projected/3cbae8b3-1ccf-41c6-8304-ea6d88d0300e-kube-api-access-4jgsv\") pod \"coredns-76f75df574-rwrnx\" (UID: \"3cbae8b3-1ccf-41c6-8304-ea6d88d0300e\") " pod="kube-system/coredns-76f75df574-rwrnx" Jan 17 12:10:44.548383 kubelet[2831]: I0117 12:10:44.548323 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97999bf8-c036-4cfd-8489-8a0e575f378b-config-volume\") pod \"coredns-76f75df574-9jbsr\" (UID: \"97999bf8-c036-4cfd-8489-8a0e575f378b\") " pod="kube-system/coredns-76f75df574-9jbsr" Jan 17 12:10:44.689499 containerd[1591]: time="2025-01-17T12:10:44.689300431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9jbsr,Uid:97999bf8-c036-4cfd-8489-8a0e575f378b,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:44.693729 containerd[1591]: time="2025-01-17T12:10:44.693424099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rwrnx,Uid:3cbae8b3-1ccf-41c6-8304-ea6d88d0300e,Namespace:kube-system,Attempt:0,}" Jan 17 12:10:45.153682 kubelet[2831]: I0117 12:10:45.153635 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-76pbp" podStartSLOduration=7.372122102 podStartE2EDuration="16.153493375s" podCreationTimestamp="2025-01-17 12:10:29 +0000 UTC" firstStartedPulling="2025-01-17 12:10:30.382480104 +0000 UTC m=+13.536779421" lastFinishedPulling="2025-01-17 12:10:39.163851367 +0000 UTC m=+22.318150694" observedRunningTime="2025-01-17 12:10:45.14909022 +0000 UTC m=+28.303389607" watchObservedRunningTime="2025-01-17 12:10:45.153493375 +0000 UTC m=+28.307792743" Jan 17 12:10:47.175295 containerd[1591]: 
time="2025-01-17T12:10:47.175254478Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:47.177362 containerd[1591]: time="2025-01-17T12:10:47.176764237Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217" Jan 17 12:10:47.178006 containerd[1591]: time="2025-01-17T12:10:47.177968008Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:10:47.182485 containerd[1591]: time="2025-01-17T12:10:47.182451572Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.015219085s" Jan 17 12:10:47.182641 containerd[1591]: time="2025-01-17T12:10:47.182533526Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 12:10:47.185166 containerd[1591]: time="2025-01-17T12:10:47.184796233Z" level=info msg="CreateContainer within sandbox \"3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:10:47.208808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334439540.mount: Deactivated successfully. 
Jan 17 12:10:47.212924 containerd[1591]: time="2025-01-17T12:10:47.212885955Z" level=info msg="CreateContainer within sandbox \"3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\"" Jan 17 12:10:47.214811 containerd[1591]: time="2025-01-17T12:10:47.213886992Z" level=info msg="StartContainer for \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\"" Jan 17 12:10:47.276160 containerd[1591]: time="2025-01-17T12:10:47.276113251Z" level=info msg="StartContainer for \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\" returns successfully" Jan 17 12:10:48.152389 kubelet[2831]: I0117 12:10:48.150182 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-k4xxk" podStartSLOduration=1.580154912 podStartE2EDuration="18.150093779s" podCreationTimestamp="2025-01-17 12:10:30 +0000 UTC" firstStartedPulling="2025-01-17 12:10:30.61299085 +0000 UTC m=+13.767290167" lastFinishedPulling="2025-01-17 12:10:47.182929717 +0000 UTC m=+30.337229034" observedRunningTime="2025-01-17 12:10:48.147351325 +0000 UTC m=+31.301650652" watchObservedRunningTime="2025-01-17 12:10:48.150093779 +0000 UTC m=+31.304393146" Jan 17 12:10:51.391691 systemd-networkd[1206]: cilium_host: Link UP Jan 17 12:10:51.392097 systemd-networkd[1206]: cilium_net: Link UP Jan 17 12:10:51.392105 systemd-networkd[1206]: cilium_net: Gained carrier Jan 17 12:10:51.394432 systemd-networkd[1206]: cilium_host: Gained carrier Jan 17 12:10:51.394886 systemd-networkd[1206]: cilium_host: Gained IPv6LL Jan 17 12:10:51.727669 systemd-networkd[1206]: cilium_vxlan: Link UP Jan 17 12:10:51.727685 systemd-networkd[1206]: cilium_vxlan: Gained carrier Jan 17 12:10:51.817518 systemd-networkd[1206]: cilium_net: Gained IPv6LL Jan 17 12:10:52.066398 kernel: NET: Registered PF_ALG protocol family Jan 17 12:10:52.910200 systemd-networkd[1206]: lxc_health: Link UP Jan 17 12:10:52.915026 systemd-networkd[1206]: lxc_health: Gained carrier Jan 17 12:10:53.201517 systemd-networkd[1206]: cilium_vxlan: Gained IPv6LL Jan 17 12:10:53.273507 systemd-networkd[1206]: lxc17e956c460eb: Link UP Jan 17 12:10:53.283099 kernel: eth0: renamed from tmp83f60 Jan 17 12:10:53.293087 systemd-networkd[1206]: lxc17e956c460eb: Gained carrier Jan 17 12:10:53.319201 systemd-networkd[1206]: lxc46b4353de4cb: Link UP Jan 17 12:10:53.330561 kernel: eth0: renamed from tmp1fc0f Jan 17 12:10:53.340188 systemd-networkd[1206]: lxc46b4353de4cb: Gained carrier Jan 17 12:10:54.291998 systemd-networkd[1206]: lxc_health: Gained IPv6LL Jan 17 12:10:54.929535 systemd-networkd[1206]: lxc17e956c460eb: Gained IPv6LL Jan 17 12:10:55.121662 systemd-networkd[1206]: lxc46b4353de4cb: Gained IPv6LL Jan 17 12:10:58.011033 containerd[1591]: time="2025-01-17T12:10:58.010952115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:58.011993 containerd[1591]: time="2025-01-17T12:10:58.011488849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:58.011993 containerd[1591]: time="2025-01-17T12:10:58.011523475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:58.012244 containerd[1591]: time="2025-01-17T12:10:58.012128608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:58.049538 containerd[1591]: time="2025-01-17T12:10:58.049188802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:10:58.049538 containerd[1591]: time="2025-01-17T12:10:58.049286105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:10:58.049538 containerd[1591]: time="2025-01-17T12:10:58.049307867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:58.049538 containerd[1591]: time="2025-01-17T12:10:58.049419026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:10:58.125249 containerd[1591]: time="2025-01-17T12:10:58.125205311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9jbsr,Uid:97999bf8-c036-4cfd-8489-8a0e575f378b,Namespace:kube-system,Attempt:0,} returns sandbox id \"83f60ffd1bf34a7b8051f1e875df29d4a8f429691b953fdbb21e1a3c75ca0438\"" Jan 17 12:10:58.128317 containerd[1591]: time="2025-01-17T12:10:58.128139056Z" level=info msg="CreateContainer within sandbox \"83f60ffd1bf34a7b8051f1e875df29d4a8f429691b953fdbb21e1a3c75ca0438\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:10:58.163489 containerd[1591]: time="2025-01-17T12:10:58.163447077Z" level=info msg="CreateContainer within sandbox \"83f60ffd1bf34a7b8051f1e875df29d4a8f429691b953fdbb21e1a3c75ca0438\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8cdcdf0800cbffc1e7229e47fff7d8e9c92f4417018ab79fa713357e376e8470\"" Jan 17 12:10:58.165144 containerd[1591]: time="2025-01-17T12:10:58.165114038Z" level=info msg="StartContainer for \"8cdcdf0800cbffc1e7229e47fff7d8e9c92f4417018ab79fa713357e376e8470\"" Jan 17 12:10:58.180757 containerd[1591]: time="2025-01-17T12:10:58.180659779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rwrnx,Uid:3cbae8b3-1ccf-41c6-8304-ea6d88d0300e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fc0f9d5957c3fd7a28afac3bc50a1dc3465d6d7a80f58f77093eaf3d3a0f932\"" Jan 17 12:10:58.185293 containerd[1591]: time="2025-01-17T12:10:58.185092326Z" level=info msg="CreateContainer within sandbox \"1fc0f9d5957c3fd7a28afac3bc50a1dc3465d6d7a80f58f77093eaf3d3a0f932\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:10:58.212863 containerd[1591]: time="2025-01-17T12:10:58.212609743Z" level=info msg="CreateContainer within sandbox \"1fc0f9d5957c3fd7a28afac3bc50a1dc3465d6d7a80f58f77093eaf3d3a0f932\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70d0d5f949e069fb0188dc7bcb7a6306db443f533913c5cf3aa8f1198ec6e322\"" Jan 17 12:10:58.214921 containerd[1591]: time="2025-01-17T12:10:58.214885335Z" level=info msg="StartContainer for \"70d0d5f949e069fb0188dc7bcb7a6306db443f533913c5cf3aa8f1198ec6e322\"" Jan 17 12:10:58.252124 containerd[1591]: time="2025-01-17T12:10:58.252061988Z" level=info msg="StartContainer for \"8cdcdf0800cbffc1e7229e47fff7d8e9c92f4417018ab79fa713357e376e8470\" returns successfully" Jan 17 12:10:58.291263 containerd[1591]: 
time="2025-01-17T12:10:58.291124915Z" level=info msg="StartContainer for \"70d0d5f949e069fb0188dc7bcb7a6306db443f533913c5cf3aa8f1198ec6e322\" returns successfully" Jan 17 12:10:59.027246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891220562.mount: Deactivated successfully. Jan 17 12:10:59.214723 kubelet[2831]: I0117 12:10:59.212046 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rwrnx" podStartSLOduration=29.211965404 podStartE2EDuration="29.211965404s" podCreationTimestamp="2025-01-17 12:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:59.210261094 +0000 UTC m=+42.364560471" watchObservedRunningTime="2025-01-17 12:10:59.211965404 +0000 UTC m=+42.366264822" Jan 17 12:10:59.271736 kubelet[2831]: I0117 12:10:59.269735 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9jbsr" podStartSLOduration=29.269653144 podStartE2EDuration="29.269653144s" podCreationTimestamp="2025-01-17 12:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:10:59.244554397 +0000 UTC m=+42.398853764" watchObservedRunningTime="2025-01-17 12:10:59.269653144 +0000 UTC m=+42.423952511" Jan 17 12:11:36.816940 systemd[1]: Started sshd@7-172.24.4.251:22-172.24.4.1:44870.service - OpenSSH per-connection server daemon (172.24.4.1:44870). Jan 17 12:11:37.960217 sshd[4197]: Accepted publickey for core from 172.24.4.1 port 44870 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:11:37.963182 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:37.971103 systemd-logind[1572]: New session 10 of user core. Jan 17 12:11:37.976896 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:11:38.841831 sshd[4197]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:38.848150 systemd[1]: sshd@7-172.24.4.251:22-172.24.4.1:44870.service: Deactivated successfully. Jan 17 12:11:38.852016 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:11:38.853432 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:11:38.854934 systemd-logind[1572]: Removed session 10. Jan 17 12:11:43.854208 systemd[1]: Started sshd@8-172.24.4.251:22-172.24.4.1:49446.service - OpenSSH per-connection server daemon (172.24.4.1:49446). Jan 17 12:11:45.240137 sshd[4212]: Accepted publickey for core from 172.24.4.1 port 49446 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:11:45.243008 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:45.253671 systemd-logind[1572]: New session 11 of user core. Jan 17 12:11:45.259027 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:11:45.844850 sshd[4212]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:45.852032 systemd[1]: sshd@8-172.24.4.251:22-172.24.4.1:49446.service: Deactivated successfully. Jan 17 12:11:45.861177 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:11:45.864263 systemd-logind[1572]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:11:45.866747 systemd-logind[1572]: Removed session 11. 
Jan 17 12:11:50.858994 systemd[1]: Started sshd@9-172.24.4.251:22-172.24.4.1:49456.service - OpenSSH per-connection server daemon (172.24.4.1:49456). Jan 17 12:11:52.073846 sshd[4227]: Accepted publickey for core from 172.24.4.1 port 49456 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:11:52.076735 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:52.087083 systemd-logind[1572]: New session 12 of user core. Jan 17 12:11:52.096912 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:11:52.880906 sshd[4227]: pam_unix(sshd:session): session closed for user core Jan 17 12:11:52.888629 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:11:52.890530 systemd[1]: sshd@9-172.24.4.251:22-172.24.4.1:49456.service: Deactivated successfully. Jan 17 12:11:52.900998 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:11:52.909554 systemd-logind[1572]: Removed session 12. Jan 17 12:11:57.898009 systemd[1]: Started sshd@10-172.24.4.251:22-172.24.4.1:36480.service - OpenSSH per-connection server daemon (172.24.4.1:36480). Jan 17 12:11:59.240237 sshd[4242]: Accepted publickey for core from 172.24.4.1 port 36480 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:11:59.243017 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:59.253592 systemd-logind[1572]: New session 13 of user core. Jan 17 12:11:59.262012 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:12:00.068789 sshd[4242]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:00.084511 systemd[1]: Started sshd@11-172.24.4.251:22-172.24.4.1:36488.service - OpenSSH per-connection server daemon (172.24.4.1:36488). Jan 17 12:12:00.087745 systemd[1]: sshd@10-172.24.4.251:22-172.24.4.1:36480.service: Deactivated successfully. Jan 17 12:12:00.093090 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:12:00.095998 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:12:00.105253 systemd-logind[1572]: Removed session 13. Jan 17 12:12:01.482187 sshd[4253]: Accepted publickey for core from 172.24.4.1 port 36488 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:01.484980 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:01.496562 systemd-logind[1572]: New session 14 of user core. Jan 17 12:12:01.502412 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:12:02.280843 sshd[4253]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:02.299302 systemd[1]: Started sshd@12-172.24.4.251:22-172.24.4.1:36500.service - OpenSSH per-connection server daemon (172.24.4.1:36500). Jan 17 12:12:02.302709 systemd[1]: sshd@11-172.24.4.251:22-172.24.4.1:36488.service: Deactivated successfully. Jan 17 12:12:02.311861 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:12:02.319080 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:12:02.328649 systemd-logind[1572]: Removed session 14. Jan 17 12:12:03.417313 sshd[4268]: Accepted publickey for core from 172.24.4.1 port 36500 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:03.423906 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:03.434023 systemd-logind[1572]: New session 15 of user core. 
Jan 17 12:12:03.439320 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:12:04.317406 sshd[4268]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:04.324220 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:12:04.325532 systemd[1]: sshd@12-172.24.4.251:22-172.24.4.1:36500.service: Deactivated successfully. Jan 17 12:12:04.335176 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:12:04.340089 systemd-logind[1572]: Removed session 15. Jan 17 12:12:09.328033 systemd[1]: Started sshd@13-172.24.4.251:22-172.24.4.1:48576.service - OpenSSH per-connection server daemon (172.24.4.1:48576). Jan 17 12:12:10.491710 sshd[4284]: Accepted publickey for core from 172.24.4.1 port 48576 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:10.494526 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:10.505711 systemd-logind[1572]: New session 16 of user core. Jan 17 12:12:10.514906 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:12:11.201250 sshd[4284]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:11.209972 systemd[1]: Started sshd@14-172.24.4.251:22-172.24.4.1:48582.service - OpenSSH per-connection server daemon (172.24.4.1:48582). Jan 17 12:12:11.216321 systemd[1]: sshd@13-172.24.4.251:22-172.24.4.1:48576.service: Deactivated successfully. Jan 17 12:12:11.223396 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:12:11.226043 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:12:11.229182 systemd-logind[1572]: Removed session 16. Jan 17 12:12:12.582633 sshd[4295]: Accepted publickey for core from 172.24.4.1 port 48582 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:12.585431 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:12.594788 systemd-logind[1572]: New session 17 of user core. Jan 17 12:12:12.606983 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:12:13.349845 sshd[4295]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:13.364291 systemd[1]: Started sshd@15-172.24.4.251:22-172.24.4.1:48588.service - OpenSSH per-connection server daemon (172.24.4.1:48588). Jan 17 12:12:13.370965 systemd[1]: sshd@14-172.24.4.251:22-172.24.4.1:48582.service: Deactivated successfully. Jan 17 12:12:13.381104 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:12:13.386864 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:12:13.389701 systemd-logind[1572]: Removed session 17. Jan 17 12:12:14.643071 sshd[4307]: Accepted publickey for core from 172.24.4.1 port 48588 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:14.646299 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:14.658508 systemd-logind[1572]: New session 18 of user core. Jan 17 12:12:14.666223 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:12:17.649078 sshd[4307]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:17.660893 systemd[1]: Started sshd@16-172.24.4.251:22-172.24.4.1:51808.service - OpenSSH per-connection server daemon (172.24.4.1:51808). Jan 17 12:12:17.662662 systemd[1]: sshd@15-172.24.4.251:22-172.24.4.1:48588.service: Deactivated successfully. 
Jan 17 12:12:17.672635 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:12:17.675129 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:12:17.679429 systemd-logind[1572]: Removed session 18. Jan 17 12:12:19.142893 sshd[4329]: Accepted publickey for core from 172.24.4.1 port 51808 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:19.145711 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:19.157718 systemd-logind[1572]: New session 19 of user core. Jan 17 12:12:19.170048 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:12:20.646782 sshd[4329]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:20.658948 systemd[1]: Started sshd@17-172.24.4.251:22-172.24.4.1:51814.service - OpenSSH per-connection server daemon (172.24.4.1:51814). Jan 17 12:12:20.663748 systemd[1]: sshd@16-172.24.4.251:22-172.24.4.1:51808.service: Deactivated successfully. Jan 17 12:12:20.677295 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:12:20.682270 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:12:20.685456 systemd-logind[1572]: Removed session 19. Jan 17 12:12:21.780551 sshd[4341]: Accepted publickey for core from 172.24.4.1 port 51814 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:21.783402 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:21.796608 systemd-logind[1572]: New session 20 of user core. Jan 17 12:12:21.802875 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:12:22.379241 sshd[4341]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:22.386275 systemd[1]: sshd@17-172.24.4.251:22-172.24.4.1:51814.service: Deactivated successfully. Jan 17 12:12:22.394291 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:12:22.397632 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:12:22.400035 systemd-logind[1572]: Removed session 20. Jan 17 12:12:27.390893 systemd[1]: Started sshd@18-172.24.4.251:22-172.24.4.1:50992.service - OpenSSH per-connection server daemon (172.24.4.1:50992). Jan 17 12:12:28.775632 sshd[4361]: Accepted publickey for core from 172.24.4.1 port 50992 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:28.778801 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:28.789685 systemd-logind[1572]: New session 21 of user core. Jan 17 12:12:28.796892 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:12:29.551206 sshd[4361]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:29.556260 systemd[1]: sshd@18-172.24.4.251:22-172.24.4.1:50992.service: Deactivated successfully. Jan 17 12:12:29.559299 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:12:29.560296 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:12:29.562051 systemd-logind[1572]: Removed session 21. Jan 17 12:12:34.565000 systemd[1]: Started sshd@19-172.24.4.251:22-172.24.4.1:35840.service - OpenSSH per-connection server daemon (172.24.4.1:35840). 
Jan 17 12:12:36.073857 sshd[4377]: Accepted publickey for core from 172.24.4.1 port 35840 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:36.076821 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:36.087848 systemd-logind[1572]: New session 22 of user core. Jan 17 12:12:36.095833 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:12:36.847739 sshd[4377]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:36.858880 systemd[1]: Started sshd@20-172.24.4.251:22-172.24.4.1:35852.service - OpenSSH per-connection server daemon (172.24.4.1:35852). Jan 17 12:12:36.862419 systemd[1]: sshd@19-172.24.4.251:22-172.24.4.1:35840.service: Deactivated successfully. Jan 17 12:12:36.867978 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:12:36.872606 systemd-logind[1572]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:12:36.877694 systemd-logind[1572]: Removed session 22. Jan 17 12:12:37.985423 sshd[4389]: Accepted publickey for core from 172.24.4.1 port 35852 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:37.988382 sshd[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:37.999702 systemd-logind[1572]: New session 23 of user core. Jan 17 12:12:38.004885 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:12:40.013510 systemd[1]: run-containerd-runc-k8s.io-759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57-runc.abgz2f.mount: Deactivated successfully. Jan 17 12:12:40.017419 containerd[1591]: time="2025-01-17T12:12:40.016483308Z" level=info msg="StopContainer for \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\" with timeout 30 (s)" Jan 17 12:12:40.020046 containerd[1591]: time="2025-01-17T12:12:40.019266207Z" level=info msg="Stop container \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\" with signal terminated" Jan 17 12:12:40.028521 containerd[1591]: time="2025-01-17T12:12:40.028108216Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:12:40.037444 containerd[1591]: time="2025-01-17T12:12:40.037285964Z" level=info msg="StopContainer for \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\" with timeout 2 (s)" Jan 17 12:12:40.038381 containerd[1591]: time="2025-01-17T12:12:40.037880860Z" level=info msg="Stop container \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\" with signal terminated" Jan 17 12:12:40.050958 systemd-networkd[1206]: lxc_health: Link DOWN Jan 17 12:12:40.050964 systemd-networkd[1206]: lxc_health: Lost carrier Jan 17 12:12:40.067645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d-rootfs.mount: Deactivated successfully. Jan 17 12:12:40.097695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57-rootfs.mount: Deactivated successfully. 
Jan 17 12:12:40.118938 containerd[1591]: time="2025-01-17T12:12:40.118797740Z" level=info msg="shim disconnected" id=dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d namespace=k8s.io Jan 17 12:12:40.119165 containerd[1591]: time="2025-01-17T12:12:40.119033613Z" level=warning msg="cleaning up after shim disconnected" id=dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d namespace=k8s.io Jan 17 12:12:40.119165 containerd[1591]: time="2025-01-17T12:12:40.119049432Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:12:40.119165 containerd[1591]: time="2025-01-17T12:12:40.118892538Z" level=info msg="shim disconnected" id=759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57 namespace=k8s.io Jan 17 12:12:40.119165 containerd[1591]: time="2025-01-17T12:12:40.119138409Z" level=warning msg="cleaning up after shim disconnected" id=759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57 namespace=k8s.io Jan 17 12:12:40.119165 containerd[1591]: time="2025-01-17T12:12:40.119149089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:12:40.151455 containerd[1591]: time="2025-01-17T12:12:40.151353362Z" level=info msg="StopContainer for \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\" returns successfully" Jan 17 12:12:40.152736 containerd[1591]: time="2025-01-17T12:12:40.151837791Z" level=info msg="StopPodSandbox for \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\"" Jan 17 12:12:40.152736 containerd[1591]: time="2025-01-17T12:12:40.151869510Z" level=info msg="Container to stop \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:12:40.152736 containerd[1591]: time="2025-01-17T12:12:40.151883146Z" level=info msg="Container to stop \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:12:40.152736 containerd[1591]: time="2025-01-17T12:12:40.151894026Z" level=info msg="Container to stop \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:12:40.152736 containerd[1591]: time="2025-01-17T12:12:40.151905818Z" level=info msg="Container to stop \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:12:40.152736 containerd[1591]: time="2025-01-17T12:12:40.151915867Z" level=info msg="Container to stop \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:12:40.154130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1-shm.mount: Deactivated successfully. 
Jan 17 12:12:40.154684 containerd[1591]: time="2025-01-17T12:12:40.154580854Z" level=info msg="StopContainer for \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\" returns successfully" Jan 17 12:12:40.156447 containerd[1591]: time="2025-01-17T12:12:40.155630272Z" level=info msg="StopPodSandbox for \"3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2\"" Jan 17 12:12:40.156447 containerd[1591]: time="2025-01-17T12:12:40.155661580Z" level=info msg="Container to stop \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:12:40.245094 containerd[1591]: time="2025-01-17T12:12:40.244947673Z" level=info msg="shim disconnected" id=baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1 namespace=k8s.io Jan 17 12:12:40.245094 containerd[1591]: time="2025-01-17T12:12:40.245017324Z" level=warning msg="cleaning up after shim disconnected" id=baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1 namespace=k8s.io Jan 17 12:12:40.245094 containerd[1591]: time="2025-01-17T12:12:40.245028605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:12:40.247559 containerd[1591]: time="2025-01-17T12:12:40.246308344Z" level=info msg="shim disconnected" id=3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2 namespace=k8s.io Jan 17 12:12:40.247559 containerd[1591]: time="2025-01-17T12:12:40.246384537Z" level=warning msg="cleaning up after shim disconnected" id=3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2 namespace=k8s.io Jan 17 12:12:40.247559 containerd[1591]: time="2025-01-17T12:12:40.246398393Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:12:40.260500 containerd[1591]: time="2025-01-17T12:12:40.260446575Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:12:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:12:40.262404 containerd[1591]: time="2025-01-17T12:12:40.262306622Z" level=info msg="TearDown network for sandbox \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" successfully" Jan 17 12:12:40.262404 containerd[1591]: time="2025-01-17T12:12:40.262377796Z" level=info msg="StopPodSandbox for \"baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1\" returns successfully" Jan 17 12:12:40.273715 containerd[1591]: time="2025-01-17T12:12:40.272923038Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:12:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:12:40.278653 containerd[1591]: time="2025-01-17T12:12:40.274355444Z" level=info msg="TearDown network for sandbox \"3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2\" successfully" Jan 17 12:12:40.278653 containerd[1591]: time="2025-01-17T12:12:40.274381372Z" level=info msg="StopPodSandbox for \"3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2\" returns successfully" Jan 17 12:12:40.388950 kubelet[2831]: I0117 12:12:40.388623 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4nv7\" (UniqueName: \"kubernetes.io/projected/37103937-d9e7-479c-862b-5bcc5bd85c19-kube-api-access-l4nv7\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: 
\"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.388950 kubelet[2831]: I0117 12:12:40.388748 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-hostproc\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.388950 kubelet[2831]: I0117 12:12:40.388807 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-host-proc-sys-kernel\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.388950 kubelet[2831]: I0117 12:12:40.388858 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-host-proc-sys-net\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391184 kubelet[2831]: I0117 12:12:40.389194 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-xtables-lock\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391184 kubelet[2831]: I0117 12:12:40.389266 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-bpf-maps\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391184 kubelet[2831]: I0117 12:12:40.389355 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37103937-d9e7-479c-862b-5bcc5bd85c19-clustermesh-secrets\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391184 kubelet[2831]: I0117 12:12:40.389413 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37103937-d9e7-479c-862b-5bcc5bd85c19-hubble-tls\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391184 kubelet[2831]: I0117 12:12:40.389460 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-lib-modules\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391184 kubelet[2831]: I0117 12:12:40.389514 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tlnh\" (UniqueName: \"kubernetes.io/projected/408f15eb-f99b-4824-abbd-82aeb2cb028a-kube-api-access-7tlnh\") pod \"408f15eb-f99b-4824-abbd-82aeb2cb028a\" (UID: \"408f15eb-f99b-4824-abbd-82aeb2cb028a\") " Jan 17 12:12:40.391620 kubelet[2831]: I0117 12:12:40.389570 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/408f15eb-f99b-4824-abbd-82aeb2cb028a-cilium-config-path\") pod \"408f15eb-f99b-4824-abbd-82aeb2cb028a\" (UID: 
\"408f15eb-f99b-4824-abbd-82aeb2cb028a\") " Jan 17 12:12:40.391620 kubelet[2831]: I0117 12:12:40.389625 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-run\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391620 kubelet[2831]: I0117 12:12:40.389679 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-config-path\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391620 kubelet[2831]: I0117 12:12:40.389727 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-etc-cni-netd\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391620 kubelet[2831]: I0117 12:12:40.389773 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-cgroup\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391620 kubelet[2831]: I0117 12:12:40.389821 2831 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cni-path\") pod \"37103937-d9e7-479c-862b-5bcc5bd85c19\" (UID: \"37103937-d9e7-479c-862b-5bcc5bd85c19\") " Jan 17 12:12:40.391972 kubelet[2831]: I0117 12:12:40.389916 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cni-path" (OuterVolumeSpecName: "cni-path") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.391972 kubelet[2831]: I0117 12:12:40.389988 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-hostproc" (OuterVolumeSpecName: "hostproc") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.391972 kubelet[2831]: I0117 12:12:40.390028 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.391972 kubelet[2831]: I0117 12:12:40.390069 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.391972 kubelet[2831]: I0117 12:12:40.390107 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.392868 kubelet[2831]: I0117 12:12:40.390144 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.392868 kubelet[2831]: I0117 12:12:40.392780 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37103937-d9e7-479c-862b-5bcc5bd85c19-kube-api-access-l4nv7" (OuterVolumeSpecName: "kube-api-access-l4nv7") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "kube-api-access-l4nv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:12:40.396537 kubelet[2831]: I0117 12:12:40.396452 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.397793 kubelet[2831]: I0117 12:12:40.397696 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.398050 kubelet[2831]: I0117 12:12:40.397794 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.398050 kubelet[2831]: I0117 12:12:40.397846 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:12:40.403791 kubelet[2831]: I0117 12:12:40.403458 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37103937-d9e7-479c-862b-5bcc5bd85c19-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:12:40.405227 kubelet[2831]: I0117 12:12:40.404921 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408f15eb-f99b-4824-abbd-82aeb2cb028a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "408f15eb-f99b-4824-abbd-82aeb2cb028a" (UID: "408f15eb-f99b-4824-abbd-82aeb2cb028a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:12:40.405227 kubelet[2831]: I0117 12:12:40.405097 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408f15eb-f99b-4824-abbd-82aeb2cb028a-kube-api-access-7tlnh" (OuterVolumeSpecName: "kube-api-access-7tlnh") pod "408f15eb-f99b-4824-abbd-82aeb2cb028a" (UID: "408f15eb-f99b-4824-abbd-82aeb2cb028a"). InnerVolumeSpecName "kube-api-access-7tlnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:12:40.407690 kubelet[2831]: I0117 12:12:40.407590 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37103937-d9e7-479c-862b-5bcc5bd85c19-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:12:40.411522 kubelet[2831]: I0117 12:12:40.411433 2831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "37103937-d9e7-479c-862b-5bcc5bd85c19" (UID: "37103937-d9e7-479c-862b-5bcc5bd85c19"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:12:40.490619 kubelet[2831]: I0117 12:12:40.490545 2831 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-hostproc\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.490848 kubelet[2831]: I0117 12:12:40.490646 2831 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l4nv7\" (UniqueName: \"kubernetes.io/projected/37103937-d9e7-479c-862b-5bcc5bd85c19-kube-api-access-l4nv7\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.490848 kubelet[2831]: I0117 12:12:40.490683 2831 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-host-proc-sys-kernel\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.490848 kubelet[2831]: I0117 12:12:40.490718 2831 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-host-proc-sys-net\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.490848 kubelet[2831]: I0117 12:12:40.490748 2831 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-xtables-lock\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.490848 kubelet[2831]: I0117 12:12:40.490779 2831 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37103937-d9e7-479c-862b-5bcc5bd85c19-clustermesh-secrets\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.490848 kubelet[2831]: I0117 12:12:40.490808 2831 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-bpf-maps\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.490848 kubelet[2831]: I0117 12:12:40.490836 2831 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37103937-d9e7-479c-862b-5bcc5bd85c19-hubble-tls\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.491260 kubelet[2831]: I0117 12:12:40.490867 2831 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-lib-modules\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.491260 kubelet[2831]: I0117 12:12:40.490898 2831 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7tlnh\" (UniqueName: \"kubernetes.io/projected/408f15eb-f99b-4824-abbd-82aeb2cb028a-kube-api-access-7tlnh\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.491260 kubelet[2831]: I0117 12:12:40.490930 2831 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/408f15eb-f99b-4824-abbd-82aeb2cb028a-cilium-config-path\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.491260 kubelet[2831]: I0117 12:12:40.490958 2831 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-run\") 
on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.491260 kubelet[2831]: I0117 12:12:40.490989 2831 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-config-path\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.491260 kubelet[2831]: I0117 12:12:40.491018 2831 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-etc-cni-netd\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.491260 kubelet[2831]: I0117 12:12:40.491048 2831 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cilium-cgroup\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.491757 kubelet[2831]: I0117 12:12:40.491076 2831 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37103937-d9e7-479c-862b-5bcc5bd85c19-cni-path\") on node \"ci-4081-3-0-0-25eb0cd39e.novalocal\" DevicePath \"\"" Jan 17 12:12:40.511402 kubelet[2831]: I0117 12:12:40.511000 2831 scope.go:117] "RemoveContainer" containerID="759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57" Jan 17 12:12:40.522434 containerd[1591]: time="2025-01-17T12:12:40.521723533Z" level=info msg="RemoveContainer for \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\"" Jan 17 12:12:40.619998 containerd[1591]: time="2025-01-17T12:12:40.619891750Z" level=info msg="RemoveContainer for \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\" returns successfully" Jan 17 12:12:40.620984 kubelet[2831]: I0117 12:12:40.620607 2831 scope.go:117] "RemoveContainer" containerID="ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70" Jan 17 12:12:40.627618 containerd[1591]: time="2025-01-17T12:12:40.626996753Z" level=info msg="RemoveContainer for \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\"" Jan 17 12:12:40.638550 containerd[1591]: time="2025-01-17T12:12:40.638433566Z" level=info msg="RemoveContainer for \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\" returns successfully" Jan 17 12:12:40.639036 kubelet[2831]: I0117 12:12:40.638900 2831 scope.go:117] "RemoveContainer" containerID="f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561" Jan 17 12:12:40.645363 containerd[1591]: time="2025-01-17T12:12:40.645271378Z" level=info msg="RemoveContainer for \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\"" Jan 17 12:12:40.657818 containerd[1591]: time="2025-01-17T12:12:40.657705232Z" level=info msg="RemoveContainer for \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\" returns successfully" Jan 17 12:12:40.658201 kubelet[2831]: I0117 12:12:40.658122 2831 scope.go:117] "RemoveContainer" containerID="48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8" Jan 17 12:12:40.661782 containerd[1591]: time="2025-01-17T12:12:40.661209301Z" level=info msg="RemoveContainer for \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\"" Jan 17 12:12:40.667071 containerd[1591]: time="2025-01-17T12:12:40.666916894Z" level=info msg="RemoveContainer for \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\" returns successfully" Jan 17 12:12:40.667376 kubelet[2831]: I0117 
12:12:40.667281 2831 scope.go:117] "RemoveContainer" containerID="107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9" Jan 17 12:12:40.669702 containerd[1591]: time="2025-01-17T12:12:40.669645449Z" level=info msg="RemoveContainer for \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\"" Jan 17 12:12:40.675933 containerd[1591]: time="2025-01-17T12:12:40.675853780Z" level=info msg="RemoveContainer for \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\" returns successfully" Jan 17 12:12:40.676312 kubelet[2831]: I0117 12:12:40.676221 2831 scope.go:117] "RemoveContainer" containerID="759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57" Jan 17 12:12:40.677137 containerd[1591]: time="2025-01-17T12:12:40.676967118Z" level=error msg="ContainerStatus for \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\": not found" Jan 17 12:12:40.677639 kubelet[2831]: E0117 12:12:40.677577 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\": not found" containerID="759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57" Jan 17 12:12:40.677819 kubelet[2831]: I0117 12:12:40.677762 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57"} err="failed to get container status \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\": rpc error: code = NotFound desc = an error occurred when try to find container \"759a19eca48dcef7f2f108101c23d3cc6159e5698eedd91d4b35bde75da3ce57\": not found" Jan 17 12:12:40.677819 kubelet[2831]: I0117 12:12:40.677807 2831 scope.go:117] "RemoveContainer" containerID="ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70" Jan 17 12:12:40.678230 containerd[1591]: time="2025-01-17T12:12:40.678124468Z" level=error msg="ContainerStatus for \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\": not found" Jan 17 12:12:40.678556 kubelet[2831]: E0117 12:12:40.678435 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\": not found" containerID="ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70" Jan 17 12:12:40.678556 kubelet[2831]: I0117 12:12:40.678529 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70"} err="failed to get container status \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac0795861946825d3083aa90660de6022703091429fae277663956a31b198f70\": not found" Jan 17 12:12:40.678556 kubelet[2831]: I0117 12:12:40.678559 2831 scope.go:117] "RemoveContainer" containerID="f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561" Jan 17 12:12:40.679486 
containerd[1591]: time="2025-01-17T12:12:40.679153197Z" level=error msg="ContainerStatus for \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\": not found" Jan 17 12:12:40.679631 kubelet[2831]: E0117 12:12:40.679503 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\": not found" containerID="f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561" Jan 17 12:12:40.679631 kubelet[2831]: I0117 12:12:40.679578 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561"} err="failed to get container status \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\": rpc error: code = NotFound desc = an error occurred when try to find container \"f609912988e3ad4102f68fdb3322cc330c6730f57102630941c1b70850f91561\": not found" Jan 17 12:12:40.679631 kubelet[2831]: I0117 12:12:40.679606 2831 scope.go:117] "RemoveContainer" containerID="48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8" Jan 17 12:12:40.680242 containerd[1591]: time="2025-01-17T12:12:40.680094131Z" level=error msg="ContainerStatus for \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\": not found" Jan 17 12:12:40.680709 kubelet[2831]: E0117 12:12:40.680633 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\": not found" containerID="48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8" Jan 17 12:12:40.680820 kubelet[2831]: I0117 12:12:40.680753 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8"} err="failed to get container status \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"48a2267bc7cc346bd4c24471417746aa150178bab0145dbc9566ba5e0aded6f8\": not found" Jan 17 12:12:40.680891 kubelet[2831]: I0117 12:12:40.680819 2831 scope.go:117] "RemoveContainer" containerID="107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9" Jan 17 12:12:40.681347 containerd[1591]: time="2025-01-17T12:12:40.681233017Z" level=error msg="ContainerStatus for \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\": not found" Jan 17 12:12:40.681884 kubelet[2831]: E0117 12:12:40.681643 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\": not found" containerID="107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9" Jan 17 
12:12:40.681884 kubelet[2831]: I0117 12:12:40.681724 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9"} err="failed to get container status \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\": rpc error: code = NotFound desc = an error occurred when try to find container \"107bd5dc4b6b70ed06228b27890628a903b4b1783226eaf332d74b3eca479ae9\": not found" Jan 17 12:12:40.681884 kubelet[2831]: I0117 12:12:40.681747 2831 scope.go:117] "RemoveContainer" containerID="dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d" Jan 17 12:12:40.684694 containerd[1591]: time="2025-01-17T12:12:40.684596193Z" level=info msg="RemoveContainer for \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\"" Jan 17 12:12:40.692231 containerd[1591]: time="2025-01-17T12:12:40.692101996Z" level=info msg="RemoveContainer for \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\" returns successfully" Jan 17 12:12:40.693153 containerd[1591]: time="2025-01-17T12:12:40.692799594Z" level=error msg="ContainerStatus for \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\": not found" Jan 17 12:12:40.693202 kubelet[2831]: I0117 12:12:40.692425 2831 scope.go:117] "RemoveContainer" containerID="dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d" Jan 17 12:12:40.693202 kubelet[2831]: E0117 12:12:40.692966 2831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\": not found" containerID="dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d" Jan 17 12:12:40.693202 kubelet[2831]: I0117 12:12:40.693000 2831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d"} err="failed to get container status \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd6e92f2dd2fc5532e78cbe629b9ef41fbde2d401b86708dafde8fab878bc41d\": not found" Jan 17 12:12:40.962270 kubelet[2831]: I0117 12:12:40.961902 2831 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="37103937-d9e7-479c-862b-5bcc5bd85c19" path="/var/lib/kubelet/pods/37103937-d9e7-479c-862b-5bcc5bd85c19/volumes" Jan 17 12:12:40.963603 kubelet[2831]: I0117 12:12:40.963454 2831 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="408f15eb-f99b-4824-abbd-82aeb2cb028a" path="/var/lib/kubelet/pods/408f15eb-f99b-4824-abbd-82aeb2cb028a/volumes" Jan 17 12:12:41.004816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2-rootfs.mount: Deactivated successfully. Jan 17 12:12:41.005139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3008990a528c93aa6ea26be0e12bfcc54a6ceb84b5a67e422670122dcf8c48f2-shm.mount: Deactivated successfully. Jan 17 12:12:41.005835 systemd[1]: var-lib-kubelet-pods-408f15eb\x2df99b\x2d4824\x2dabbd\x2d82aeb2cb028a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7tlnh.mount: Deactivated successfully. 
Jan 17 12:12:41.006326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baf6c2422aab4a74ade3e3001b541c1069b73f52503ac383b6ea335047421fc1-rootfs.mount: Deactivated successfully. Jan 17 12:12:41.006856 systemd[1]: var-lib-kubelet-pods-37103937\x2dd9e7\x2d479c\x2d862b\x2d5bcc5bd85c19-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4nv7.mount: Deactivated successfully. Jan 17 12:12:41.007398 systemd[1]: var-lib-kubelet-pods-37103937\x2dd9e7\x2d479c\x2d862b\x2d5bcc5bd85c19-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 12:12:41.007666 systemd[1]: var-lib-kubelet-pods-37103937\x2dd9e7\x2d479c\x2d862b\x2d5bcc5bd85c19-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 12:12:42.124992 kubelet[2831]: E0117 12:12:42.124495 2831 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:12:42.125825 sshd[4389]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:42.135310 systemd[1]: Started sshd@21-172.24.4.251:22-172.24.4.1:35860.service - OpenSSH per-connection server daemon (172.24.4.1:35860). Jan 17 12:12:42.141468 systemd[1]: sshd@20-172.24.4.251:22-172.24.4.1:35852.service: Deactivated successfully. Jan 17 12:12:42.154288 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:12:42.159574 systemd-logind[1572]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:12:42.164085 systemd-logind[1572]: Removed session 23. Jan 17 12:12:43.777284 sshd[4556]: Accepted publickey for core from 172.24.4.1 port 35860 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:43.780232 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:43.793751 systemd-logind[1572]: New session 24 of user core. Jan 17 12:12:43.803885 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 17 12:12:45.461186 kubelet[2831]: I0117 12:12:45.461122 2831 topology_manager.go:215] "Topology Admit Handler" podUID="785e9db9-bbc6-47a9-ab01-0f8988407c7c" podNamespace="kube-system" podName="cilium-7p9fv" Jan 17 12:12:45.461186 kubelet[2831]: E0117 12:12:45.461183 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37103937-d9e7-479c-862b-5bcc5bd85c19" containerName="mount-bpf-fs" Jan 17 12:12:45.461186 kubelet[2831]: E0117 12:12:45.461194 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37103937-d9e7-479c-862b-5bcc5bd85c19" containerName="clean-cilium-state" Jan 17 12:12:45.461186 kubelet[2831]: E0117 12:12:45.461203 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="408f15eb-f99b-4824-abbd-82aeb2cb028a" containerName="cilium-operator" Jan 17 12:12:45.461186 kubelet[2831]: E0117 12:12:45.461212 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37103937-d9e7-479c-862b-5bcc5bd85c19" containerName="mount-cgroup" Jan 17 12:12:45.461186 kubelet[2831]: E0117 12:12:45.461221 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37103937-d9e7-479c-862b-5bcc5bd85c19" containerName="apply-sysctl-overwrites" Jan 17 12:12:45.461186 kubelet[2831]: E0117 12:12:45.461229 2831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37103937-d9e7-479c-862b-5bcc5bd85c19" containerName="cilium-agent" Jan 17 12:12:45.466801 kubelet[2831]: I0117 12:12:45.461253 2831 memory_manager.go:354] "RemoveStaleState removing state" podUID="37103937-d9e7-479c-862b-5bcc5bd85c19" containerName="cilium-agent" Jan 17 12:12:45.466801 kubelet[2831]: I0117 12:12:45.461262 2831 memory_manager.go:354] "RemoveStaleState removing state" podUID="408f15eb-f99b-4824-abbd-82aeb2cb028a" containerName="cilium-operator" Jan 17 12:12:45.524535 kubelet[2831]: I0117 12:12:45.524487 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/785e9db9-bbc6-47a9-ab01-0f8988407c7c-cilium-ipsec-secrets\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524535 kubelet[2831]: I0117 12:12:45.524538 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-host-proc-sys-net\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524703 kubelet[2831]: I0117 12:12:45.524562 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/785e9db9-bbc6-47a9-ab01-0f8988407c7c-hubble-tls\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524703 kubelet[2831]: I0117 12:12:45.524586 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvgh8\" (UniqueName: \"kubernetes.io/projected/785e9db9-bbc6-47a9-ab01-0f8988407c7c-kube-api-access-lvgh8\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524703 kubelet[2831]: I0117 12:12:45.524610 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-cilium-run\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524703 kubelet[2831]: I0117 12:12:45.524632 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-cni-path\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524703 kubelet[2831]: I0117 12:12:45.524657 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/785e9db9-bbc6-47a9-ab01-0f8988407c7c-clustermesh-secrets\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524703 kubelet[2831]: I0117 12:12:45.524679 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-xtables-lock\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524870 kubelet[2831]: I0117 12:12:45.524836 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-host-proc-sys-kernel\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524870 kubelet[2831]: I0117 12:12:45.524866 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-hostproc\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.524946 kubelet[2831]: I0117 12:12:45.524924 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-lib-modules\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.525031 kubelet[2831]: I0117 12:12:45.524956 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/785e9db9-bbc6-47a9-ab01-0f8988407c7c-cilium-config-path\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.525075 kubelet[2831]: I0117 12:12:45.525035 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-bpf-maps\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.525075 kubelet[2831]: I0117 12:12:45.525058 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-cilium-cgroup\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 
12:12:45.525134 kubelet[2831]: I0117 12:12:45.525079 2831 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/785e9db9-bbc6-47a9-ab01-0f8988407c7c-etc-cni-netd\") pod \"cilium-7p9fv\" (UID: \"785e9db9-bbc6-47a9-ab01-0f8988407c7c\") " pod="kube-system/cilium-7p9fv" Jan 17 12:12:45.710643 sshd[4556]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:45.716625 systemd[1]: sshd@21-172.24.4.251:22-172.24.4.1:35860.service: Deactivated successfully. Jan 17 12:12:45.719832 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:12:45.723883 systemd-logind[1572]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:12:45.728976 systemd[1]: Started sshd@22-172.24.4.251:22-172.24.4.1:49112.service - OpenSSH per-connection server daemon (172.24.4.1:49112). Jan 17 12:12:45.733179 systemd-logind[1572]: Removed session 24. Jan 17 12:12:45.771980 containerd[1591]: time="2025-01-17T12:12:45.771935624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7p9fv,Uid:785e9db9-bbc6-47a9-ab01-0f8988407c7c,Namespace:kube-system,Attempt:0,}" Jan 17 12:12:45.944969 containerd[1591]: time="2025-01-17T12:12:45.944718837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:12:45.945184 containerd[1591]: time="2025-01-17T12:12:45.945025833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:12:45.945250 containerd[1591]: time="2025-01-17T12:12:45.945150717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:45.945943 containerd[1591]: time="2025-01-17T12:12:45.945676744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:12:46.011661 containerd[1591]: time="2025-01-17T12:12:46.011514361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7p9fv,Uid:785e9db9-bbc6-47a9-ab01-0f8988407c7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\"" Jan 17 12:12:46.016085 containerd[1591]: time="2025-01-17T12:12:46.015981238Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:12:46.033995 containerd[1591]: time="2025-01-17T12:12:46.033897859Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c24c998b48c60b95418ec9d44d409c44bdd37bd9a8aa3ce521983dd0e9fe5f1b\"" Jan 17 12:12:46.034936 containerd[1591]: time="2025-01-17T12:12:46.034737062Z" level=info msg="StartContainer for \"c24c998b48c60b95418ec9d44d409c44bdd37bd9a8aa3ce521983dd0e9fe5f1b\"" Jan 17 12:12:46.083279 containerd[1591]: time="2025-01-17T12:12:46.083236556Z" level=info msg="StartContainer for \"c24c998b48c60b95418ec9d44d409c44bdd37bd9a8aa3ce521983dd0e9fe5f1b\" returns successfully" Jan 17 12:12:46.131847 containerd[1591]: time="2025-01-17T12:12:46.131732144Z" level=info msg="shim disconnected" id=c24c998b48c60b95418ec9d44d409c44bdd37bd9a8aa3ce521983dd0e9fe5f1b namespace=k8s.io Jan 17 12:12:46.131847 containerd[1591]: time="2025-01-17T12:12:46.131818225Z" level=warning msg="cleaning up after shim disconnected" id=c24c998b48c60b95418ec9d44d409c44bdd37bd9a8aa3ce521983dd0e9fe5f1b namespace=k8s.io Jan 17 12:12:46.131847 containerd[1591]: time="2025-01-17T12:12:46.131830628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:12:46.555722 containerd[1591]: time="2025-01-17T12:12:46.553984924Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:12:46.586772 containerd[1591]: time="2025-01-17T12:12:46.586683736Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7cd2bae13be13b4897f058c1ac6e6e0c42e79c7dd4a04833813aca749c378e1a\"" Jan 17 12:12:46.591743 containerd[1591]: time="2025-01-17T12:12:46.591657143Z" level=info msg="StartContainer for \"7cd2bae13be13b4897f058c1ac6e6e0c42e79c7dd4a04833813aca749c378e1a\"" Jan 17 12:12:46.727571 containerd[1591]: time="2025-01-17T12:12:46.727513584Z" level=info msg="StartContainer for \"7cd2bae13be13b4897f058c1ac6e6e0c42e79c7dd4a04833813aca749c378e1a\" returns successfully" Jan 17 12:12:46.750097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cd2bae13be13b4897f058c1ac6e6e0c42e79c7dd4a04833813aca749c378e1a-rootfs.mount: Deactivated successfully. 
Jan 17 12:12:46.762748 containerd[1591]: time="2025-01-17T12:12:46.762682249Z" level=info msg="shim disconnected" id=7cd2bae13be13b4897f058c1ac6e6e0c42e79c7dd4a04833813aca749c378e1a namespace=k8s.io Jan 17 12:12:46.762748 containerd[1591]: time="2025-01-17T12:12:46.762731982Z" level=warning msg="cleaning up after shim disconnected" id=7cd2bae13be13b4897f058c1ac6e6e0c42e79c7dd4a04833813aca749c378e1a namespace=k8s.io Jan 17 12:12:46.762748 containerd[1591]: time="2025-01-17T12:12:46.762742722Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:12:46.902457 sshd[4575]: Accepted publickey for core from 172.24.4.1 port 49112 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:46.903454 sshd[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:46.913440 systemd-logind[1572]: New session 25 of user core. Jan 17 12:12:46.919865 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:12:47.126200 kubelet[2831]: E0117 12:12:47.126112 2831 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:12:47.565387 containerd[1591]: time="2025-01-17T12:12:47.564164254Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:12:47.611163 containerd[1591]: time="2025-01-17T12:12:47.610868455Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f553b1ba832f83522c75ad173cd94abf5f47cd184aad0b61e9760731879ddab4\"" Jan 17 12:12:47.612907 containerd[1591]: time="2025-01-17T12:12:47.612839792Z" level=info msg="StartContainer for \"f553b1ba832f83522c75ad173cd94abf5f47cd184aad0b61e9760731879ddab4\"" Jan 17 12:12:47.704971 containerd[1591]: time="2025-01-17T12:12:47.704732881Z" level=info msg="StartContainer for \"f553b1ba832f83522c75ad173cd94abf5f47cd184aad0b61e9760731879ddab4\" returns successfully" Jan 17 12:12:47.709537 sshd[4575]: pam_unix(sshd:session): session closed for user core Jan 17 12:12:47.714910 systemd[1]: Started sshd@23-172.24.4.251:22-172.24.4.1:49128.service - OpenSSH per-connection server daemon (172.24.4.1:49128). Jan 17 12:12:47.716389 systemd[1]: sshd@22-172.24.4.251:22-172.24.4.1:49112.service: Deactivated successfully. Jan 17 12:12:47.724844 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:12:47.727668 systemd-logind[1572]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:12:47.731436 systemd-logind[1572]: Removed session 25. Jan 17 12:12:47.741080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f553b1ba832f83522c75ad173cd94abf5f47cd184aad0b61e9760731879ddab4-rootfs.mount: Deactivated successfully. 
Jan 17 12:12:47.746363 containerd[1591]: time="2025-01-17T12:12:47.746283816Z" level=info msg="shim disconnected" id=f553b1ba832f83522c75ad173cd94abf5f47cd184aad0b61e9760731879ddab4 namespace=k8s.io Jan 17 12:12:47.746587 containerd[1591]: time="2025-01-17T12:12:47.746414151Z" level=warning msg="cleaning up after shim disconnected" id=f553b1ba832f83522c75ad173cd94abf5f47cd184aad0b61e9760731879ddab4 namespace=k8s.io Jan 17 12:12:47.746587 containerd[1591]: time="2025-01-17T12:12:47.746427927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:12:48.570031 containerd[1591]: time="2025-01-17T12:12:48.569963907Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:12:48.625982 containerd[1591]: time="2025-01-17T12:12:48.625789357Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"11a0293107655a0f1414044ec1b0ff97089eee260b8bb62fee21aa97fe4cf2f2\"" Jan 17 12:12:48.628426 containerd[1591]: time="2025-01-17T12:12:48.627538287Z" level=info msg="StartContainer for \"11a0293107655a0f1414044ec1b0ff97089eee260b8bb62fee21aa97fe4cf2f2\"" Jan 17 12:12:48.701768 containerd[1591]: time="2025-01-17T12:12:48.701725344Z" level=info msg="StartContainer for \"11a0293107655a0f1414044ec1b0ff97089eee260b8bb62fee21aa97fe4cf2f2\" returns successfully" Jan 17 12:12:48.719072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11a0293107655a0f1414044ec1b0ff97089eee260b8bb62fee21aa97fe4cf2f2-rootfs.mount: Deactivated successfully. Jan 17 12:12:48.726946 containerd[1591]: time="2025-01-17T12:12:48.726884739Z" level=info msg="shim disconnected" id=11a0293107655a0f1414044ec1b0ff97089eee260b8bb62fee21aa97fe4cf2f2 namespace=k8s.io Jan 17 12:12:48.726946 containerd[1591]: time="2025-01-17T12:12:48.726941506Z" level=warning msg="cleaning up after shim disconnected" id=11a0293107655a0f1414044ec1b0ff97089eee260b8bb62fee21aa97fe4cf2f2 namespace=k8s.io Jan 17 12:12:48.727106 containerd[1591]: time="2025-01-17T12:12:48.726953839Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:12:48.738407 containerd[1591]: time="2025-01-17T12:12:48.738283618Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:12:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:12:48.985940 sshd[4780]: Accepted publickey for core from 172.24.4.1 port 49128 ssh2: RSA SHA256:OP55ABwl5jw3oGG5Uyw2r2MEDLVxxe9kkDuhos2yT+E Jan 17 12:12:48.989266 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:12:49.000130 systemd-logind[1572]: New session 26 of user core. Jan 17 12:12:49.005918 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 17 12:12:49.584582 containerd[1591]: time="2025-01-17T12:12:49.581753440Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:12:49.631746 containerd[1591]: time="2025-01-17T12:12:49.631605849Z" level=info msg="CreateContainer within sandbox \"2e0eeec0d3532c16c59e8741539df8b51cb49dc1a334a61a4b117c73cee92b40\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9008d2573d4aaf39b353b63b05413fa3fc2ef785f422cd946794b00f5cd6684a\"" Jan 17 12:12:49.633198 containerd[1591]: time="2025-01-17T12:12:49.632285113Z" level=info msg="StartContainer for \"9008d2573d4aaf39b353b63b05413fa3fc2ef785f422cd946794b00f5cd6684a\"" Jan 17 12:12:49.701568 containerd[1591]: time="2025-01-17T12:12:49.701520056Z" level=info msg="StartContainer for \"9008d2573d4aaf39b353b63b05413fa3fc2ef785f422cd946794b00f5cd6684a\" returns successfully" Jan 17 12:12:49.858615 kubelet[2831]: I0117 12:12:49.858510 2831 setters.go:568] "Node became not ready" node="ci-4081-3-0-0-25eb0cd39e.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:12:49Z","lastTransitionTime":"2025-01-17T12:12:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 12:12:50.040410 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:12:50.086448 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jan 17 12:12:50.619067 kubelet[2831]: I0117 12:12:50.618952 2831 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7p9fv" podStartSLOduration=5.618871773 podStartE2EDuration="5.618871773s" podCreationTimestamp="2025-01-17 12:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:12:50.618271878 +0000 UTC m=+153.772571265" watchObservedRunningTime="2025-01-17 12:12:50.618871773 +0000 UTC m=+153.773171140" Jan 17 12:12:53.244724 systemd-networkd[1206]: lxc_health: Link UP Jan 17 12:12:53.249558 systemd-networkd[1206]: lxc_health: Gained carrier Jan 17 12:12:54.611553 systemd-networkd[1206]: lxc_health: Gained IPv6LL Jan 17 12:12:58.352049 systemd[1]: run-containerd-runc-k8s.io-9008d2573d4aaf39b353b63b05413fa3fc2ef785f422cd946794b00f5cd6684a-runc.h3acEh.mount: Deactivated successfully. Jan 17 12:12:58.409400 kubelet[2831]: E0117 12:12:58.409288 2831 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53726->127.0.0.1:36863: write tcp 127.0.0.1:53726->127.0.0.1:36863: write: broken pipe Jan 17 12:13:00.978046 sshd[4780]: pam_unix(sshd:session): session closed for user core Jan 17 12:13:00.985825 systemd[1]: sshd@23-172.24.4.251:22-172.24.4.1:49128.service: Deactivated successfully. Jan 17 12:13:00.992793 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:13:00.995561 systemd-logind[1572]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:13:00.998147 systemd-logind[1572]: Removed session 26.